Re: [openstack-dev] [oslo] logging around oslo lockutils

2014-09-25 Thread Davanum Srinivas



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Yahoo-eng-team] [Bug 1368942] Re: lxc test failure under osx

2014-09-25 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368942

Title:
  lxc test failure under osx

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  Here's the stack trace from the following test:

nova.tests.virt.libvirt.test_driver.LibvirtConnTestCase.test_create_propagates_exceptions

  Traceback (most recent call last):
    File "nova/tests/virt/libvirt/test_driver.py", line 9231, in test_create_propagates_exceptions
      instance, None)
    File "/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 420, in assertRaises
      self.assertThat(our_callable, matcher)
    File "/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 431, in assertThat
      mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
    File "/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 481, in _matchHelper
      mismatch = matcher.match(matchee)
    File "/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 108, in match
      mismatch = self.exception_matcher.match(exc_info)
    File "/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_higherorder.py", line 62, in match
      mismatch = matcher.match(matchee)
    File "/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 412, in match
      reraise(*matchee)
    File "/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 101, in match
      result = matchee()
    File "/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 955, in __call__
      return self._callable_object(*self._args, **self._kwargs)
    File "nova/virt/libvirt/driver.py", line 4229, in _create_domain_and_network
      disk_info):
    File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 17, in __enter__
      return self.gen.next()
    File "nova/virt/libvirt/driver.py", line 4125, in _lxc_disk_handler
      self._create_domain_setup_lxc(instance, block_device_info, disk_info)
    File "nova/virt/libvirt/driver.py", line 4077, in _create_domain_setup_lxc
      use_cow=use_cow)
    File "nova/virt/disk/api.py", line 385, in setup_container
      img = _DiskImage(image=image, use_cow=use_cow, mount_dir=container_dir)
    File "nova/virt/disk/api.py", line 252, in __init__
      device = self._device_for_path(mount_dir)
    File "nova/virt/disk/api.py", line 260, in _device_for_path
      with open('/proc/mounts', 'r') as ifp:
  IOError: [Errno 2] No such file or directory: '/proc/mounts'

  Ran 1 tests in 1.172s (-30.247s)
  FAILED (id=12, failures=1 (+1))
  error: testr failed (1)
  ERROR: InvocationError: '/Users/dims/openstack/nova/.tox/py27/bin/python -m nova.openstack.common.lockutils python setup.py test --slowest --testr-args=nova.tests.virt.libvirt.test_driver.LibvirtConnTestCase.test_create_propagates_exceptions'

  ___ summary ___
  ERROR:   py27: commands failed
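The root cause is that nova/virt/disk/api.py reads /proc/mounts, which exists only on Linux, so this code path can never work on OS X. A minimal sketch of a guarded lookup — hypothetical, since the bug was closed Won't Fix; the function name and the mounts_path parameter are illustration-only assumptions, not nova's actual fix:

```python
import os


def device_for_path(path, mounts_path='/proc/mounts'):
    """Return the device mounted at *path*, or None if it cannot be found.

    Guards against non-Linux hosts (e.g. OS X), where /proc/mounts does
    not exist, instead of letting open() raise IOError as in the bug.
    """
    if not os.path.exists(mounts_path):
        return None
    with open(mounts_path, 'r') as ifp:
        for line in ifp:
            fields = line.split()
            # /proc/mounts format: "<device> <mountpoint> <fstype> ..."
            if len(fields) >= 2 and fields[1] == path:
                return fields[0]
    return None
```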

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1368942/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374210] [NEW] VimExceptions need to support i18n objects

2014-09-25 Thread Davanum Srinivas (DIMS)
Public bug reported:

When lazy is enabled, the i18n translation object does not support
str(), which causes failures like:
  UnicodeError: Message objects do not support str() because they may
  contain non-ascii characters. Please use unicode() or translate()
  instead.

** Affects: nova
 Importance: Medium
 Assignee: James Carey (jecarey)
 Status: Confirmed

** Affects: oslo.vmware
 Importance: High
 Assignee: James Carey (jecarey)
 Status: Confirmed


** Tags: vmware

** Also affects: nova
   Importance: Undecided
   Status: New

** Tags added: vmware

** Changed in: nova
   Status: New => Confirmed

** Changed in: oslo.vmware
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: oslo.vmware
   Importance: Undecided => High

** Changed in: nova
 Assignee: (unassigned) => James Carey (jecarey)

** Changed in: oslo.vmware
 Assignee: (unassigned) => James Carey (jecarey)

** Changed in: oslo.vmware
Milestone: None => next-kilo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374210

Title:
  VimExceptions need to support i18n objects

Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo VMware library for OpenStack projects:
  Confirmed

Bug description:
  When lazy is enabled, the i18n translation object does not support
  str(), which causes failures like:
UnicodeError: Message objects do not support str() because they may
contain non-ascii characters. Please use unicode() or translate()
instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1374210/+subscriptions



[Yahoo-eng-team] [Bug 1374000] Re: VMWare: file writer class uses unsafe SSL connection

2014-09-25 Thread Davanum Srinivas (DIMS)
Same code is also in oslo/vmware/rw_handles.py


** Also affects: oslo.vmware
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374000

Title:
  VMWare: file writer class uses unsafe SSL connection

Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo VMware library for OpenStack projects:
  New

Bug description:
  VMwareHTTPWriteFile uses httplib.HTTPSConnection objects. In Python
  2.x those do not perform CA checks, so client connections are
  vulnerable to MITM (man-in-the-middle) attacks.

  This is the specific version of
  https://bugs.launchpad.net/nova/+bug/1188189
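On Python 2.x, httplib.HTTPSConnection neither verified the server certificate chain nor checked the hostname. A sketch of the safe construction, in the Python 3 ssl/http.client spelling (the helper name is made up; the actual nova/oslo.vmware fix may differ):

```python
import ssl
import http.client  # "httplib" on Python 2


def verified_https_connection(host, port=443, cafile=None):
    """Build an HTTPSConnection that actually verifies the server cert."""
    # create_default_context() enables CERT_REQUIRED and hostname
    # checking by default, closing the MITM hole described in the bug.
    context = ssl.create_default_context(cafile=cafile)
    return http.client.HTTPSConnection(host, port, context=context)
```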

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1374000/+subscriptions



Re: [openstack-dev] Could we please use 1.4.1 for oslo.messaging *now*? (was: Oslo final releases ready)

2014-09-23 Thread Davanum Srinivas
Zigo,

Ouch! Can you please open a bug in oslo.messaging? I'll mark it critical.

thanks,
dims

On Tue, Sep 23, 2014 at 4:35 AM, Thomas Goirand z...@debian.org wrote:
 On 09/18/2014 10:04 PM, Doug Hellmann wrote:
 All of the final releases for the Oslo libraries for the Juno cycle are 
 available on PyPI. I’m working on a couple of patches to the global 
 requirements list to update the baseline in the applications. In all cases, 
 the final release is a second tag on a previously released version.

 - oslo.config - 1.4.0 (same as 1.4.0.0a5)
 - oslo.db - 1.0.0 (same as 0.5.0)
 - oslo.i18n - 1.0.0 (same as 0.4.0)
 - oslo.messaging - 1.4.0 (same as 1.4.0.0a5)
 - oslo.rootwrap - 1.3.0 (same as 1.3.0.0a3)
 - oslo.serialization - 1.0.0 (same as 0.3.0)
 - oslosphinx - 2.2.0 (same as 2.2.0.0a3)
 - oslotest - 1.1.0 (same as 1.1.0.0a2)
 - oslo.utils - 1.0.0 (same as 0.3.0)
 - cliff - 1.7.0 (previously tagged, so not a new release)
 - stevedore - 1.0.0 (same as 1.0.0.0a2)

 Congratulations and *Thank You* to the Oslo team for doing an amazing job 
 with graduations this cycle!

 Doug

 Doug,

 Here in Debian, I have a *huge* mess with versioning with oslo.messaging.

 tl;dr: Because of that version number mess, please add a tag 1.4.1 to
 oslo.messaging now and use it everywhere instead of 1.4.0.

 Longer version:

 What happened is that Chuck released a wrong version of Keystone (i.e.
 the trunk rather than the stable branch). Therefore, I uploaded a
 version 1.4.0 beta of oslo.messaging in Debian Unstable/Jessie,
 because I thought the Icehouse version of Keystone needed it. (Sid /
 Jessie is supposed to keep Icehouse stuff only.)

 That would have been about fine, if only I hadn't upgraded
 oslo.messaging to the latest version in Sid, because I didn't want to
 keep a beta release in Jessie. However, that latest version depends on
 oslo.config 1.4.0.0~a5, and then probably even more.

 So I reverted the 1.4.0.0 upload in Debian Sid by uploading version
 1.4.0.0+really+1.3.1, which, as its name may suggest, really is a 1.3.1
 version (I did that to avoid introducing an epoch and needing to
 re-upload updates of all reverse dependencies of oslo.messaging).
 That's fine; we're covered for Sid/Jessie.

 But then, the Debian Experimental version of oslo.messaging is lower
 than the one in Sid/Jessie, so I have breakage there.

 If we declare a new 1.4.1, and have this fixed in our
 global-requirements.txt, then everything goes back in order for me and
 I get back on my feet. Otherwise, I'll have to deal with this and make
 up fake version numbers which will not match anything actually released
 by OpenStack, which may lead to even more mistakes.

 So, could you please at least:
 - add a git tag 1.4.1 to oslo.messaging right now, matching 1.4.0

 This will make sure that nobody will use 1.4.1 again, and that I'm fine
 using this version number in Debian Experimental, which will be higher
 than the one in Sid.

 And then, optionally, it would help me if you could (but I can live
 without it):
 - Use 1.4.1 for oslo.messaging in global-requirements.txt
 - Have every project that needs 1.4.0 bump to 1.4.1 as well

 This would be a lot less work for me than declaring an epoch in the
 oslo.messaging package and fixing all reverse dependencies. The affected
 packages for Juno for me are:
 - ceilometer
 - cinder
 - designate
 - glance
 - heat
 - ironic
 - keystone
 - neutron
 - nova
 - oslo-config
 - oslo.rootwrap
 - oslo.i18n
 - python-pycadf

 I'd have to upload updates for all of them even if we use 1.4.1 instead
 of an epoch (e.g. 1:1.4.0), but that's still much better for me than
 using an epoch. Epochs are ugly (because they are not visible in file
 names), confusing (it's easy to forget them), and non-reversible, so
 I'd like to avoid one if possible.

 I'm sorry for the mess and added work.
 Cheers,

 Thomas Goirand (zigo)


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [oslo] adding James Carey to oslo-i18n-core

2014-09-23 Thread Davanum Srinivas
+1 from me.

-- dims

On Tue, Sep 23, 2014 at 5:03 PM, Doug Hellmann d...@doughellmann.com wrote:
 James Carey (jecarey) from IBM has done the 3rd most reviews of oslo.i18n 
 this cycle [1]. His feedback has been useful, and I think he would be a good 
 addition to the team for maintaining oslo.i18n.

 Let me know what you think, please.

 Doug

 [1] http://stackalytics.com/?module=oslo.i18n
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [oslo] zeromq work for kilo

2014-09-18 Thread Davanum Srinivas
://git.openstack.org/cgit/openstack/oslo.messaging/tree/doc/source
 
 
  2) a list of the critical bugs that need to be fixed + any existing
  patches associated with those bugs, so they can be reviewed early in
  kilo
 
  This blocks operation of nova+neutron environments:
 
  https://bugs.launchpad.net/oslo.messaging/+bug/1301723
   Summary: Message was sent to wrong node with zmq as rpc_backend
   Patch: https://review.openstack.org/84938
 
  Also, notifications are effectively unimplemented, which prevents use
  with Ceilometer, so I'd also add:
 
  https://bugs.launchpad.net/oslo.messaging/+bug/1368154
   Summary: https://bugs.launchpad.net/oslo.messaging/+bug/
   Patch: https://review.openstack.org/120745
 
  That’s a good list, and shorter than I expected. I have added these bugs
  to the next-kilo milestone.
 
 
  3) an analysis of what it would take to be able to run functional
  tests for zeromq on our CI infrastructure, not necessarily the full
  tempest run or devstack-gate job, probably functional tests we place
  in the tree with the driver (we will be doing this for all of the
  drivers) + besides writing new functional tests, we need to bring the
  unit tests for zeromq into the oslo.messaging repository
 
  Kapil Thangavelu started work on both functional tests for the ZMQ
  driver last week; the output from the sprint is here:
 
 https://github.com/ostack-musketeers/oslo.messaging
 
   it covers the ZMQ driver (including messaging through the zmq-receiver
   proxy) and the associated MatchMakers (local, ring, redis) at varying
   levels of coverage, but I feel it moves things in the right
  direction - Kapil's going to raise a review for this in the next
  couple of days.
 
  Doug - has any structure been agreed within the oslo.messaging tree
  for unit/functional test splits? Right now we have them all in one
  place.
 
  I think we will want them split up, but we don’t have an agreed existing
  structure for that. I would like to see a test framework of some sort that
  defines the tests in a way that can be used to run the same functional tests for
  all of the drivers as separate jobs (with appropriate hooks for ensuring 
  the
  needed services are running, etc.). Setting that up warrants its own spec,
  because there are going to be quite a few details to work out. We will also
  need to participate in the larger conversation about how to set up those
  functional test jobs to be consistent with the other projects.
 
 
  Edward Hope-Morley also worked on getting devstack working with ZMQ:
 
 https://github.com/ostack-musketeers/devstack
 
  that's still WIP but again we'll get any changes submitted for review
  ASAP.
 
  That’s good to have, but I don’t necessarily consider it a requirement
  for in-project functional tests.
 
 
  4) and some improvements that we would like to make longer term
 
  a) Connection re-use on outbound messaging avoiding the current tcp
  setup overhead for every sent message.  This may also bring further
  performance benefits due to underlying messaging batching in ZMQ.
 
  This sounds like it would be a good thing to do, but making what we have
  work correctly and testing it feels more important for now.
 
 
  b) Moving from tcp PUSH/PULL sockets between servers to DEALER/DEALER
  (or something similar) to allow for heartbeating and more immediate
  failure detection
 
  I would need to understand how much of a rewrite that represents before
  commenting further.
 
 
  c) Crypto support
 
  There are some other discussions about adding crypto to messaging, and I
  hope we can do that without having to touch each driver, if possible.
 
 
  Cheers
 
  James
 
  --
  James Page
  Ubuntu and Debian Developer
  james.p...@ubuntu.com
  jamesp...@debian.org
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] Oslo final releases ready

2014-09-18 Thread Davanum Srinivas
w00t!

-- dims

On Thu, Sep 18, 2014 at 10:04 AM, Doug Hellmann d...@doughellmann.com wrote:
 All of the final releases for the Oslo libraries for the Juno cycle are 
 available on PyPI. I’m working on a couple of patches to the global 
 requirements list to update the baseline in the applications. In all cases, 
 the final release is a second tag on a previously released version.

 - oslo.config - 1.4.0 (same as 1.4.0.0a5)
 - oslo.db - 1.0.0 (same as 0.5.0)
 - oslo.i18n - 1.0.0 (same as 0.4.0)
 - oslo.messaging - 1.4.0 (same as 1.4.0.0a5)
 - oslo.rootwrap - 1.3.0 (same as 1.3.0.0a3)
 - oslo.serialization - 1.0.0 (same as 0.3.0)
 - oslosphinx - 2.2.0 (same as 2.2.0.0a3)
 - oslotest - 1.1.0 (same as 1.1.0.0a2)
 - oslo.utils - 1.0.0 (same as 0.3.0)
 - cliff - 1.7.0 (previously tagged, so not a new release)
 - stevedore - 1.0.0 (same as 1.0.0.0a2)

 Congratulations and *Thank You* to the Oslo team for doing an amazing job 
 with graduations this cycle!

 Doug


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-17 Thread Davanum Srinivas
I was trying to request-ify oslo.vmware and ran into this as well:
https://review.openstack.org/#/c/121956/

And we don't seem to have urllib3 in global-requirements either.
Should we do that first?

-- dims
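For what it's worth, code that must run whether or not the vendored copy has been stripped (as on Debian) can use an import fallback. A generic sketch — not glanceclient's actual code — that tries the standalone module first, then the vendored path:

```python
import importlib


def import_first(*candidates):
    """Import and return the first available module from *candidates*."""
    last_exc = None
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError as exc:
            last_exc = exc
    # Nothing importable: re-raise the last ImportError seen.
    raise last_exc

# Prefer the standalone urllib3 (the only copy present on Debian, where
# the vendored one is stripped from requests), then fall back:
#   poolmanager = import_first('urllib3.poolmanager',
#                              'requests.packages.urllib3.poolmanager')
```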

On Wed, Sep 17, 2014 at 1:05 PM, Clint Byrum cl...@fewbar.com wrote:
 This is where Debian's "one urllib3 to rule them all" model fails in
 a modern fast-paced world. Debian is arguably doing the right thing by
 pushing everyone to use one API, and one library, so that when that one
 library is found to be vulnerable to security problems, one update covers
 everyone. Also, this is an HTTP/HTTPS library.. so nobody can make the
 argument that security isn't paramount in this context.

 But we all know that the app store model has started to bleed down into
 backend applications, and now you just ship the virtualenv or docker
 container that has your app as you tested it, and if that means you're
 20 versions behind on urllib3, that's your problem, not the OS vendor's.

 I think it is _completely_ irresponsible of requests, a library, to
 embed another library. But I don't know if we can avoid making use of
 it if we are going to be exposed to objects that are attached to it.

 Anyway, Thomas, if you're going to send the mob with pitchforks and
 torches somewhere, I'd say send them to wherever requests makes its
 home. OpenStack is just buying their mutated product.

 Excerpts from Donald Stufft's message of 2014-09-17 08:22:48 -0700:
 Looking at the code on my phone it looks completely correct to use the 
 vendored copy here and it wouldn't actually work otherwise.

  On Sep 17, 2014, at 11:17 AM, Donald Stufft don...@stufft.io wrote:
 
  I don't know the specific situation but it's appropriate to do this if 
  you're using requests and wish to interact with the urllib3 that requests 
  is using.
 
  On Sep 17, 2014, at 11:15 AM, Thomas Goirand z...@debian.org wrote:
 
  Hi,
 
  I'm horrified by what I just found. I have just found out this in
  glanceclient:
 
  File bla/tests/test_ssl.py, line 19, in module
from requests.packages.urllib3 import poolmanager
  ImportError: No module named packages.urllib3
 
  Please *DO NOT* do this. Instead, please use urllib3 from ... urllib3.
  Not from requests. The fact that requests is embedding its own version
  of urllib3 is a heresy. In Debian, the embedded version of urllib3 is
  removed from requests.
 
  In Debian, we spend a lot of time un-vendorizing stuff, because
  that's a security nightmare. I don't want to have to patch all of
  OpenStack to do it there as well.
 
  And no, there's no good excuse here...
 
  Thomas Goirand (zigo)
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [Cinder][Nova][Oslo] Moving Brick out of Cinder

2014-09-17 Thread Davanum Srinivas
+1 to Doug's comments.

On Wed, Sep 17, 2014 at 1:02 PM, Doug Hellmann d...@doughellmann.com wrote:

 On Sep 16, 2014, at 6:02 PM, Flavio Percoco fla...@redhat.com wrote:

 On 09/16/2014 11:55 PM, Ben Nemec wrote:
 Based on my reading of the wiki page about this it sounds like it should
 be a sub-project of the Storage program.  While it is targeted for use
 by multiple projects, it's pretty specific to interacting with Cinder,
 right?  If so, it seems like Oslo wouldn't be a good fit.  We'd just end
 up adding all of cinder-core to the project anyway. :-)

 +1 I think the same arguments and conclusions we had on glance-store
 make sense here. I'd probably go with having it under the Block Storage
 program.

 I agree. I’m sure we could find some Oslo contributors to give you advice 
 about APIs if you like, but I don’t think the library needs to be part of 
 Oslo to be reusable.

 Doug


 Flavio


 -Ben

 On 09/16/2014 12:49 PM, Ivan Kolodyazhny wrote:
 Hi Stackers!

 I'm working on moving Brick out of Cinder for the K release.

 There're a lot of open questions for now:

   - Should we move it to oslo or somewhere on stackforge?
   - Better architecture of it to fit all Cinder and Nova requirements
   - etc.

 Before starting the discussion, I've created a proof of concept to try it. I
 moved Brick to a lib named oslo.storage for testing only. It's only one
 of the possible solutions to start work on it.

 All sources are aviable on GitHub [1], [2].

 [1] - I'm not sure that this place and name are good for it; it's just a
 PoC.

 [1] https://github.com/e0ne/oslo.storage
 [2] https://github.com/e0ne/cinder/tree/brick - some tests still failed.

 Regards,
 Ivan Kolodyazhny

 On Mon, Sep 8, 2014 at 4:35 PM, Ivan Kolodyazhny e...@e0ne.info wrote:

 Hi All!

 I would like to start moving Cinder Brick [1] to Oslo, as was described at
 the Cinder mid-cycle meetup [2]. Unfortunately I missed the meetup, so I
 want to be sure that nobody has started it and that we are on the same page.

 Due to the Juno-3 release, there was not enough time to discuss [3]
 at the latest Cinder weekly meeting, and I would like to get some feedback
 from the whole OpenStack community, so I propose starting this discussion
 on the mailing list for all projects.

 If nobody has started it, and it is useful at least for both Nova and
 Cinder, I would like to start this work according to the Oslo guidelines [4],
 creating the needed blueprints to finish it before Kilo-1 is over.



 [1] https://wiki.openstack.org/wiki/CinderBrick
 [2] https://etherpad.openstack.org/p/cinder-meetup-summer-2014
 [3]
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/044608.html
 [4] https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary

 Regards,
 Ivan Kolodyazhny.




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [oslo-vmware] Propose adding Radoslav to core team

2014-09-15 Thread Davanum Srinivas
+1 to Rados for oslo.vmware team

-- dims

On Mon, Sep 15, 2014 at 12:37 PM, Gary Kotton gkot...@vmware.com wrote:
 Hi,
 I would like to propose Radoslav as a core team member. Over the course
 of the J cycle he has been great with reviews, bug fixes, and updates to
 the project.
 Can the other core team members please reply with your votes on whether
 or not you agree.
 Thanks
 Gary

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [Nova] What's holding nova development back?

2014-09-15 Thread Davanum Srinivas
Sean,

I have tabs opened to:
http://status.openstack.org/elastic-recheck/gate.html
http://status.openstack.org/elastic-recheck/data/uncategorized.html

and periodically catch up on openstack-qa on IRC as well; I just did
not realize this wsgi gate bug was hurting the gate this much.

So, could we somehow indicate (via email? or one of the web pages above?)
where occasional helpers can watch and pitch in when needed?
thanks,
dims


On Mon, Sep 15, 2014 at 5:55 PM, Sean Dague s...@dague.net wrote:
 On 09/15/2014 05:52 PM, Brant Knudson wrote:


 On Mon, Sep 15, 2014 at 4:30 PM, Michael Still mi...@stillhq.com
 mailto:mi...@stillhq.com wrote:

 On Tue, Sep 16, 2014 at 12:30 AM, Russell Bryant rbry...@redhat.com
 mailto:rbry...@redhat.com wrote:
  On 09/15/2014 05:42 AM, Daniel P. Berrange wrote:
  On Sun, Sep 14, 2014 at 07:07:13AM +1000, Michael Still wrote:
  Just an observation from the last week or so...
 
  The biggest problem nova faces at the moment isn't code review latency.
  Our biggest problem is failing to fix our bugs so that the gate is
  reliable. The number of rechecks we've done in the last week to try and
  land code is truly startling.
 
  I consider both problems to be pretty much equally important. I don't
  think solving review latency or test reliability in isolation is enough
  to save Nova. We need to tackle both problems as a priority. I tried to
  avoid getting into my concerns about testing in my mail on review team
  bottlenecks since I think we should address the problems independently /
  in parallel.
 
  Agreed with this.  I don't think we can afford to ignore either one of 
 them.

 Yes, that was my point. I don't mind us debating how to rearrange
 hypervisor drivers. However, if we think that will solve all our
 problems we are confused.

 So, how do we get people to start taking bugs / gate failures more
 seriously?

 Michael


 What do you think about having an irc channel for working through gate
 bugs? I've always found looking at gate failures frustrating because I
 seem to be expected to work through these by myself, and maybe
 somebody's already looking at it or has more information that I don't
 know about. There have been times already where a gate bug that could
 have left everything broken for a while wound up fixed pretty quickly
 because we were able to find the right person hanging out in irc.
 Sometimes all it takes is for someone with the right knowledge to be
 there. A hypothetical exchange:

 rechecker: I got this error where the tempest-foo test failed ... http://...
 tempest-expert: That test calls the compute-bar nova API
 nova-expert: That API calls the network-baz neutron API
 neutron-expert: When you call that API you need to also call this other
 API to poll for it to be done... is nova doing that?
 nova-expert: Nope. Fix on the way.

 Honestly, the #openstack-qa channel is a completely appropriate place
 for that. Plus it already has a lot of the tempest experts.
 Realistically anyone that works on these kinds of fixes tend to be there.

 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [oslo] final review push for releases thursday

2014-09-15 Thread Davanum Srinivas
Doug,

One merged, three in the gate.

-- dims

On Mon, Sep 15, 2014 at 5:48 PM, Doug Hellmann d...@doughellmann.com wrote:
 We’re down to 2 bugs, both of which have patches up for review.

 James Carey has a fix for the decoding error we’re seeing in mask_password(). 
 It needs to land in oslo.utils and oslo-incubator:

 - utils: https://review.openstack.org/#/c/121657/
 - incubator: https://review.openstack.org/#/c/121632/

 Robert Collins has a fix for pbr breaking on tags that don’t look like 
 version numbers, with 1 dependency:

 - https://review.openstack.org/#/c/114403/
 - https://review.openstack.org/#/c/108271/ (dependency)

 If we knock these out Tuesday we can cut releases of the related libs to give 
 us a day or so with unit tests running before the final versions are tagged 
 on Thursday.

 Doug


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims



[Yahoo-eng-team] [Bug 1246308] Re: AttributeError in Redis Matchmaker ack_alive()

2014-09-15 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246308

Title:
  AttributeError in Redis Matchmaker ack_alive()

Status in OpenStack Compute (Nova):
  Won't Fix
Status in Messaging API for OpenStack:
  Triaged

Bug description:
  The attribute name used while re-registering a topic should be
  self.host_topic instead of self.topic_host in
  https://github.com/openstack/nova/blob/master/nova/openstack/common/rpc/matchmaker_redis.py#L113
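The rename is a one-liner; here is a toy illustration of the broken versus fixed attribute reference. This is a simplified stand-in, not the real MatchMakerRedis (whose registration also talks to Redis); the dict-based bookkeeping is an assumption made for the sake of a runnable example:

```python
class MatchMakerRedis(object):
    """Simplified sketch: track (topic, host) registrations in a dict."""

    def __init__(self):
        self.host_topic = {}  # the attribute that actually exists

    def register(self, key, host):
        self.host_topic[(key, host)] = '.'.join((key, host))

    def ack_alive(self, key, host):
        # Bug 1246308: this method read self.topic_host, an attribute
        # that was never defined, raising AttributeError when the
        # conductor re-registered its topic. The fix references
        # self.host_topic instead.
        if (key, host) not in self.host_topic:
            self.register(key, host)
        return self.host_topic[(key, host)]
```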

  Otherwise, the following error is seen while starting the conductor service:
  2013-10-30 18:02:32.849 ERROR nova.openstack.common.threadgroup [-] 'MatchMakerRedis' object has no attribute 'topic_host'
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup   File "/opt/stack/nova/nova/openstack/common/threadgroup.py", line 117, in wait
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup     x.wait()
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup   File "/opt/stack/nova/nova/openstack/common/threadgroup.py", line 49, in wait
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 168, in wait
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup   File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup     return self.greenlet.switch()
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194, in main
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup     result = function(*args, **kwargs)
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup   File "/opt/stack/nova/nova/openstack/common/service.py", line 448, in run_service
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup     service.start()
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup   File "/opt/stack/nova/nova/service.py", line 176, in start
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup     self.conn.create_consumer(self.topic, rpc_dispatcher, fanout=False)
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup   File "/opt/stack/nova/nova/openstack/common/rpc/impl_zmq.py", line 578, in create_consumer
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup     _get_matchmaker().register(topic, CONF.rpc_zmq_host)
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup   File
/opt/stack/nova/nova/openstack/common/rpc/matchmaker.py, line 207, in register
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup 
self.ack_alive(key, host)
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/rpc/matchmaker_redis.py, line 113, in 
ack_alive
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup 
self.register(self.topic_host[host], host)
  2013-10-30 18:02:32.849 TRACE nova.openstack.common.threadgroup 
AttributeError: 'MatchMakerRedis' object has no attribute 'topic_host'
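
The one-line fix the report describes can be sketched in isolation (a simplified stand-in class for illustration, not the actual oslo matchmaker code):

```python
# Simplified stand-in for MatchMakerRedis, illustrating the one-line
# fix: ack_alive() must read self.host_topic, not self.topic_host.
class MatchMakerRedis(object):
    def __init__(self):
        # maps host -> topic key, populated on register()
        self.host_topic = {}

    def register(self, key, host):
        self.host_topic[host] = key

    def ack_alive(self, key, host):
        # was: self.register(self.topic_host[host], host)  -> AttributeError
        self.register(self.host_topic[host], host)
```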

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1246308/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[openstack-dev] oslo.vmware 0.6.0 released

2014-09-13 Thread Davanum Srinivas
The Oslo team is pleased to release version 0.6.0 of oslo.vmware.

This release includes:

$ git log --oneline --no-merges 0.5.0..0.6.0
f36cd7f Add DuplicateName exception
81ef9d4 Add 'details' property to VMwareDriverException
5571e9f Enable oslo.i18n for oslo.vmware
6c24953 Add API to enable calling module to register an exception
b72ab3e Imported Translations from Transifex
ffd9a6d Add docs target and generate api docs
4938dff Updated from global requirements
e2f0469 Work toward Python 3.4 support and testing
9c6a20e warn against sorting requirements
6c5e449 Add exception for TaskInProgress
d51fdbe Updated from global requirements
9273388 Refactoring to reduce noise in log files
c4437af Imported Translations from Transifex
a3f8146 Add missing session parameter to get_summary
74832f4 Updated from global requirements
d9ada2a Switch off caching to prevent cache poisoning by local attacker
abb4e82 Support for copying streamOptimized disk to file
e434d1b Add support for the DatastoreURL object
e5c22fa Add methods to the Datastore objects
d33c195 Imported Translations from Transifex
788e944 Add Pylint testenv environment
6316a6f Port the Datastore and DatastorePath objects
d76620b Log additional details of suds faults
9a9649f Fix seek and tell in BlockingQueue

Please report issues to the bug tracker: https://bugs.launchpad.net/oslo.vmware

-- dims

-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Yahoo-eng-team] [Bug 1368942] [NEW] lxc test failure under osx

2014-09-12 Thread Davanum Srinivas (DIMS)
Public bug reported:

Here's the stack trace from the following test:
nova.tests.virt.libvirt.test_driver.LibvirtConnTestCase.test_create_propagates_exceptions

Traceback (most recent call last):
  File nova/tests/virt/libvirt/test_driver.py, line 9231, in 
test_create_propagates_exceptions
instance, None)
  File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 420, in assertRaises
self.assertThat(our_callable, matcher)
  File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 431, in assertThat
mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
  File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 481, in _matchHelper
mismatch = matcher.match(matchee)
  File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_exception.py,
 line 108, in match
mismatch = self.exception_matcher.match(exc_info)
  File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_higherorder.py,
 line 62, in match
mismatch = matcher.match(matchee)
  File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 412, in match
reraise(*matchee)
  File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_exception.py,
 line 101, in match
result = matchee()
  File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 955, in __call__
return self._callable_object(*self._args, **self._kwargs)
  File nova/virt/libvirt/driver.py, line 4229, in _create_domain_and_network
disk_info):
  File 
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py,
 line 17, in __enter__
return self.gen.next()
  File nova/virt/libvirt/driver.py, line 4125, in _lxc_disk_handler
self._create_domain_setup_lxc(instance, block_device_info, disk_info)
  File nova/virt/libvirt/driver.py, line 4077, in _create_domain_setup_lxc
use_cow=use_cow)
  File nova/virt/disk/api.py, line 385, in setup_container
img = _DiskImage(image=image, use_cow=use_cow, mount_dir=container_dir)
  File nova/virt/disk/api.py, line 252, in __init__
device = self._device_for_path(mount_dir)
  File nova/virt/disk/api.py, line 260, in _device_for_path
with open(/proc/mounts, 'r') as ifp:
IOError: [Errno 2] No such file or directory: '/proc/mounts'
Ran 1 tests in 1.172s (-30.247s)
FAILED (id=12, failures=1 (+1))
error: testr failed (1)
ERROR: InvocationError: '/Users/dims/openstack/nova/.tox/py27/bin/python -m 
nova.openstack.common.lockutils python setup.py test --slowest 
--testr-args=nova.tests.virt.libvirt.test_driver.LibvirtConnTestCase.test_create_propagates_exceptions'
__
 summary 
___
ERROR:   py27: commands failed

** Affects: nova
 Importance: Low
 Status: New

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368942

Title:
  lxc test failure under osx

Status in OpenStack Compute (Nova):
  New

Bug description:
  Here's the stack trace from the following test:
  
nova.tests.virt.libvirt.test_driver.LibvirtConnTestCase.test_create_propagates_exceptions

  Traceback (most recent call last):
File nova/tests/virt/libvirt/test_driver.py, line 9231, in 
test_create_propagates_exceptions
  instance, None)
File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 420, in assertRaises
  self.assertThat(our_callable, matcher)
File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 431, in assertThat
  mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 481, in _matchHelper
  mismatch = matcher.match(matchee)
File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_exception.py,
 line 108, in match
  mismatch = self.exception_matcher.match(exc_info)
File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_higherorder.py,
 line 62, in match
  mismatch = matcher.match(matchee)
File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 412, in match
  reraise(*matchee)
File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_exception.py,
 line 101, in match
  

Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-11 Thread Davanum Srinivas
Rados,

personally, i'd want a human to do the +W. Also the criteria would
include a 3) which is the CI for the driver if applicable.

On Thu, Sep 11, 2014 at 9:53 AM, Radoslav Gerganov rgerga...@vmware.com wrote:
 On 09/11/2014 04:30 PM, Sean Dague wrote:

 On 09/11/2014 09:09 AM, Gary Kotton wrote:



 On 9/11/14, 2:55 PM, Thierry Carrez thie...@openstack.org wrote:

 Sean Dague wrote:

 [...]
 Why don't we start with let's clean up the virt interface and make it
 more sane, as I don't think there is any disagreement there. If it's
 going to take a cycle, it's going to take a cycle anyway (it will
 probably take 2 cycles, realistically, we always underestimate these
 things, remember when no-db-compute was going to be 1 cycle?). I don't
 see the need to actually decide here and now that the split is clearly
 at least 7 - 12 months away. A lot happens in the intervening time.


 Yes, that sounds like the logical next step. We can't split drivers
 without first doing that anyway. I still think people need smaller
 areas of work, as Vish eloquently put it. I still hope that refactoring
 our test architecture will let us reach the same level of quality with
 only a fraction of the tests being run at the gate, which should address
 most of the harm you see in adding additional repositories. But I agree
 there is little point in discussing splitting virt drivers (or anything
 else, really) until the internal interface below that potential split is
 fully cleaned up and it becomes an option.


  How about we start to try and patch gerrit to provide +2 permissions for
  people who can be assigned 'driver core' status. This is something that is
  relevant to Nova and Neutron and I guess Cinder too.


 If you think that's the right solution, I'd say go and investigate it
 with folks that understand enough gerrit internals to be able to figure
 out how hard it would be. Start a conversation in #openstack-infra to
 explore it.

 My expectation is that there is more complexity there than you give it
 credit for. That being said one of the biggest limitations we've had on
 gerrit changes is we've effectively only got one community member, Kai,
 who does any of that. If other people, or teams, were willing to dig in
 and own things like this, that might be really helpful.


 I don't think we need to modify gerrit to support this functionality. We can
 simply have a gerrit job (similar to the existing CI jobs) which is run on
 every patch set and checks if:
 1) the changes are only under /nova/virt/XYZ and /nova/tests/virt/XYZ
 2) it has two +1 from maintainers of driver XYZ

 if the above conditions are met, the job will post W+1 for this patchset.
 Does that make sense?
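
The two checks proposed above could be sketched roughly like this (every name here is hypothetical; this is not an existing gate job):

```python
# Rough sketch of the two gate checks proposed above; changed_files,
# approvals, and maintainers are all hypothetical inputs.
def may_auto_approve(changed_files, approvals, driver, maintainers):
    allowed = ("nova/virt/%s/" % driver, "nova/tests/virt/%s/" % driver)
    # check 1: every touched file lives under the driver's directories
    if not all(f.startswith(allowed) for f in changed_files):
        return False
    # check 2: at least two +1 votes from the driver's maintainers
    votes = [v for who, v in approvals.items()
             if who in maintainers and v >= 1]
    return len(votes) >= 2
```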



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Bug 1368030] Re: nova-manage command when executed by non-root user, should give authorization error instead of low level database error

2014-09-11 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1368030

Title:
  nova-manage command when executed by non-root user, should give
  authorization error instead of low level database error

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1368030/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Yahoo-eng-team] [Bug 1360650] Re: test_db_archive_deleted_rows failing in postgres jobs with ProgrammingError

2014-09-09 Thread Davanum Srinivas (DIMS)
Doesn't seem to be happening anymore...Let's reopen if necessary.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1360650

Title:
  test_db_archive_deleted_rows failing in postgres jobs with
  ProgrammingError

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  http://logs.openstack.org/99/106299/12/gate/gate-tempest-dsvm-neutron-
  pg-full/19a6b7d/console.html

  This is mostly in the neutron jobs:

  2014-08-23 17:15:04.731 | 
tempest.cli.simple_read_only.test_nova_manage.SimpleReadOnlyNovaManageTest.test_db_archive_deleted_rows
  2014-08-23 17:15:04.731 | 
---
  2014-08-23 17:15:04.731 | 
  2014-08-23 17:15:04.731 | Captured traceback:
  2014-08-23 17:15:04.732 | ~~~
  2014-08-23 17:15:04.732 | Traceback (most recent call last):
  2014-08-23 17:15:04.732 |   File 
tempest/cli/simple_read_only/test_nova_manage.py, line 84, in 
test_db_archive_deleted_rows
  2014-08-23 17:15:04.732 | self.nova_manage('db archive_deleted_rows 
50')
  2014-08-23 17:15:04.732 |   File tempest/cli/__init__.py, line 117, in 
nova_manage
  2014-08-23 17:15:04.732 | 'nova-manage', action, flags, params, 
fail_ok, merge_stderr)
  2014-08-23 17:15:04.733 |   File tempest/cli/__init__.py, line 53, in 
execute
  2014-08-23 17:15:04.733 | result_err)
  2014-08-23 17:15:04.733 | CommandFailed: Command 
'['/usr/local/bin/nova-manage', 'db', 'archive_deleted_rows', '50']' returned 
non-zero exit status 1.
  2014-08-23 17:15:04.733 | stdout:
  2014-08-23 17:15:04.733 | Command failed, please check log for more info
  2014-08-23 17:15:04.733 | 
  2014-08-23 17:15:04.734 | stderr:
  2014-08-23 17:15:04.734 | 2014-08-23 17:02:31.331 CRITICAL nova 
[req-414244fa-d6c7-4868-8b78-8fe40f119b52 None None] ProgrammingError: 
(ProgrammingError) column locked_by is of type shadow_instances0locked_by but 
expression is of type instances0locked_by
  2014-08-23 17:15:04.734 | LINE 1: ...ces.cell_name, instances.node, 
instances.deleted, instances
  2014-08-23 17:15:04.734 | 
 ^
  2014-08-23 17:15:04.734 | HINT:  You will need to rewrite or cast the 
expression.
  2014-08-23 17:15:04.735 |  'INSERT INTO shadow_instances SELECT 
instances.created_at, instances.updated_at, instances.deleted_at, instances.id, 
instances.internal_id, instances.user_id, instances.project_id, 
instances.image_ref, instances.kernel_id, instances.ramdisk_id, 
instances.launch_index, instances.key_name, instances.key_data, 
instances.power_state, instances.vm_state, instances.memory_mb, 
instances.vcpus, instances.hostname, instances.host, instances.user_data, 
instances.reservation_id, instances.scheduled_at, instances.launched_at, 
instances.terminated_at, instances.display_name, instances.display_description, 
instances.availability_zone, instances.locked, instances.os_type, 
instances.launched_on, instances.instance_type_id, instances.vm_mode, 
instances.uuid, instances.architecture, instances.root_device_name, 
instances.access_ip_v4, instances.access_ip_v6, instances.config_drive, 
instances.task_state, instances.default_ephemeral_device, 
instances.default_swap_device, 
 instances.progress, instances.auto_disk_config, instances.shutdown_terminate, 
instances.disable_terminate, instances.root_gb, instances.ephemeral_gb, 
instances.cell_name, instances.node, instances.deleted, instances.locked_by, 
instances.cleaned, instances.ephemeral_key_uuid \nFROM instances \nWHERE 
instances.deleted != %(deleted_1)s ORDER BY instances.id \n LIMIT %(param_1)s' 
{'param_1': 39, 'deleted_1': 0}
  2014-08-23 17:15:04.735 | 2014-08-23 17:02:31.331 22441 TRACE nova 
Traceback (most recent call last):
  2014-08-23 17:15:04.735 | 2014-08-23 17:02:31.331 22441 TRACE nova   File 
/usr/local/bin/nova-manage, line 10, in module
  2014-08-23 17:15:04.735 | 2014-08-23 17:02:31.331 22441 TRACE nova 
sys.exit(main())
  2014-08-23 17:15:04.736 | 2014-08-23 17:02:31.331 22441 TRACE nova   File 
/opt/stack/new/nova/nova/cmd/manage.py, line 1401, in main
  2014-08-23 17:15:04.736 | 2014-08-23 17:02:31.331 22441 TRACE nova 
ret = fn(*fn_args, **fn_kwargs)
  2014-08-23 17:15:04.736 | 2014-08-23 17:02:31.331 22441 TRACE nova   File 
/opt/stack/new/nova/nova/cmd/manage.py, line 920, in archive_deleted_rows
  2014-08-23 17:15:04.736 | 2014-08-23 17:02:31.331 22441 TRACE nova 
db.archive_deleted_rows(admin_context, max_rows)
  2014-08-23 17:15:04.736 | 2014-08-23 17:02:31.331 22441 TRACE nova   File 
/opt/stack/new/nova/nova/db/api.py, line 1959, in archive_deleted_rows
  2014-08-23 17:15:04.737 | 2014-08-23 

Re: [openstack-dev] [oslo] Anticipated Final Release Versions for Juno

2014-09-08 Thread Davanum Srinivas
LGTM Doug. for oslo.vmware - we need a fresh rev for use by Nova to
fix the following bug:
https://bugs.launchpad.net/nova/+bug/1341954

thanks,
dims

On Mon, Sep 8, 2014 at 2:32 PM, Doug Hellmann d...@doughellmann.com wrote:
 I spent some time today looking over our current set of libraries trying to 
 anticipate which will be ready for 1.0 (or later) and which are still 
 considered pre-release. I came up with this set of anticipated version 
 numbers for our final juno releases. Please let me know if you see any 
 surprises on the list.

 1.0 or later

 oslo.config - 1.4.0
 oslo.i18n - 1.0.0
 oslo.messaging - 1.4.0
 oslo.rootwrap - 1.3.0
 oslo.serialization - 1.0.0
 oslosphinx - 2.2.0
 oslotest - 1.1.0
 oslo.utils - 1.0.0
 cliff - 1.7.x
 stevedore - 1.0.0

 Alphas or Pre-releases (Trying for 1.0 for Kilo)

 oslo.concurrency - < 1.0
 oslo.log - 0.1.0
 oslo.middleware - 0.1.0
 pbr - < 1.0
 taskflow - < 1.0

 Unknown

 oslo.db - I think we said 1.0.0 but I need to confirm.
 oslo.vmware - I’ll need to talk to the vmware team to see where things stand 
 with their release plans.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Yahoo-eng-team] [Bug 1255627] Re: images.test_list_image_filters.ListImageFiltersTest fails with timeout

2014-09-08 Thread Davanum Srinivas (DIMS)
seems to be happening only in check-tempest-dsvm-docker, so moving it
there

** Also affects: nova-docker
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255627

Title:
  images.test_list_image_filters.ListImageFiltersTest fails with timeout

Status in OpenStack Compute (Nova):
  Invalid
Status in Nova Docker Driver:
  New
Status in Tempest:
  Invalid

Bug description:
  Spurious failure in this test:

  http://logs.openstack.org/49/55749/8/check/check-tempest-devstack-vm-
  full/9bc94d5/console.html

  2013-11-27 01:10:35.802 | 
==
  2013-11-27 01:10:35.802 | FAIL: setUpClass 
(tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestXML)
  2013-11-27 01:10:35.803 | setUpClass 
(tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestXML)
  2013-11-27 01:10:35.803 | 
--
  2013-11-27 01:10:35.803 | _StringException: Traceback (most recent call last):
  2013-11-27 01:10:35.804 |   File 
tempest/api/compute/images/test_list_image_filters.py, line 50, in setUpClass
  2013-11-27 01:10:35.807 | cls.client.wait_for_image_status(cls.image1_id, 
'ACTIVE')
  2013-11-27 01:10:35.809 |   File 
tempest/services/compute/xml/images_client.py, line 153, in 
wait_for_image_status
  2013-11-27 01:10:35.809 | raise exceptions.TimeoutException
  2013-11-27 01:10:35.809 | TimeoutException: Request timed out

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 925658] Re: RelaxNG schema is missing security_groups element for server create

2014-09-08 Thread Davanum Srinivas (DIMS)
Nova is in v2 API now and there are no RNG schemas for v2. So marking
this as won't fix.

** Changed in: nova
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/925658

Title:
  RelaxNG schema is missing security_groups element for server create

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  The schema simply doesn't include security_groups:
  nova/api/openstack/compute/schemas/v1.1/server.rng

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/925658/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 757772] Re: Cloudpipe doesn't update the crl in userdata when a cert is revoked

2014-09-08 Thread Davanum Srinivas (DIMS)
looks like we don't have any cloudpipe code in Nova any more. marking
as invalid

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/757772

Title:
  Cloudpipe doesn't update the crl in userdata when a cert is revoked

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When a certificate is revoked (using nova-manage for example), a new
  payload with an updated crl is not generated.  This means that the
  cloudpipe vpn will need to be restarted to pick up new information.

  The revoke should update the user data with a new payload.  That way
  the cloudpipe instance can periodically check for a new crl and update
  it.  This would allow revocation without needing to restart the vpn.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/757772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1188073] Re: havana: nova - Remove broken config_drive image_href support

2014-09-08 Thread Davanum Srinivas (DIMS)
Looks like the doc is no longer present, so there's nothing to fix. yay!

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1188073

Title:
  havana: nova - Remove broken config_drive image_href support

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  If https://review.openstack.org/31314 it changes config drive support.
  
  image_href support has not been working since at least shortly before
  Folsom release.  This is a good indication that this functionality is not
  used.  As far as I can tell, the docs also do not match what was
  supported.  An image ID was required, but docs show examples with full
  hrefs.
  
  DocImpact
  
  http://docs.openstack.org/developer/nova/api_ext/ext_config_drive.html
  
  References to supporting image_hrefs should be removed.
  
  Docs also state that 'true' or 'True' (depending on if you use
  XML or JSON) or empty strings are returned for detail/show.  The
  nova code seems to actually return '1' or an empty string or None.
  
  This has been corrected in this patch to return 'True' (always
  capitalized) or '' if it is enabled/disabled respectively.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1188073/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 922751] Re: undocumented parameter --flagfile in nova-manage

2014-09-08 Thread Davanum Srinivas (DIMS)
We don't have --flagfile anywhere in the code or docs in nova trunk.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/922751

Title:
  undocumented parameter --flagfile in nova-manage

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  nova-manage --config-file /opt/stack/nova/bin/nova.conf is not working on my 
devstack environment (nova.common.cfg.ConfigFileParseError: Failed to parse 
bin/nova.conf: File contains no section headers.
  file: bin/nova.conf, line: 1)

  using the undocumented (not listed in the output of nova-manage
  --help) parameter --flagfile is working without problems. I can't find
  the definition of the parameter --flagfile. Looks like it's
  hardcoded in nova/utils.py in the method default_flagfile (there is an
  arg.find('flagfile')).

  stack@devstack001:~$ nova-manage --flagfile /opt/stack/nova/bin/nova.conf db 
version
  74

  stack@devstack001:~$ nova-manage --help | grep flagfile
--dhcpbridge_flagfile=DHCPBRIDGE_FLAGFILE
  location of flagfile for dhcpbridge
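
The hardcoded scan the reporter mentions would look roughly like this (an illustrative reconstruction of the described behaviour, not the exact nova/utils.py code):

```python
def default_flagfile(args, filename="./nova/nova.conf"):
    # Reconstruction of the behaviour described above: if no CLI
    # argument mentions 'flagfile', silently inject a default one.
    if not any(arg.find('flagfile') != -1 for arg in args):
        args.insert(1, '--flagfile=' + filename)
    return args
```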

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/922751/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240355] Re: Broken pipe error when copying image from glance to vSphere

2014-09-08 Thread Davanum Srinivas (DIMS)
from Jaroslav's last comment it seems like an issue with glance image-
create with --file and not Nova per se

** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: Confirmed => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240355

Title:
  Broken pipe error when copying image from glance to vSphere

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  Using the VMwareVCDriver on the latest nova (6affe67067) from master,
  launching an image for the first time is failing when copying image
  from Glance to vSphere. The error that shows in the nova log is:

  Traceback (most recent call last):
File /opt/stack/nova/nova/virt/vmwareapi/io_util.py, line 176, in _inner
  self.output.write(data)
File /opt/stack/nova/nova/virt/vmwareapi/read_write_util.py, line 143, in 
write
  self.file_handle.send(data)
File /usr/lib/python2.7/httplib.py, line 790, in send
  self.sock.sendall(data)
File /usr/local/lib/python2.7/dist-packages/eventlet/green/ssl.py, line 
131, in sendall
  v = self.send(data[count:])
File /usr/local/lib/python2.7/dist-packages/eventlet/green/ssl.py, line 
107, in send
  super(GreenSSLSocket, self).send, data, flags)
File /usr/local/lib/python2.7/dist-packages/eventlet/green/ssl.py, line 
77, in 
  return func(*a, **kw)
File /usr/lib/python2.7/ssl.py, line 198, in send
  v = self._sslobj.write(data)
  error: [Errno 32] Broken pipe

  To reproduce, launch an instance using an image that has not yet been
  uploaded to vSphere. I have attached the full log here:
  http://paste.openstack.org/show/48536/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1240355/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1007027] Re: Unnecessary SELECT 1 statements spamming MySQL server

2014-09-07 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.db
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1007027

Title:
  Unnecessary SELECT 1 statements spamming MySQL server

Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo Database library:
  New

Bug description:
  Nova is issuing a lot of unnecessary SELECT 1 queries against its
  MySQL database. I believe this is due to the MySQLPingListener()
  function in nova/db/sqlalchemy/session.py.

  Even though SELECT 1 is a light-weight database call, it is
  unnecessary and wasting resources (database CPU cycles) and time
  (extra network round trips), and is not guaranteeing a valid database
  connection. In other words, it is just a waste of resources.

  See http://www.mysqlperformanceblog.com/2010/05/05/checking-for-a
  -live-database-connection-considered-harmful/ for a longer discussion
  of why this should be removed.
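
  The alternative the linked article advocates is to run the real query
  optimistically and reconnect only when it fails with a disconnect error.
  A minimal sketch of that pattern (all names illustrative, not oslo.db API):

```python
class DisconnectError(Exception):
    """Stands in for the driver's 'server has gone away' error."""

def execute_with_retry(get_connection, query, retries=1):
    # Optimistic pattern: no "SELECT 1" ping on every checkout; just
    # run the query and retry once on a stale-connection failure.
    for attempt in range(retries + 1):
        conn = get_connection()
        try:
            return conn.execute(query)
        except DisconnectError:
            conn.invalidate()  # discard the dead connection
            if attempt == retries:
                raise
```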

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1007027/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1231224] Re: No handlers could be found for logger 106_add_foreign_keys

2014-09-07 Thread Davanum Srinivas (DIMS)
We don't have a 106_add_foreign_keys anymore. :)

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1231224

Title:
  No handlers could be found for logger 106_add_foreign_keys

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I tried to run nova-manage db sync with mysql cluster, encountering
  the following errors, but most of the tables were created.  The mysql
  process without cluster is OK!

  If I dump a normal nova database from mysql (without cluster), and then run
  mysql asdfsdfasdf < nova.sql to write data to mysql cluster, this worked OK.
  A bug in sqlalchemy?

  Software involved:
  nova-2012.2.5 (nova folsom)
  mysql cluster (mysql-5.6.11 ndb-7.3.2)
  MySQL_python-1.2.3
  sqlalchemy_migrate-0.7.2
  SQLAlchemy-0.7.9


  
  log:

  No handlers could be found for logger 106_add_foreign_keys
  2013-09-26 02:09:23 17100 CRITICAL nova [-] (IntegrityError) (1215, 'Cannot 
add foreign key constraint') 'ALTER TABLE security_group_instance_association 
ADD CONSTRAINT security_group_instance_association_instance_uuid_fkey FOREIGN 
KEY(instance_uuid) REFERENCES instances (uuid)' ()
  2013-09-26 02:09:23 17100 TRACE nova Traceback (most recent call last):
  2013-09-26 02:09:23 17100 TRACE nova   File /usr/local/bin/nova-manage, 
line 5, in module
  2013-09-26 02:09:23 17100 TRACE nova 
pkg_resources.run_script('nova==2012.2.5', 'nova-manage')
  2013-09-26 02:09:23 17100 TRACE nova   File 
/usr/local/lib/python2.7/dist-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py,
 line 499, in run_script
  2013-09-26 02:09:23 17100 TRACE nova 
self.require(requires)[0].run_script(script_name, ns)
  2013-09-26 02:09:23 17100 TRACE nova   File 
/usr/local/lib/python2.7/dist-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py,
 line 1239, in run_script
  2013-09-26 02:09:23 17100 TRACE nova execfile(script_filename, namespace, 
namespace)
  2013-09-26 02:09:23 17100 TRACE nova   File 
/usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/EGG-INFO/scripts/nova-manage,
 line 1404, in module
  2013-09-26 02:09:23 17100 TRACE nova main()
  2013-09-26 02:09:23 17100 TRACE nova   File 
/usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/EGG-INFO/scripts/nova-manage,
 line 1391, in main
  2013-09-26 02:09:23 17100 TRACE nova fn(*fn_args, **fn_kwargs)
  2013-09-26 02:09:23 17100 TRACE nova   File 
/usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/EGG-INFO/scripts/nova-manage,
 line 760, in sync
  2013-09-26 02:09:23 17100 TRACE nova return migration.db_sync(version)
  2013-09-26 02:09:23 17100 TRACE nova   File 
/usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/nova/db/migration.py,
 line 32, in db_sync
  2013-09-26 02:09:23 17100 TRACE nova return IMPL.db_sync(version=version)
  2013-09-26 02:09:23 17100 TRACE nova   File 
/usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/nova/db/sqlalchemy/migration.py,
 line 79, in db_sync
  2013-09-26 02:09:23 17100 TRACE nova return 
versioning_api.upgrade(get_engine(), repository, version)
  2013-09-26 02:09:23 17100 TRACE nova   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy_migrate-0.7.2-py2.7.egg/migrate/versioning/api.py,
 line 186, in upgrade
  2013-09-26 02:09:23 17100 TRACE nova return _migrate(url, repository, 
version, upgrade=True, err=err, **opts)
  2013-09-26 02:09:23 17100 TRACE nova   File string, line 2, in _migrate
  2013-09-26 02:09:23 17100 TRACE nova   File 
/usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/nova/db/sqlalchemy/migration.py,
 line 44, in patched_with_engine
  2013-09-26 02:09:23 17100 TRACE nova return f(*a, **kw)
  2013-09-26 02:09:23 17100 TRACE nova   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy_migrate-0.7.2-py2.7.egg/migrate/versioning/api.py,
 line 366, in _migrate
  2013-09-26 02:09:23 17100 TRACE nova schema.runchange(ver, change, 
changeset.step)
  2013-09-26 02:09:23 17100 TRACE nova   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy_migrate-0.7.2-py2.7.egg/migrate/versioning/schema.py,
 line 91, in runchange
  2013-09-26 02:09:23 17100 TRACE nova change.run(self.engine, step)
  2013-09-26 02:09:23 17100 TRACE nova   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy_migrate-0.7.2-py2.7.egg/migrate/versioning/script/py.py,
 line 145, in run
  2013-09-26 02:09:23 17100 TRACE nova script_func(engine)
  2013-09-26 02:09:23 17100 TRACE nova   File 
/usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/nova/db/sqlalchemy/migrate_repo/versions/106_add_foreign_keys.py,
 line 43, in upgrade
  2013-09-26 02:09:23 17100 TRACE nova 
refcolumns=[instances.c.uuid]).create()
  2013-09-26 02:09:23 17100 TRACE nova   File 

[Yahoo-eng-team] [Bug 1007038] Re: Nova is issuing unnecessary ROLLBACK statements to MySQL

2014-09-07 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.db
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1007038

Title:
  Nova is issuing unnecessary ROLLBACK statements to MySQL

Status in Cinder:
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo Database library:
  New

Bug description:
  I'm not sure exactly where this is coming from yet, but Nova is
  issuing a ROLLBACK to MySQL after nearly every SELECT statement,
  even though I think the connection should be in autocommit mode. This
  is unnecessary and wastes time (network round trips) and resources
  (database CPU cycles).

  I suspect this is getting generated by sqlalchemy whenever a
  connection is handed back to the pool, since the number of rollbacks
  roughly coincides with the number of "SELECT 1" statements that I see
  in the logs. Those are issued by the MySQLPingListener when a
  connection is taken out of the pool.

  I opened a bug already for the unnecessary "SELECT 1" statements, but
  I'm opening this as a separate bug. If someone finds a way to fix both
  at once, that'd be great.
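The behaviour being reported can be sketched in plain Python (a toy model of a pool's "rollback on return" reset, not SQLAlchemy's actual implementation): even read-only SELECTs pay for a ROLLBACK when the connection goes back to the pool.

```python
# Toy pool that resets every returned connection with a ROLLBACK,
# counting how many rollbacks pure read-only work generates.
import sqlite3

class CountingConnection(sqlite3.Connection):
    rollbacks = 0

    def rollback(self):
        CountingConnection.rollbacks += 1
        super().rollback()

class Pool:
    def __init__(self):
        self._conn = sqlite3.connect(":memory:", factory=CountingConnection)

    def checkout(self):
        return self._conn

    def checkin(self, conn):
        conn.rollback()  # the reset-on-return step described above

pool = Pool()
for _ in range(50):
    conn = pool.checkout()
    conn.execute("SELECT 1")  # read-only work only
    pool.checkin(conn)

print(CountingConnection.rollbacks)  # 50 ROLLBACKs for 50 SELECTs
```

In a real deployment each of those ROLLBACKs is a network round trip to MySQL, which is why the bug asks for the reset to be skipped for autocommit/read-only connections.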

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1007038/+subscriptions



[Yahoo-eng-team] [Bug 1040694] Re: in-progress queries are intolerant of mysql restarts

2014-09-07 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.db
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1040694

Title:
  in-progress queries are intolerant of mysql restarts

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in OpenStack Compute (Nova):
  Triaged
Status in Oslo Database library:
  New

Bug description:
  Glance is intolerant of mysql restarts while queries are running, and the 
current ping_listener/wrap_db_error mechanism does not work for in-progress 
queries.  This was logged in Bug #954971 and a solution (using wrap_db for 
queries) merged.  That fix was removed in Bug #967887 as it did not work with 
later versions of sqlalchemy.
  An alternative approach is to use the sqlalchemy create_engine module 
parameter to override the engine behaviour and do the wrapping there (the 
solution recommended by Mike Bayer of sqlalchemy).  I will attach a patch 
which does this.
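The wrapping idea can be sketched generically (assumed names and a simulated failure; this is not the attached patch): recognize a disconnect error at execution time, reconnect, and retry the statement once.

```python
# Generic sketch of disconnect-tolerant query execution.
class DisconnectError(Exception):
    pass

def is_disconnect(exc):
    # The real fix inspects MySQL error codes (e.g. 2006, 2013) instead.
    return isinstance(exc, DisconnectError)

def execute_with_retry(execute, reconnect):
    """Run execute(); on a recognized disconnect, reconnect() and retry once."""
    try:
        return execute()
    except Exception as exc:
        if not is_disconnect(exc):
            raise
        reconnect()
        return execute()

# Simulate a query that fails once because the server restarted:
state = {"alive": False}

def query():
    if not state["alive"]:
        raise DisconnectError("MySQL server has gone away")
    return "rows"

def reconnect():
    state["alive"] = True

print(execute_with_retry(query, reconnect))  # rows
```

The advantage over a checkout-time ping is that the failure is handled exactly where it can occur: at statement execution, surviving a server restart mid-workload.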

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1040694/+subscriptions



[Yahoo-eng-team] [Bug 1363955] Re: Broken links in doc/source/devref/filter_scheduler.rst

2014-09-06 Thread Davanum Srinivas (DIMS)
Looks like this is already fixed.

** Changed in: nova
   Status: In Progress => Won't Fix

** Changed in: nova
   Status: Won't Fix => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363955

Title:
  Broken links in doc/source/devref/filter_scheduler.rst

Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  When you browse directly to
  http://docs.openstack.org/developer/nova/devref/filter_scheduler.html,
  there were broken links on the following class
  'AggregateNumInstancesFilter', 'RamWeigher'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363955/+subscriptions



Re: [openstack-dev] [oslo] instance lock and class lock

2014-09-05 Thread Davanum Srinivas
Given the code size, a BP may be an over-stretch. I'd just file a review + bug.

-- dims

On Fri, Sep 5, 2014 at 1:29 AM, Zang MingJie zealot0...@gmail.com wrote:
 Does it require a BP or a bug report to submit an oslo.concurrency patch?


 On Wed, Sep 3, 2014 at 7:15 PM, Davanum Srinivas dava...@gmail.com wrote:

 Zang MingJie,

 Can you please consider submitting a review against oslo.concurrency?


 http://git.openstack.org/cgit/openstack/oslo.concurrency/tree/oslo/concurrency

 That will help everyone who will adopt/use that library.

 thanks,
 dims

 On Wed, Sep 3, 2014 at 1:45 AM, Zang MingJie zealot0...@gmail.com wrote:
  Hi all:
 
  currently oslo provides a lock utility, but unlike other languages, it is
  a class lock, which prevents all instances from calling the function
  concurrently. IMO, oslo should provide an instance lock that locks only
  the current instance, to gain better concurrency.
 
  I have written a lock in a patch[1], please consider pick it into oslo
 
  [1]
 
  https://review.openstack.org/#/c/114154/4/neutron/openstack/common/lockutils.py
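The distinction being requested can be shown in plain Python (an illustration of the concept only, not the patch under review): a class-level lock is shared by every instance, while an instance lock serializes calls on one object only.

```python
# Class lock vs. instance lock, side by side.
import threading

class ClassLocked:
    _lock = threading.Lock()           # one lock shared by all instances

    def work(self):
        with ClassLocked._lock:
            pass                       # every instance serializes here

class InstanceLocked:
    def __init__(self):
        self._lock = threading.Lock()  # one lock per instance

    def work(self):
        with self._lock:
            pass                       # only calls on this object serialize

a, b = InstanceLocked(), InstanceLocked()
print(a._lock is b._lock)              # False: instances do not contend

c, d = ClassLocked(), ClassLocked()
print(c._lock is d._lock)              # True: all instances contend
```

With the instance lock, two different objects can run `work()` concurrently, which is the concurrency gain the proposal is after.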
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Davanum Srinivas :: http://davanum.wordpress.com








-- 
Davanum Srinivas :: http://davanum.wordpress.com



[Yahoo-eng-team] [Bug 1274523] Re: connection_trace does not work with DB2 backend

2014-09-05 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.db
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274523

Title:
  connection_trace does not work with DB2 backend

Status in Cinder:
  Confirmed
Status in OpenStack Image Registry and Delivery Service (Glance):
  Triaged
Status in Orchestration API (Heat):
  New
Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  In Progress
Status in The Oslo library incubator:
  Fix Released
Status in Oslo Database library:
  New

Bug description:
  When setting connection_trace=True, the stack trace does not get
  printed for DB2 (ibm_db).

  I have a patch that we've been using internally for this fix that I
  plan to upstream soon, and with that we can get output like this:

  2013-09-11 13:07:51.985 28151 INFO sqlalchemy.engine.base.Engine [-] SELECT 
services.created_at AS services_created_at, services.updated_at AS 
services_updated_at, services.deleted_at AS services_deleted_at, 
services.deleted AS services_deleted, services.id AS services_id, services.host 
AS services_host, services.binary AS services_binary, services.topic AS 
services_topic, services.report_count AS services_report_count, 
services.disabled AS services_disabled, services.disabled_reason AS 
services_disabled_reason
  FROM services WHERE services.deleted = ? AND services.id = ? FETCH FIRST 1 
ROWS ONLY
  2013-09-11 13:07:51,985 INFO sqlalchemy.engine.base.Engine (0, 3)
  2013-09-11 13:07:51.985 28151 INFO sqlalchemy.engine.base.Engine [-] (0, 3)
  File 
/usr/lib/python2.6/site-packages/nova/servicegroup/drivers/db.py:92 
_report_state() service.service_ref, state_catalog)
  File /usr/lib/python2.6/site-packages/nova/conductor/api.py:270 
service_update() return self._manager.service_update(context, service, values)
  File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/common.py:420 
catch_client_exception() return func(*args, **kwargs)
  File /usr/lib/python2.6/site-packages/nova/conductor/manager.py:461 
service_update() svc = self.db.service_update(context, service['id'], values)
  File /usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py:505 
service_update() with_compute_node=False, session=session)
  File /usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py:388 
_service_get() result = query.first()

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1274523/+subscriptions



[Yahoo-eng-team] [Bug 1364685] Re: VMware: Broken pipe ERROR when boot VM

2014-09-05 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.vmware
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1364685

Title:
  VMware: Broken pipe ERROR when boot VM

Status in OpenStack Compute (Nova):
  New
Status in The OpenStack VMwareAPI subTeam:
  New
Status in Oslo VMware library for OpenStack projects:
  New

Bug description:
  This error happens intermittently, but it can always be reproduced after a
  long run with multiple vmware compute nodes connected to the same vCenter
  in our test environment:

  2014-09-02 09:34:53.489 9439 ERROR nova.virt.vmwareapi.io_util [-] [Errno 32] 
Broken pipe
  2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util Traceback 
(most recent call last):
  2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util   File 
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/io_util.py, line 178, in 
_inner
  2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util 
self.output.write(data)
  2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util   File 
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/read_write_util.py, line 
138, in write
  2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util 
self.file_handle.send(data)
  2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util   File 
/usr/lib64/python2.6/httplib.py, line 759, in send
  2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util 
self.sock.sendall(str)
  2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util   File 
/usr/lib/python2.6/site-packages/eventlet/green/ssl.py, line 131, in sendall
  2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util v = 
self.send(data[count:])
  2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util   File 
/usr/lib/python2.6/site-packages/eventlet/green/ssl.py, line 107, in send
  2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util 
super(GreenSSLSocket, self).send, data, flags)
  2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util   File 
/usr/lib/python2.6/site-packages/eventlet/green/ssl.py, line 77, in 
_call_trampolining
  2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util return 
func(*a, **kw)
  2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util   File 
/usr/lib64/python2.6/ssl.py, line 174, in send
  2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util v = 
self._sslobj.write(data)
  2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util error: [Errno 
32] Broken pipe
  2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util 

  We are using the 'VMware vCenter Server Appliance', version 5.5.0.

  Normally, there are about 2000+ connections in TIME_WAIT status on port
  443 when this error happens, and 80 idle sessions, in our test
  env.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1364685/+subscriptions



[Yahoo-eng-team] [Bug 1306390] Re: os-fping API is not available by default in Ubuntu

2014-09-05 Thread Davanum Srinivas (DIMS)
Per the comment from Joe in review - we should mark this as won't fix.

It looks like Red Hat defaults to /usr/sbin/fping so this is a valid
default, and since this is a config option Ubuntu users will just have to
set the value in nova.conf.

Changing the default like this will break any user who is using fping
from /usr/bin/fping

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1306390

Title:
  os-fping API is not available by default in Ubuntu

Status in devstack - openstack dev environments:
  Invalid
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The fping utility was introduced by 
https://bugs.launchpad.net/devstack/+bug/1287468 and 
https://review.openstack.org/77749.
  But we can't use it yet on Ubuntu, because Nova's default fping_path is 
'/usr/sbin/fping' while the path on Ubuntu is '/usr/bin/fping'.
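Since the resolution above is "set the value in nova.conf", the Ubuntu-side workaround is a one-line override (paths taken from the bug report; section placement assumed):

```ini
[DEFAULT]
# Override Nova's default of /usr/sbin/fping with Ubuntu's fping location
fping_path = /usr/bin/fping
```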

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1306390/+subscriptions



Re: [openstack-dev] [oslo] library feature freeze and final releases

2014-09-02 Thread Davanum Srinivas
Doug,

The plan is good. The same criteria will mean an exception for oslo.log as well.

-- dims

On Tue, Sep 2, 2014 at 9:20 AM, Doug Hellmann d...@doughellmann.com wrote:
 Oslo team,

 We need to consider how we are going to handle the approaching feature freeze 
 deadline (4 Sept). We should, at this point, be focusing reviews on changes 
 associated with blueprints. We will have time to finish graduation work and 
 handle bugs between the freeze and the release candidate deadline, but 
 obviously it’s OK to review those now, too.

 I propose that we apply the feature freeze rules to the incubator and any 
 library that has had a release this cycle and is being used by any other 
 project, but that libraries still being graduated not be frozen. I think that 
 gives us exceptions for oslo.concurrency, oslo.serialization, and 
 oslo.middleware. All of the other libraries should be planning to freeze new 
 feature work this week.

 The app RC1 period starts 25 Sept, so we should be prepared to tag our final 
 releases of libraries before then to ensure those final releases don’t 
 introduce issues into the apps when they are released. We will apply 1.0 tags 
 to the same commits that have the last alpha in the release series for each 
 library, and then focus on fixing any bugs that come up during the release 
 candidate period. I propose that we tag our releases on 18 Sept, to give us a 
 few days to fix any issues that arise before the RC period starts.

 Please let me know if you spot any issues with this plan.

 Doug





-- 
Davanum Srinivas :: http://davanum.wordpress.com



[Yahoo-eng-team] [Bug 1173060] Re: RFE: Missing parameter --sql_connection for nova-manage

2014-08-25 Thread Davanum Srinivas (DIMS)
Both cinder-manage and glance-manage no longer support this parameter.
Closing as won't-fix.

** Changed in: nova
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1173060

Title:
   RFE: Missing parameter --sql_connection for nova-manage

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  The nova-manage command is missing the --sql_connection parameter that
  (cinder|glance)-manage has. This is useful when you need to db sync
  but don't want to modify the config file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1173060/+subscriptions



Re: [openstack-dev] [oslo] Launchpad tracking of oslo projects

2014-08-22 Thread Davanum Srinivas
Sounds like a good plan going forward @ttx.

On Fri, Aug 22, 2014 at 5:59 AM, Thierry Carrez thie...@openstack.org wrote:
 TL;DR:
 Let's create an Oslo projectgroup in Launchpad to track work across all
 Oslo libraries. In library projects, let's use milestones connected to
 published versions rather than the common milestones.

 Long version:
 As we graduate more Oslo libraries (which is awesome), tracking Oslo
 work in Launchpad (bugs, milestones...) has become more difficult.

 There used to be only one Launchpad project (oslo, which covered the
 oslo incubator). It would loosely follow the integrated milestones
 (juno-1, juno-2...), get blueprints and bugs targeted to those, get tags
 pushed around those development milestones: same as the integrated
 projects, just with no source code tarball uploads.

 When oslo.messaging graduated, a specific Launchpad project was created
 to track work around it. It still had integrated development milestones
 -- only at the end it would publish a 1.4.0 release instead of a 2014.2
 one. That approach creates two problems. First, it's difficult to keep
 track of oslo work since it now spans two projects. Second, the
 release management logic of marking bugs "Fix released" at development
 milestones doesn't really apply (bugs should rather be marked released
 when a published version of the lib carries the fix). Git tags and
 Launchpad milestones no longer align, which creates a lot of confusion.

 Then as more libraries appeared, some of them piggybacked on the general
 oslo Launchpad project (generally adding tags to point to the specific
 library), and some others created their own project. More confusion ensues.

 Here is a proposal that we could use to solve that, until StoryBoard
 gets proper milestone support and Oslo is just migrated to it:

 1. Ask for an oslo project group in Launchpad

 This would solve the first issue, by allowing to see all oslo work on
 single pages (see examples at [1] or [2]). The trade-off here is that
 Launchpad projects can't be part of multiple project groups (and project
 groups can't belong to project groups). That means that Oslo projects
 will no longer be in the openstack Launchpad projectgroup. I think the
 benefits outweigh the drawbacks here: the openstack projectgroup is not
 very strict anyway so I don't think it's used in people workflows that much.

 2. Create one project per library, adopt tag-based milestones

 Each graduated library should get its own project (part of the oslo
 projectgroup). It should use the same series/cycles as everyone else
 (juno), but it should have milestones that match the alpha release
 tags, so that you can target work to it and mark them "fix released"
 when that means the fix is released. That would solve the issue of
 misaligned tags/milestones. The trade-off here is that you lose the
 common milestone rhythm (although I guess you can still align some
 alphas to the common development milestones). That sounds a small price
 to pay to better communicate which version has which fix.

 3. Rename oslo project to oslo-incubator

 Keep the Launchpad oslo project as-is, part of the same projectgroup,
 to cover oslo-incubator work. This can keep the common development
 milestones, since the incubator doesn't do releases anyway. However,
 it has to be renamed to oslo-incubator so that it doesn't conflict
 with the projectgroup namespace. Once it no longer contains graduated
 libs, that name makes much more sense anyway.


 This plan requires Launchpad admin assistance to create a projectgroup
 and rename a project, so I'd like to get approval on it before moving to
 ask them for help.

 Comments, thoughts ?

 [1] https://blueprints.launchpad.net/openstack
 [2] https://bugs.launchpad.net/openstack

 --
 Thierry Carrez (ttx)




-- 
Davanum Srinivas :: http://davanum.wordpress.com



[Yahoo-eng-team] [Bug 1341954] Re: suds client subject to cache poisoning by local attacker

2014-08-22 Thread Davanum Srinivas (DIMS)
Fixed in oslo.vmware - Ieec9d99aa674adf5cbc9be924fef3856cf4e5d66

** Changed in: oslo.vmware
 Assignee: (unassigned) => Davanum Srinivas (DIMS) (dims-v)

** Changed in: oslo.vmware
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1341954

Title:
  suds client subject to cache poisoning by local attacker

Status in Cinder:
  In Progress
Status in Gantt:
  New
Status in OpenStack Compute (Nova):
  New
Status in Oslo VMware library for OpenStack projects:
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix
Status in OpenStack Security Notes:
  New

Bug description:
  
  The suds project appears to be largely unmaintained upstream. The default 
cache implementation stores pickled objects to a predictable path in /tmp. This 
can be used by a local attacker to redirect SOAP requests via symlinks or run a 
privilege escalation / code execution attack via a pickle exploit. 

  cinder/requirements.txt:suds>=0.4
  gantt/requirements.txt:suds>=0.4
  nova/requirements.txt:suds>=0.4
  oslo.vmware/requirements.txt:suds>=0.4

  
  The details are available here - 
  https://bugzilla.redhat.com/show_bug.cgi?id=978696
  (CVE-2013-2217)

  Although this is an unlikely attack vector steps should be taken to
  prevent this behaviour. Potential ways to fix this are by explicitly
  setting the cache location to a directory created via
  tempfile.mkdtemp(), disabling cache client.set_options(cache=None), or
  using a custom cache implementation that doesn't load / store pickled
  objects from an insecure location.
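One of the mitigations listed above can be sketched as follows (the tempfile part is real stdlib behaviour; the suds calls are shown only as commented-out assumptions about its API, not verified against a suds release):

```python
# Create a private per-process cache directory instead of letting a
# library write pickles to a predictable path under /tmp.
import os
import stat
import tempfile

def make_private_cache_dir(prefix="suds-cache-"):
    """Return a cache dir only the current user can read or traverse."""
    # mkdtemp creates the directory with mode 0700 regardless of umask,
    # so no other local user can plant files (e.g. malicious pickles).
    return tempfile.mkdtemp(prefix=prefix)

cache_dir = make_private_cache_dir()
mode = stat.S_IMODE(os.stat(cache_dir).st_mode)
print(oct(mode))  # 0o700

# With suds this would then look roughly like (not executed here):
#   from suds.client import Client
#   from suds.cache import ObjectCache
#   client = Client(url, cache=ObjectCache(location=cache_dir))
# or caching can simply be disabled:
#   client.set_options(cache=None)
```

Either variant removes the predictable shared path that makes the symlink and pickle attacks possible.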

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1341954/+subscriptions



Re: [openstack-dev] [oslo] Issues with POSIX semaphores and other locks in lockutils

2014-08-20 Thread Davanum Srinivas
Ben, +1 to the plan you outlined.

-- dims

On Wed, Aug 20, 2014 at 4:13 PM, Ben Nemec openst...@nemebean.com wrote:
 On 08/20/2014 01:03 PM, Vishvananda Ishaya wrote:
 This may be slightly off-topic but it is worth mentioning that the use of 
 threading.Lock[1]
 which was included to make the locks thread safe seems to be leading to a 
 deadlock in eventlet[2].
 It seems like we have rewritten this too many times in order to fix minor 
 pain points and are
 adding risk to a very important component of the system.

 [1] https://review.openstack.org/#/c/54581
 [2] https://bugs.launchpad.net/nova/+bug/1349452

 This is pretty much why I'm pushing to just revert to the file locking
 behavior we had up until a couple of months ago, rather than
 implementing some new shiny lock thing that will probably cause more
 subtle issues in the consuming projects.  It's become clear to me that
 lockutils is too deeply embedded in the other projects, and there are
 too many implementation details that they rely on, to make significant
 changes to its default code path.


 On Aug 18, 2014, at 2:05 PM, Pádraig Brady p...@draigbrady.com wrote:

 On 08/18/2014 03:38 PM, Julien Danjou wrote:
 On Thu, Aug 14 2014, Yuriy Taraday wrote:

 Hi Yuriy,

 […]

 Looking forward to your opinions.

 This looks like a good summary of the situation.

 I've added a solution E based on pthread, but didn't get very far about
 it for now.

 In my experience I would just go with the fcntl locks.
 They're auto unlocked and well supported, and importantly,
 supported for distributed processes.

 I'm not sure how problematic the lock_path config is TBH.
 That is adjusted automatically in certain cases where needed anyway.

 Pádraig.
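The fcntl behaviour Pádraig is referring to can be demonstrated in a few lines (a standalone POSIX sketch, not lockutils code): the lock is tied to the open file description, so the kernel releases it automatically when the holder exits, and it coordinates across processes.

```python
# fcntl-based file locking: auto-released on process death, no stale
# lock files to clean up.
import fcntl
import tempfile

lockfile = tempfile.NamedTemporaryFile()

holder = open(lockfile.name, "w")
fcntl.flock(holder, fcntl.LOCK_EX)           # acquire exclusive lock

contender = open(lockfile.name, "w")         # separate open file description
try:
    fcntl.flock(contender, fcntl.LOCK_EX | fcntl.LOCK_NB)
    print("acquired")                        # would mean no contention
except BlockingIOError:
    print("lock is held elsewhere")          # expected while holder holds it

fcntl.flock(holder, fcntl.LOCK_UN)           # or just let the process exit
```

The trade-off noted in the thread still applies: a `lock_path`-style directory is needed so cooperating processes agree on which file to lock.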










-- 
Davanum Srinivas :: http://davanum.wordpress.com



[Yahoo-eng-team] [Bug 1359092] Re: During snapshot create using nova CLI, unable to use optional arguments

2014-08-20 Thread Davanum Srinivas (DIMS)
I can't seem to run "nova --all flavor-list" or "nova --extra-specs
flavor-list" either.

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => High

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: python-novaclient
   Status: New => Confirmed

** Changed in: python-novaclient
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359092

Title:
  During snapshot create using nova CLI, unable to use optional
  arguments

Status in OpenStack Compute (Nova):
  Confirmed
Status in Python client library for Nova:
  Confirmed

Bug description:
  I am creating a VM snapshot using the nova CLI.

  The set of operations and the error are as below -

  
  1. nova cli help ===

  [raies@localhost cli_services]$ nova help image-create
  usage: nova image-create [--show] [--poll] server name

  Create a new image by taking a snapshot of a running server.

  Positional arguments:
server  Name or ID of server.
nameName of snapshot.

  Optional arguments:
--showPrint image info.
--pollReport the snapshot progress and poll until image creation is
  complete.
  [raies@localhost cli_services]$


  2. 
  [raies@localhost cli_services]$ nova list
  
+--+-+++-+---+
  | ID   | Name| Status | Task State | 
Power State | Networks  |
  
+--+-+++-+---+
  | 238eb266-5837-4544-a626-287cb3ca98c3 | test-server | ACTIVE | -  | 
Running | public=172.24.4.3 |
  
+--+-+++-+---+

  3. 
  when using the optional argument (--poll) - 

  [raies@localhost cli_services]$ nova --poll image-create 
238eb266-5837-4544-a626-287cb3ca98c3 test-snapshot
  usage: nova [--version] [--debug] [--os-cache] [--timings]
  [--timeout seconds] [--os-auth-token OS_AUTH_TOKEN]
  [--os-username auth-user-name] [--os-user-id auth-user-id]
  [--os-password auth-password]
  [--os-tenant-name auth-tenant-name]
  [--os-tenant-id auth-tenant-id] [--os-auth-url auth-url]
  [--os-region-name region-name] [--os-auth-system auth-system]
  [--service-type service-type] [--service-name service-name]
  [--volume-service-name volume-service-name]
  [--endpoint-type endpoint-type]
  [--os-compute-api-version compute-api-ver]
  [--os-cacert ca-certificate] [--insecure]
  [--bypass-url bypass-url]
  subcommand ...
  error: unrecognized arguments: --poll

  
  4. 
  when used optional argument --show - 

  
  [raies@localhost cli_services]$ nova --show image-create 
238eb266-5837-4544-a626-287cb3ca98c3 test-snapshot
  usage: nova [--version] [--debug] [--os-cache] [--timings]
  [--timeout seconds] [--os-auth-token OS_AUTH_TOKEN]
  [--os-username auth-user-name] [--os-user-id auth-user-id]
  [--os-password auth-password]
  [--os-tenant-name auth-tenant-name]
  [--os-tenant-id auth-tenant-id] [--os-auth-url auth-url]
  [--os-region-name region-name] [--os-auth-system auth-system]
  [--service-type service-type] [--service-name service-name]
  [--volume-service-name volume-service-name]
  [--endpoint-type endpoint-type]
  [--os-compute-api-version compute-api-ver]
  [--os-cacert ca-certificate] [--insecure]
  [--bypass-url bypass-url]
  subcommand ...
  error: unrecognized arguments: --show
  Try 'nova help ' for more information.

  5. 
  When tried with no optional argument - 

  [raies@localhost cli_services]$ nova image-create
  238eb266-5837-4544-a626-287cb3ca98c3 test-snapshot

  [raies@localhost cli_services]$ 
  [raies@localhost cli_services]$ 
  [raies@localhost cli_services]$ 
  [raies@localhost cli_services]$ nova image-list
  
+--+-++--+
  | ID   | Name| 
Status | Server   |
  
+--+-++--+
  | 5f785211-590c-48aa-b64c-c9a6a9b60aad | Fedora-x86_64-20-20140618-sda   | 
ACTIVE |  |
  | 4c5838f1-fce5-422d-a411-401d79f6250a | cirros-0.3.2-x86_64-uec | 
ACTIVE |  |
  | 

[Yahoo-eng-team] [Bug 1330065] Re: VMWare - Driver does not ignore Datastore in maintenance mode

2014-08-19 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.vmware
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1330065

Title:
  VMWare - Driver does not ignore Datastore in maintenance mode

Status in OpenStack Compute (Nova):
  In Progress
Status in The OpenStack VMwareAPI subTeam:
  New
Status in Oslo VMware library for OpenStack projects:
  New

Bug description:
  A datastore can be in maintenance mode. The driver does not ignore it,
  either during stats updates or while spawning instances.

  During a stats update, incorrect stats are returned if a datastore is
  in maintenance mode.

  Also, during spawning, if a datastore in maintenance mode is chosen
  because it has the largest free disk space, the spawn fails.

  The driver should ignore datastores in maintenance mode.
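The fix the report asks for amounts to a filter step before datastore selection. A minimal sketch (not the actual Nova/oslo.vmware driver code; the dict shape and field name `maintenanceMode` are assumptions modeled on the vSphere datastore summary, which reports 'normal', 'enteringMaintenance', or 'inMaintenance'):

```python
def usable_datastores(datastores):
    """Yield datastores that are not in maintenance mode.

    Each ``ds`` is assumed to be a dict-like summary with a
    ``maintenanceMode`` field, as in the vSphere datastore summary.
    """
    for ds in datastores:
        if ds.get("maintenanceMode", "normal") != "normal":
            continue  # skip datastores under (or entering) maintenance
        yield ds


def pick_datastore(datastores):
    # Choose the usable datastore with the most free space.
    candidates = list(usable_datastores(datastores))
    if not candidates:
        raise RuntimeError("no usable datastore found")
    return max(candidates, key=lambda ds: ds["freeSpace"])
```

With a filter like this applied in both the stats path and the spawn path, a datastore in maintenance mode is never reported as capacity and never selected for a clone.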

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1330065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1309753] Re: VMware: datastore_regex not used while sending disk stats

2014-08-19 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.vmware
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1309753

Title:
  VMware: datastore_regex not used while sending disk stats

Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo VMware library for OpenStack projects:
  New

Bug description:
  
  VMware VCDriver uses datastore_regex to match the datastores (the disk 
abstraction) associated with a compute host that can be used for provisioning 
instances. However, it does not apply datastore_regex while reporting disk 
stats. As a result, when this option is enabled, the resource tracker may see 
different disk usage than what is computed while spawning the instance.
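The inconsistency is that only one of the two code paths honors the regex. An illustrative sketch (function and field names are assumptions, not the driver's real API) of applying the same filter when aggregating disk stats:

```python
import re


def aggregate_disk_stats(datastores, datastore_regex=None):
    """Sum capacity/free space over datastores matching datastore_regex.

    The same filter must be used here as in datastore selection for
    provisioning, so the resource tracker and the spawn path agree.
    """
    pattern = re.compile(datastore_regex) if datastore_regex else None
    total, free = 0, 0
    for ds in datastores:
        if pattern and not pattern.match(ds["name"]):
            continue  # stats must ignore non-matching datastores too
        total += ds["capacity"]
        free += ds["freeSpace"]
    return {"total": total, "free": free}
```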

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1309753/+subscriptions



[Yahoo-eng-team] [Bug 1340564] Re: Very bad performance of concurrent spawning VMs to VCenter

2014-08-18 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.vmware
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340564

Title:
  Very bad performance of concurrent spawning VMs to VCenter

Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo VMware library for OpenStack projects:
  New

Bug description:
  When 10 users start provisioning VMs to a vCenter, OpenStack chooses the 
same datastore for all of them.
  After the first clone task completes, OpenStack recognizes that the 
datastore's space usage has increased and will choose another datastore. 
However, the next 9 provisioning tasks are still performed on the same 
datastore. Until a provisioning task on a datastore completes, OpenStack 
keeps choosing that datastore for subsequent VMs.

  This bug has a significant performance impact, because it greatly slows
  down all of the provisioning tasks. The vCenter driver should choose a
  datastore that is not busy for the provisioning tasks.
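One common mitigation for this "thundering herd on the emptiest datastore" pattern (a hedged sketch, not the fix that actually landed) is to pick randomly among the top-N candidates instead of always taking the single best one, so concurrent requests spread across datastores:

```python
import random


def pick_datastore(datastores, top_n=3):
    """Pick a datastore at random from the N with the most free space.

    Deterministic best-first selection sends every concurrent request to
    the same datastore until its stats refresh; randomizing over the
    top-N candidates spreads the clone load.
    """
    ranked = sorted(datastores, key=lambda ds: ds["freeSpace"], reverse=True)
    if not ranked:
        return None
    return random.choice(ranked[:top_n])
```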

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340564/+subscriptions



Re: [openstack-dev] [oslo][db] Nominating Mike Bayer for the oslo.db core reviewers team

2014-08-15 Thread Davanum Srinivas
+1 from me!!

On Fri, Aug 15, 2014 at 4:30 PM, Morgan Fainberg
morgan.fainb...@gmail.com wrote:

 -Original Message-
 From: Doug Hellmann d...@doughellmann.com
 Reply: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: August 15, 2014 at 13:29:15
 To: Ben Nemec openst...@nemebean.com, OpenStack Development Mailing List 
 (not for usage questions) openstack-dev@lists.openstack.org
 Subject:  Re: [openstack-dev] [oslo][db] Nominating Mike Bayer for the 
 oslo.db core reviewers team


 On Aug 15, 2014, at 10:00 AM, Ben Nemec wrote:

  On 08/15/2014 08:20 AM, Russell Bryant wrote:
  On 08/15/2014 09:13 AM, Jay Pipes wrote:
  On 08/15/2014 04:21 AM, Roman Podoliaka wrote:
  Hi Oslo team,
 
  I propose that we add Mike Bayer (zzzeek) to the oslo.db core
  reviewers team.
 
  Mike is an author of SQLAlchemy, Alembic, Mako Templates and some
  other stuff we use in OpenStack. Mike has been working on OpenStack
  for a few months contributing a lot of good patches and code reviews
  to oslo.db [1]. He has also been revising the db patterns in our
  projects and prepared a plan how to solve some of the problems we have
  [2].
 
  I think, Mike would be a good addition to the team.
 
  Uhm, yeah... +10 :)
 
  ^2 :-)
 
 
  What took us so long to do this? :-)
 
  +1 obviously.

 I did think it would be a good idea to wait a *little* while and make sure 
 we weren’t going
 to scare him off. ;-)

 Seriously, Mike has been doing a great job collaborating with the existing 
 team and helping
 us make oslo.db sane.

 +1

 Doug


 Big +1 from me. Mike has been great across the board (I know I’m not oslo 
 core)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Yahoo-eng-team] [Bug 1356955] [NEW] intermittent failure in test_security_group_get_no_instances (gate-nova-python27 ONLY)

2014-08-14 Thread Davanum Srinivas (DIMS)
Public bug reported:

logstash query: Unexpected method call
get_session.__call__(use_slave=False)

18 hits in last 7 days.

Traceback (most recent call last):
  File nova/tests/db/test_db_api.py, line 1236, in 
test_security_group_get_no_instances
security_group = db.security_group_get(self.ctxt, sid)
  File nova/db/api.py, line 1269, in security_group_get
columns_to_join)
  File nova/db/sqlalchemy/api.py, line 167, in wrapper
return f(*args, **kwargs)
  File nova/db/sqlalchemy/api.py, line 3663, in security_group_get
query = _security_group_get_query(context, project_only=True).\
  File nova/db/sqlalchemy/api.py, line 3630, in _security_group_get_query
read_deleted=read_deleted, project_only=project_only)
  File nova/db/sqlalchemy/api.py, line 237, in model_query
session = kwargs.get('session') or get_session(use_slave=use_slave)
  File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py,
 line 765, in __call__
return mock_method(*params, **named_params)
  File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py,
 line 1002, in __call__
expected_method = self._VerifyMethodCall()
  File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py,
 line 1049, in _VerifyMethodCall
expected = self._PopNextMethod()
  File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py,
 line 1035, in _PopNextMethod
raise UnexpectedMethodCallError(self, None)
UnexpectedMethodCallError: Unexpected method call 
get_session.__call__(use_slave=False) - None

** Affects: nova
 Importance: Undecided
 Status: New

** Project changed: barbican => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356955

Title:
  intermittent failure in test_security_group_get_no_instances (gate-
  nova-python27 ONLY)

Status in OpenStack Compute (Nova):
  New

Bug description:
  logstash query: Unexpected method call
  get_session.__call__(use_slave=False)

  18 hits in last 7 days.

  Traceback (most recent call last):
File nova/tests/db/test_db_api.py, line 1236, in 
test_security_group_get_no_instances
  security_group = db.security_group_get(self.ctxt, sid)
File nova/db/api.py, line 1269, in security_group_get
  columns_to_join)
File nova/db/sqlalchemy/api.py, line 167, in wrapper
  return f(*args, **kwargs)
File nova/db/sqlalchemy/api.py, line 3663, in security_group_get
  query = _security_group_get_query(context, project_only=True).\
File nova/db/sqlalchemy/api.py, line 3630, in _security_group_get_query
  read_deleted=read_deleted, project_only=project_only)
File nova/db/sqlalchemy/api.py, line 237, in model_query
  session = kwargs.get('session') or get_session(use_slave=use_slave)
File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py,
 line 765, in __call__
  return mock_method(*params, **named_params)
File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py,
 line 1002, in __call__
  expected_method = self._VerifyMethodCall()
File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py,
 line 1049, in _VerifyMethodCall
  expected = self._PopNextMethod()
File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py,
 line 1035, in _PopNextMethod
  raise UnexpectedMethodCallError(self, None)
  UnexpectedMethodCallError: Unexpected method call 
get_session.__call__(use_slave=False) - None

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356955/+subscriptions



[Yahoo-eng-team] [Bug 1325670] Re: Flavor's disk is too small for requested image.

2014-08-13 Thread Davanum Srinivas (DIMS)
*** This bug is a duplicate of bug 1330856 ***
https://bugs.launchpad.net/bugs/1330856

** This bug has been marked a duplicate of bug 1330856
   Confusing fault reasion when the flavors disk size was too small

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1325670

Title:
  Flavor's disk is too small for requested image.

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  I uploaded an image to glance and booted it. Once it landed on a
  compute node, it tried to download the image to the compute node over
  and over, failing with this exception, and never went to the error state.

  2014-06-02 10:53:19.651 ERROR nova.compute.manager 
[req-83d5f514-5650-4fd2-8c01-1680428b7419 demo demo] [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a] Instance failed to spawn
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a] Traceback (most recent call last):
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a]   File 
/opt/stack/nova/nova/compute/manager.py, line 2059, in _build_resources
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a] yield resources
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a]   File 
/opt/stack/nova/nova/compute/manager.py, line 1962, in _build_and_run_instan
  ce
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a] block_device_info=block_device_info)
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 2281, in spawn
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a] admin_pass=admin_password)
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 2655, in _create_image
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a] project_id=instance['project_id'])
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a]   File 
/opt/stack/nova/nova/virt/libvirt/imagebackend.py, line 192, in cache
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a] *args, **kwargs)
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a]   File 
/opt/stack/nova/nova/virt/libvirt/imagebackend.py, line 384, in create_image
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a] prepare_template(target=base, 
max_size=size, *args, **kwargs)
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a]   File 
/opt/stack/nova/nova/openstack/common/lockutils.py, line 249, in inner
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a] return f(*args, **kwargs)
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a]   File 
/opt/stack/nova/nova/virt/libvirt/imagebackend.py, line 182, in fetch_func_s
  ync
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a] fetch_func(target=target, *args, 
**kwargs)
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a]   File 
/opt/stack/nova/nova/virt/libvirt/utils.py, line 660, in fetch_image
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a] max_size=max_size)
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a]   File 
/opt/stack/nova/nova/virt/images.py, line 110, in fetch_to_raw
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a] raise exception.FlavorDiskTooSmall()
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a] FlavorDiskTooSmall: Flavor's disk is too 
small for requested image.
  2014-06-02 10:53:19.651 TRACE nova.compute.manager [instance: 
4cb1ec29-94d0-4ad2-8f64-eaea80dcd12a]
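The behavior reported above (endless retries instead of surfacing the failure) suggests a guard of the following shape, sketched here with assumed names rather than Nova's actual helpers: validate the image's virtual size against the flavor's disk once, and raise instead of re-downloading:

```python
GiB = 1024 ** 3


def check_flavor_disk(flavor_disk_gb, image_virtual_size_bytes):
    """Fail fast if the image cannot fit on the flavor's root disk.

    Raising here (once) lets the instance go to ERROR state instead of
    retrying the image download over and over.
    """
    if flavor_disk_gb * GiB < image_virtual_size_bytes:
        raise ValueError(
            "Flavor's disk is too small for requested image "
            "(%d GiB < %d bytes)" % (flavor_disk_gb, image_virtual_size_bytes))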

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1325670/+subscriptions



[Yahoo-eng-team] [Bug 994005] Re: nova --insecure image-delete fails

2014-08-13 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/994005

Title:
  nova --insecure image-delete fails

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When issuing:

  $ nova --debug --insecure image-delete tty-linux-kernel
  connect: (109.234.208.72, 5000)
  send: 'POST /v2.0/tokens HTTP/1.1\r\nHost: 
109.234.208.72:5000\r\nContent-Length: 120\r\ncontent-type: 
application/json\r\naccept-encoding: gzip, deflate\r\naccept: 
application/json\r\nuser-agent: python-novaclient\r\n\r\n{auth: 
{tenantName: ad...@bvox.net, passwordCredentials: {username: 
ad...@bvox.net, password: ***}}}'
  reply: 'HTTP/1.1 200 OK\r\n'
  header: Content-Type: application/json
  header: Vary: X-Auth-Token
  header: Content-Length: 2619
  header: Date: Thu, 03 May 2012 14:02:54 GMT
  connect: (109.234.208.72, 444)
  send: u'GET /v2/7bef76f5941341fcb133bfc04f156810/images/detail 
HTTP/1.1\r\nHost: 109.234.208.72:444\r\nx-auth-project-id: 
ad...@bvox.net\r\nx-auth-token: 
0e73e6f23b72443f8066ece54610ed1f\r\naccept-encoding: gzip, deflate\r\naccept: 
application/json\r\nuser-agent: python-novaclient\r\n\r\n'
  reply: 'HTTP/1.1 200 OK\r\n'
  header: X-Compute-Request-Id: req-12ccf4e9-f657-4db5-9fba-92ce985f1e82
  header: Content-Type: application/json
  header: Content-Length: 11826
  header: Date: Thu, 03 May 2012 14:02:55 GMT
  send: u'DELETE 
/v2/7bef76f5941341fcb133bfc04f156810/images/30130507-4c7a-43d6-8d47-0ac24133bf3b
 HTTP/1.1\r\nHost: 109.234.208.72:444\r\nx-auth-project-id: 
ad...@bvox.net\r\nx-auth-token: 
0e73e6f23b72443f8066ece54610ed1f\r\naccept-encoding: gzip, deflate\r\naccept: 
application/json\r\nuser-agent: python-novaclient\r\n\r\n'
  reply: 'HTTP/1.0 501 Not Implemented\r\n'
  header: Content-Type: text/html
  header: Content-Length: 28
  header: Expires: now
  header: Pragma: no-cache
  header: Cache-control: no-cache,no-store
  DEBUG (shell:416) n/a (HTTP 501)
  Traceback (most recent call last):
File /usr/lib/python2.7/dist-packages/novaclient/shell.py, line 413, in 
main
  OpenStackComputeShell().main(sys.argv[1:])
File /usr/lib/python2.7/dist-packages/novaclient/shell.py, line 364, in 
main
  args.func(self.cs, args)
File /usr/lib/python2.7/dist-packages/novaclient/v1_1/shell.py, line 444, 
in do_image_delete
  image.delete()
File /usr/lib/python2.7/dist-packages/novaclient/v1_1/images.py, line 22, 
in delete
  self.manager.delete(self)
File /usr/lib/python2.7/dist-packages/novaclient/v1_1/images.py, line 60, 
in delete
  self._delete(/images/%s % base.getid(image))
File /usr/lib/python2.7/dist-packages/novaclient/base.py, line 166, in 
_delete
  resp, body = self.api.client.delete(url)
File /usr/lib/python2.7/dist-packages/novaclient/client.py, line 145, in 
delete
  return self._cs_request(url, 'DELETE', **kwargs)
File /usr/lib/python2.7/dist-packages/novaclient/client.py, line 124, in 
_cs_request
  **kwargs)
File /usr/lib/python2.7/dist-packages/novaclient/client.py, line 107, in 
request
  raise exceptions.from_response(resp, body)
  HTTPNotImplemented: n/a (HTTP 501)
  ERROR: n/a (HTTP 501)

  If we use the internalURL endpoint the comand works as expected,. Our
  glance SSL endpoint currently uses an invalid certificate,  so the
  --insecure flag should be also used to ignore the invalid SSL
  certificate when connecting to glance URL and not only to keystone.

  $ nova --debug --insecure --endpoint_type internalURL  image-delete 
tty-linux-kernel
  connect: (109.234.208.72, 5000)
  send: 'POST /v2.0/tokens HTTP/1.1\r\nHost: 
109.234.208.72:5000\r\nContent-Length: 120\r\ncontent-type: 
application/json\r\naccept-encoding: gzip, deflate\r\naccept: 
application/json\r\nuser-agent: python-novaclient\r\n\r\n{auth: 
{tenantName: ad...@bvox.net, passwordCredentials: {username: 
ad...@bvox.net, password: Klock09}}}'
  reply: 'HTTP/1.1 200 OK\r\n'
  header: Content-Type: application/json
  header: Vary: X-Auth-Token
  header: Content-Length: 2619
  header: Date: Thu, 03 May 2012 14:08:45 GMT
  connect: (10.130.0.16, 8774)
  send: u'GET /v2/7bef76f5941341fcb133bfc04f156810/images/detail 
HTTP/1.1\r\nHost: 10.130.0.16:8774\r\nx-auth-project-id: 
ad...@bvox.net\r\nx-auth-token: 
038914aaa66e43259ec52ec391051b41\r\naccept-encoding: gzip, deflate\r\naccept: 
application/json\r\nuser-agent: python-novaclient\r\n\r\n'
  reply: 'HTTP/1.1 200 OK\r\n'
  header: X-Compute-Request-Id: req-95a86170-6907-4f3a-a972-5803873e989e
  header: Content-Type: application/json
  header: Content-Length: 11758
  header: Date: Thu, 03 May 2012 14:08:45 GMT
  send: u'DELETE 
/v2/7bef76f5941341fcb133bfc04f156810/images/30130507-4c7a-43d6-8d47-0ac24133bf3b
 HTTP/1.1\r\nHost: 10.130.0.16:8774\r\nx-auth-project-id: 
ad...@bvox.net\r\nx-auth-token: 

[Yahoo-eng-team] [Bug 1356003] Re: Tempest test failure for FloatingIPDetailsTestJSON.test_list_floating_ip_pools

2014-08-13 Thread Davanum Srinivas (DIMS)
** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1356003

Title:
  Tempest test failure for
  FloatingIPDetailsTestJSON.test_list_floating_ip_pools

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  http://logs.openstack.org/39/94439/17/check/check-tempest-dsvm-full-
  icehouse/54f9d43

  Traceback (most recent call last):
File tempest/test.py, line 128, in wrapper
  return f(self, *func_args, **func_kwargs)
File tempest/api/compute/floating_ips/test_list_floating_ips.py, line 78, 
in test_list_floating_ip_pools
  resp, floating_ip_pools = self.client.list_floating_ip_pools()
File tempest/services/compute/json/floating_ips_client.py, line 113, in 
list_floating_ip_pools
  self.validate_response(schema.floating_ip_pools, resp, body)
File tempest/common/rest_client.py, line 578, in validate_response
  raise exceptions.InvalidHTTPResponseBody(msg)
  InvalidHTTPResponseBody: HTTP response body is invalid json or xml
  Details: HTTP response body is invalid ({u'name': u'public'} is not of type 
'string'

  Failed validating 'type' in 
schema['properties']['floating_ip_pools']['items']['properties']['name']:
  {'type': 'string'}

  On instance['floating_ip_pools'][0]['name']:
  {u'name': u'public'})

  
  
http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiSFRUUCByZXNwb25zZSBib2R5IGlzIGludmFsaWQganNvbiBvciB4bWxcIiIsInRpbWVmcmFtZSI6IjM2MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsIm9mZnNldCI6MCwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDc4NzEyNTY4OTl9

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1356003/+subscriptions



[Yahoo-eng-team] [Bug 1276207] Re: vmware driver does not validate server certificates

2014-08-12 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.vmware
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276207

Title:
  vmware driver does not validate server certificates

Status in Cinder:
  In Progress
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo VMware library for OpenStack projects:
  New

Bug description:
  The VMware driver establishes connections to vCenter over HTTPS, yet
  the vCenter server certificate is not verified as part of the
  connection process.  I know this because my vCenter server is using a
  self-signed certificate which always fails certification verification.
  As a result, someone could use a man-in-the-middle attack to spoof the
  vcenter host to nova.

  The vmware driver has a dependency on Suds, which I believe also does
  not validate certificates because hartsock and I noticed it uses
  urllib.

  For reference, here is a link on secure connections in OpenStack:
  https://wiki.openstack.org/wiki/SecureClientConnections

  Assuming Suds is fixed to provide an option for certificate
  verification, next step would be to modify the vmware driver to
  provide an option to override invalid certificates (such as self-
  signed).  In other parts of OpenStack, there are options to bypass the
  certificate check with a insecure option set, or you could put the
  server's certificate in the CA store.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1276207/+subscriptions



[Yahoo-eng-team] [Bug 1349452] Re: apparent deadlock on lock_bridge in n-cpu

2014-08-12 Thread Davanum Srinivas (DIMS)
** Also affects: oslo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1349452

Title:
  apparent deadlock on lock_bridge in n-cpu

Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  It's not clear if n-cpu is dying trying to acquire the lock
  lock_bridge or if it's just hanging.

  http://logs.openstack.org/08/109108/1/check/check-tempest-dsvm-
  full/4417111/logs/screen-n-cpu.txt.gz

  The logs for n-cpu stop about 15 minutes before the rest of the test
  run, and all tests doing things that require the hypervisor executed
  after that point fail with different errors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1349452/+subscriptions



[Yahoo-eng-team] [Bug 1354258] Re: nova-api will go wrong if AZ name has space in it when memcach is used

2014-08-11 Thread Davanum Srinivas (DIMS)
Easily reproducible

 import memcache
 mc = memcache.Client(['192.168.1.111:11211'], debug=0)
 mc.set(some_key, Some value)
True
 print mc.get(some_key)
Some value
 mc.set(some key, Some value)
Traceback (most recent call last):
  File stdin, line 1, in module
  File 
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/memcache.py,
 line 584, in set
return self._set(set, key, val, time, min_compress_len)
  File 
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/memcache.py,
 line 804, in _set
self.check_key(key)
  File 
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/memcache.py,
 line 1062, in check_key
Control characters not allowed)
memcache.MemcachedKeyCharacterError: Control characters not allowed
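The usual workaround for this class of failure (a sketch, not necessarily the patch that landed) is to never pass raw user-visible strings such as AZ names to memcached as keys, but to hash them first, since memcached keys must be at most 250 bytes with no spaces or control characters:

```python
import hashlib


def safe_key(raw_key):
    """Map an arbitrary string to a memcached-safe key.

    A hex digest is always ASCII, contains no spaces or control
    characters, and is well under the 250-byte key limit.
    """
    return hashlib.sha1(raw_key.encode("utf-8")).hexdigest()
```

With this, `mc.set(safe_key("some key"), "Some value")` succeeds where the raw key raises `MemcachedKeyCharacterError`.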

** Also affects: oslo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1354258

Title:
  nova-api will go wrong if AZ name has space in it when memcach is used

Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  Description:
  1. memcache is enabled
  2. AZ name has space in it such as vmware region

  Then the nova-api will go wrong:
  [root@rs-144-1 init.d]# nova list
  ERROR: The server has either erred or is incapable of performing the 
requested operation. (HTTP 500) (Request-ID: 
req-a26c1fd3-ce08-4875-aacf-f8db8f73b089)

  Reason:
  Memcache retrieves the AZ name as a key and checks it. It raises an error if 
there are unexpected characters in the key.

  LOG in /var/log/api.log

  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/availability_zones.py, line 145, in 
get_instance_availability_zone
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack az = 
cache.get(cache_key)
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/memcache.py, line 898, in get
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack return 
self._get('get', key)
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/memcache.py, line 847, in _get
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack self.check_key(key)
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/memcache.py, line 1065, in check_key
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack #Control 
characters not allowed)
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack 
MemcachedKeyCharacterError: Control characters not allowed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1354258/+subscriptions



[Yahoo-eng-team] [Bug 1351350] [NEW] Warnings and Errors in Document generation

2014-08-01 Thread Davanum Srinivas (DIMS)
.
2014-08-01 03:40:18.928 | 
/home/jenkins/workspace/gate-nova-docs/nova/scheduler/filters/trusted_filter.py:docstring
 of nova.scheduler.filters.trusted_filter:10: WARNING: Inline interpreted text 
or phrase reference start-string without end-string.
2014-08-01 03:40:18.930 | 
/home/jenkins/workspace/gate-nova-docs/nova/scheduler/filters/trusted_filter.py:docstring
 of nova.scheduler.filters.trusted_filter:10: WARNING: Inline interpreted text 
or phrase reference start-string without end-string.
2014-08-01 03:40:18.933 | 
/home/jenkins/workspace/gate-nova-docs/nova/scheduler/filters/trusted_filter.py:docstring
 of nova.scheduler.filters.trusted_filter:20: WARNING: Inline interpreted text 
or phrase reference start-string without end-string.
2014-08-01 03:40:18.933 | 
/home/jenkins/workspace/gate-nova-docs/nova/tests/api/openstack/compute/plugins/v3/test_servers.py:docstring
 of 
nova.tests.api.openstack.compute.plugins.v3.test_servers.ServersAllExtensionsTestCase:11:
 ERROR: Unexpected indentation.
2014-08-01 03:40:18.934 | 
/home/jenkins/workspace/gate-nova-docs/nova/tests/api/openstack/compute/plugins/v3/test_servers.py:docstring
 of 
nova.tests.api.openstack.compute.plugins.v3.test_servers.ServersAllExtensionsTestCase:13:
 ERROR: Unexpected indentation.
2014-08-01 03:40:18.934 | 
/home/jenkins/workspace/gate-nova-docs/nova/tests/api/openstack/compute/test_servers.py:docstring
 of 
nova.tests.api.openstack.compute.test_servers.ServersAllExtensionsTestCase:11: 
ERROR: Unexpected indentation.
2014-08-01 03:40:18.935 | 
/home/jenkins/workspace/gate-nova-docs/nova/tests/api/openstack/compute/test_servers.py:docstring
 of 
nova.tests.api.openstack.compute.test_servers.ServersAllExtensionsTestCase:13: 
WARNING: Definition list ends without a blank line; unexpected unindent.
2014-08-01 03:40:18.936 | 
/home/jenkins/workspace/gate-nova-docs/nova/tests/compute/test_resource_tracker.py:docstring
 of nova.tests.compute.test_resource_tracker.NoInstanceTypesInSysMetadata:3: 
ERROR: Unexpected indentation.
2014-08-01 03:40:18.937 | 
/home/jenkins/workspace/gate-nova-docs/nova/tests/compute/test_resource_tracker.py:docstring
 of nova.tests.compute.test_resource_tracker.NoInstanceTypesInSysMetadata:4: 
WARNING: Block quote ends without a blank line; unexpected unindent.
2014-08-01 03:40:18.938 | 
/home/jenkins/workspace/gate-nova-docs/nova/tests/db/test_migrations.py:docstring
 of nova.tests.db.test_migrations:21: ERROR: Unexpected indentation.
2014-08-01 03:40:18.940 | 
/home/jenkins/workspace/gate-nova-docs/nova/tests/db/test_migrations.py:docstring
 of nova.tests.db.test_migrations:22: WARNING: Block quote ends without a blank 
line; unexpected unindent.
2014-08-01 03:40:18.941 | 
/home/jenkins/workspace/gate-nova-docs/nova/tests/db/test_migrations.py:docstring
 of nova.tests.db.test_migrations:24: ERROR: Unexpected indentation.
2014-08-01 03:40:18.942 | 
/home/jenkins/workspace/gate-nova-docs/nova/tests/image_fixtures.py:docstring 
of nova.tests.image_fixtures.get_image_fixtures:10: SEVERE: Unexpected section 
title.
2014-08-01 03:40:18.942 |
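Most of the warnings above come from malformed reStructuredText in docstrings. A minimal illustration (hypothetical docstrings, not nova's actual ones) of the common cases:

```python
# Hypothetical docstrings, not nova's actual ones. "Unexpected
# indentation" / "Block quote ends without a blank line" usually mean
# an indented block with no blank line separating it from the text
# around it; "Inline interpreted text ... start-string without
# end-string" usually means an unclosed backtick.

BROKEN = '''Summary line.
    indented text with no preceding blank line -- docutils reports
    "Unexpected indentation" here.
Back to the left margin with no blank line -- "unexpected unindent".
An unclosed `backtick also trips the start-string warning.
'''

FIXED = '''Summary line.

::

    the literal block is now introduced by "::" and separated by
    blank lines, and inline literals like ``backtick`` are closed.
'''
```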

** Affects: nova
 Importance: Undecided
 Assignee: Davanum Srinivas (DIMS) (dims-v)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1351350

Title:
  Warnings and Errors in Document generation

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Just pick any recent docs build and you will see a ton of issues:

  Example from:
  
http://logs.openstack.org/46/46/1/check/gate-nova-docs/4f3e8c4/console.html

  2014-08-01 03:40:18.805 | 
/home/jenkins/workspace/gate-nova-docs/nova/api/openstack/compute/contrib/hosts.py:docstring
 of nova.api.openstack.compute.contrib.hosts.HostController.index:3: ERROR: 
Unexpected indentation.
  2014-08-01 03:40:18.806 | 
/home/jenkins/workspace/gate-nova-docs/nova/api/openstack/compute/contrib/hosts.py:docstring
 of nova.api.openstack.compute.contrib.hosts.HostController.index:5: WARNING: 
Block quote ends without a blank line; unexpected unindent.
  2014-08-01 03:40:18.806 | 
/home/jenkins/workspace/gate-nova-docs/nova/api/openstack/compute/plugins/v3/hosts.py:docstring
 of nova.api.openstack.compute.plugins.v3.hosts.HostController.index:6: 
WARNING: Block quote ends without a blank line; unexpected unindent.
  2014-08-01 03:40:18.806 | 
/home/jenkins/workspace/gate-nova-docs/nova/compute/resource_tracker.py:docstring
 of nova.compute.resource_tracker.ResourceTracker.resize_claim:7: ERROR: 
Unexpected indentation.
  2014-08-01 03:40:18.807 | 
/home/jenkins/workspace/gate-nova-docs/nova/compute/resource_tracker.py:docstring
 of nova.compute.resource_tracker.ResourceTracker.resize_claim:8: WARNING: 
Block quote ends without a blank line; unexpected unindent.
  2014-08-01 03:40:18.847 | 
/home/jenkins/workspace/gate-nova-docs/nova/db/sqlalchemy/api.py:docstring

[Yahoo-eng-team] [Bug 1351127] [NEW] Exception in string format operation

2014-07-31 Thread Davanum Srinivas (DIMS)
Public bug reported:

tox -e docs shows the following error

2014-07-31 22:20:04.961 23265 ERROR nova.exception 
[req-ad8a37c6-85ae-4218-b8e6-d15eda5a1553 None] Exception in string format 
operation
2014-07-31 22:20:04.961 23265 TRACE nova.exception Traceback (most recent call 
last):
2014-07-31 22:20:04.961 23265 TRACE nova.exception   File 
/Users/dims/openstack/nova/nova/exception.py, line 118, in __init__
2014-07-31 22:20:04.961 23265 TRACE nova.exception message = self.msg_fmt % 
kwargs
2014-07-31 22:20:04.961 23265 TRACE nova.exception KeyError: u'flavor_id'
2014-07-31 22:20:04.961 23265 TRACE nova.exception 
2014-07-31 22:20:04.962 23265 ERROR nova.exception 
[req-ad8a37c6-85ae-4218-b8e6-d15eda5a1553 None] reason: 
2014-07-31 22:20:04.962 23265 ERROR nova.exception 
[req-ad8a37c6-85ae-4218-b8e6-d15eda5a1553 None] code: 404
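The KeyError comes from %-formatting a message template whose placeholder has no matching kwarg. A minimal sketch of the pattern (assumption: simplified stand-in for nova/exception.py, not the real code):

```python
# Simplified stand-in for the formatting in nova/exception.py
# (assumption, not the actual code): "%(flavor_id)s" in the template
# has no matching kwarg, so the % operator raises KeyError: u'flavor_id'.
msg_fmt = "Flavor %(flavor_id)s could not be found."

def format_message(fmt, **kwargs):
    try:
        return fmt % kwargs
    except KeyError:
        # Fall back to the raw template rather than masking the
        # original error with a formatting failure.
        return fmt
```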

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1351127

Title:
  Exception in string format operation

Status in OpenStack Compute (Nova):
  New

Bug description:
  tox -e docs shows the following error

  2014-07-31 22:20:04.961 23265 ERROR nova.exception 
[req-ad8a37c6-85ae-4218-b8e6-d15eda5a1553 None] Exception in string format 
operation
  2014-07-31 22:20:04.961 23265 TRACE nova.exception Traceback (most recent 
call last):
  2014-07-31 22:20:04.961 23265 TRACE nova.exception   File 
/Users/dims/openstack/nova/nova/exception.py, line 118, in __init__
  2014-07-31 22:20:04.961 23265 TRACE nova.exception message = self.msg_fmt 
% kwargs
  2014-07-31 22:20:04.961 23265 TRACE nova.exception KeyError: u'flavor_id'
  2014-07-31 22:20:04.961 23265 TRACE nova.exception 
  2014-07-31 22:20:04.962 23265 ERROR nova.exception 
[req-ad8a37c6-85ae-4218-b8e6-d15eda5a1553 None] reason: 
  2014-07-31 22:20:04.962 23265 ERROR nova.exception 
[req-ad8a37c6-85ae-4218-b8e6-d15eda5a1553 None] code: 404

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1351127/+subscriptions



[openstack-dev] [all] oslo.utils 0.1.1 released

2014-07-29 Thread Davanum Srinivas
The Oslo team is pleased to announce the first release of oslo.utils,
the library that replaces several utils modules from oslo-incubator:
https://github.com/openstack/oslo.utils/tree/master/oslo/utils

The new library has been uploaded to PyPI, and there is a changeset in
the queue to update the global requirements list and our package mirror:
https://review.openstack.org/#/c/110380/

Documentation for the library is available on our developer docs site:
http://docs.openstack.org/developer/oslo.utils/

The spec for the graduation blueprint includes some advice for
migrating to the new library:
http://git.openstack.org/cgit/openstack/oslo-specs/tree/specs/juno/graduate-oslo-utils.rst

Please report bugs using the Oslo bug tracker in launchpad:
http://bugs.launchpad.net/oslo

Thanks to everyone who helped with reviews and patches to make this
release possible!

Thanks,
dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Yahoo-eng-team] [Bug 1334222] Re: Multiple nova network floating IP tests failed

2014-07-28 Thread Davanum Srinivas (DIMS)
I don't see any recent hits in logstash. Closing as invalid.

thanks,
dims

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334222

Title:
  Multiple nova network floating IP tests failed

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  http://logs.openstack.org/23/95723/8/gate/gate-nova-
  python26/6856934/console.html

  https://review.openstack.org/#/c/95723/

  
  2014-06-23 20:02:02.020 | Traceback (most recent call last):
  2014-06-23 20:02:02.021 |   File nova/tests/api/ec2/test_cloud.py, line 
350, in test_associate_disassociate_address
  2014-06-23 20:02:02.021 | public_ip=address)
  2014-06-23 20:02:02.021 |   File nova/api/ec2/cloud.py, line 1285, in 
disassociate_address
  2014-06-23 20:02:02.021 | address=public_ip)
  2014-06-23 20:02:02.022 |   File nova/network/api.py, line 46, in wrapped
  2014-06-23 20:02:02.022 | return func(self, context, *args, **kwargs)
  2014-06-23 20:02:02.022 |   File nova/network/base_api.py, line 61, in 
wrapper
  2014-06-23 20:02:02.023 | res = f(self, context, *args, **kwargs)
  2014-06-23 20:02:41.715 |   File nova/network/api.py, line 212, in 
disassUnimplemented block at relaxng.c:3824
  2014-06-23 20:07:25.172 | ociate_floating_ip
  2014-06-23 20:07:25.172 | affect_auto_assigned)
  2014-06-23 20:07:25.173 |   File nova/utils.py, line 952, in wrapper
  2014-06-23 20:07:25.173 | return func(*args, **kwargs)
  2014-06-23 20:07:25.173 |   File 
/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/server.py,
 line 139, in inner
  2014-06-23 20:07:25.174 | return func(*args, **kwargs)
  2014-06-23 20:07:25.174 |   File nova/network/floating_ips.py, line 424, in 
disassociate_floating_ip
  2014-06-23 20:07:25.174 | raise 
exception.FloatingIpNotAssociated(address=floating_address)
  2014-06-23 20:07:25.175 | FloatingIpNotAssociated: Floating ip 10.10.10.10 is 
not associated.
  2014-06-23 20:07:25.175 | 
==
  2014-06-23 20:07:25.175 | FAIL: 
nova.tests.db.test_db_api.FloatingIpTestCase.test_floating_ip_disassociate
  2014-06-23 20:07:25.176 | tags: worker-6
  2014-06-23 20:07:25.176 | 
--
  2014-06-23 20:07:25.176 | Empty attachments:
  2014-06-23 20:07:25.176 |   pythonlogging:''
  2014-06-23 20:07:25.177 |   stderr
  2014-06-23 20:07:25.177 |   stdout
  2014-06-23 20:07:25.177 | 
  2014-06-23 20:07:25.178 | Traceback (most recent call last):
  2014-06-23 20:07:25.178 |   File nova/tests/db/test_db_api.py, line 3939, 
in test_floating_ip_disassociate
  2014-06-23 20:07:25.178 | self.assertEqual(fixed.address, fixed_addr)
  2014-06-23 20:07:25.179 | AttributeError: 'NoneType' object has no attribute 
'address'
  2014-06-23 20:07:25.179 | 
==
  2014-06-23 20:07:25.179 | FAIL: 
nova.tests.db.test_db_api.FloatingIpTestCase.test_floating_ip_fixed_ip_associate
  2014-06-23 20:07:25.179 | tags: worker-6
  2014-06-23 20:07:25.180 | 
--
  2014-06-23 20:07:25.180 | Empty attachments:
  2014-06-23 20:07:25.180 |   pythonlogging:''
  2014-06-23 20:07:25.181 |   stderr
  2014-06-23 20:07:25.181 |   stdout
  2014-06-23 20:07:25.181 | 
  2014-06-23 20:07:25.182 | Traceback (most recent call last):
  2014-06-23 20:07:25.182 |   File nova/tests/db/test_db_api.py, line 3881, 
in test_floating_ip_fixed_ip_associate
  2014-06-23 20:07:25.182 | self.assertEqual(fixed_ip.id, 
updated_float_ip.fixed_ip_id)
  2014-06-23 20:07:25.182 |   File 
/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py,
 line 321, in assertEqual
  2014-06-23 20:07:25.183 | self.assertThat(observed, matcher, message)
  2014-06-23 20:07:25.183 |   File 
/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py,
 line 406, in assertThat
  2014-06-23 20:07:25.183 | raise mismatch_error
  2014-06-23 20:07:25.184 | MismatchError: 1 != None
  2014-06-23 20:07:25.184 | 
==
  2014-06-23 20:07:25.184 | FAIL: 
nova.tests.db.test_db_api.FloatingIpTestCase.test_floating_ip_get_by_fixed_address
  2014-06-23 20:07:25.185 | tags: worker-1
  2014-06-23 20:07:25.185 | 
--
  2014-06-23 20:07:25.185 | Empty attachments:
  2014-06-23 20:07:25.186 |   pythonlogging:''
  2014-06-23 20:07:25.186 |   stderr
  2014-06-23 20:07:25.186 |   stdout
  2014-06-23 20:07:25.186 | 
  2014-06-23 20:07:25.187 | Traceback (most recent call last):
  2014-06-23 20:07:25.187 

[Yahoo-eng-team] [Bug 1290566] Re: nova detach results in TypeError: 'NoneType' object is unsubscriptable

2014-07-28 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290566

Title:
  nova detach results in  TypeError: 'NoneType' object is
  unsubscriptable

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When detaching multiple volumes from multiple instances the stack
  trace below was encountered. The db shows a device name present for
  all volumes attempted to be detached (also below).

  A similar stack trace was found at:
  http://pastebin.com/sDR2AVGH

  
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py, line 461, 
in _process_data
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp **args)
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py, 
line 172, in dispatch
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/exception.py, line 93, in wrapped
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp 
payload)
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/exception.py, line 76, in wrapped
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 247, in 
decorated_function
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp pass
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 233, in 
decorated_function
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 275, in 
decorated_function
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 262, in 
decorated_function
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 3784, in 
detach_volume
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp 
self._detach_volume(context, instance, bdm)
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 3722, in 
_detach_volume
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp mp = 
bdm['device_name']
  2014-03-10 07:59:38.220 22304 TRACE nova.openstack.common.rpc.amqp TypeError: 
'NoneType' object is unsubscriptable
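The TypeError means the block device mapping lookup returned None and the code subscripted it anyway. A minimal sketch of a guard (hypothetical, not nova's actual fix):

```python
# Hypothetical guard, not nova's actual fix: bdm is None when no block
# device mapping matches the volume, so bdm['device_name'] raises
# "TypeError: 'NoneType' object is unsubscriptable" (Python 2.6 wording).
def get_mountpoint(bdm):
    if bdm is None:
        raise ValueError("volume is not attached: no block device mapping")
    return bdm['device_name']
```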

  [root@ci-5254003B9180 nova]# nova list
  
+--+--+++-+--+
  | ID   | Name 
| Status | Task State | Power State | Networks |
  
+--+--+++-+--+
  | 1d80f083-d64b-492e-90fe-f92da31efb07 | 
3nsg-T5-313eff9a-7958-4a28-9188-7c9ac6ab29f8 | ACTIVE | None   | Running
 | public=16.125.108.48 |
  | 89a8c95c-9831-4645-acaf-3879b4ac9046 | 
3nsg-T5-4b06bfe3-2507-4a5c-8d47-9d66af1b1383 | ACTIVE | None   | Running
 | public=16.125.108.47 |
  | 5c6fdac4-01e9-4e1c-9d15-dde9bc4f1b77 | 
3nsg-T5-893f6fd3-93a7-4abb-91cf-5fee1474746d | ACTIVE | None   | Running
 | public=16.125.108.51 |
  
+--+--+++-+--+

  [root@ci-5254003B9180 nova]# cinder list
  
+--+---+-+--+-+--+-+
  |  ID  |   Status  | Display 
Name| Size | Volume Type | Bootable | Attached to |
  

Re: [openstack-dev] [keystone][oslo][ceilometer] Moving PyCADF from the Oslo program to Identity (Keystone)

2014-07-25 Thread Davanum Srinivas
+1 to move pycadf to Identity program.

-- dims

On Fri, Jul 25, 2014 at 3:18 PM, Doug Hellmann d...@doughellmann.com wrote:

 On Jul 25, 2014, at 2:09 PM, Dolph Mathews dolph.math...@gmail.com wrote:

 Hello everyone,

 This is me waving my arms around trying to gather feedback on a change in
 scope that seems agreeable to a smaller audience. Most recently, we've
 discussed this in both the Keystone [1] and Oslo [2] weekly meetings.

 tl;dr it seems to make sense to move the PyCADF library from the oslo
 program to the Identity program, and increase the scope of the Identity
 program's mission statement accordingly.

 I've included ceilometer on this thread since I believe it was originally
 proposed that PyCADF be included in that program, but was subsequently
 rejected.

 Expand scope of the Identity program to include auditing:
 https://review.openstack.org/#/c/109664/


 I think this move makes sense. It provides a good home with interested
 contributors for PyCADF, and if it makes it easier for the Identity team to
 manage cross-repository changes then that’s a bonus.

 Before we move ahead, I would like to hear from the other current pycadf and
 oslo team members, especially Gordon since he is the primary maintainer.

 Doug


 As a closely related but subsequent (and dependent) change, I've proposed
 renaming the Identity program to better reflect the new scope:
 https://review.openstack.org/#/c/108739/


 The commit messages on these two changes hopefully explain the reasoning
 behind each change, if it's not already self-explanatory. Although only the
 TC has voting power in this repo, your review comments and replies on this
 thread are equally welcome.

 As an important consequence, Doug suggested maintaining pycadf-core [3] as a
 discrete core group focused on the library during today's oslo meeting. If
 any other program/project has gone through a similar process, I'd be
 interested in hearing about the experience if there's anything we can learn
 from. Otherwise, Doug's suggestion sounds like a reasonable approach to me.

 [1]
 http://eavesdrop.openstack.org/meetings/keystone/2014/keystone.2014-07-22-18.02.html

 [2]
 http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-07-25-16.00.html

 [3] https://review.openstack.org/#/admin/groups/192,members

 Thanks!

 -Dolph




-- 
Davanum Srinivas :: http://davanum.wordpress.com



[Yahoo-eng-team] [Bug 1348368] [NEW] ERROR(s) and WARNING(s) during tox -e docs

2014-07-24 Thread Davanum Srinivas (DIMS)
/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:606: WARNING: undefined label: 
relationship_primaryjoin (if the link has no caption the label must precede a 
section header)
   1 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:61: WARNING: undefined label: 
declarative_configuring_relationships (if the link has no caption the label 
must precede a section header)
   1 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:621: WARNING: undefined label: 
unitofwork_cascades (if the link has no caption the label must precede a 
section header)
   1 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:645: WARNING: undefined label: 
relationships_one_to_one (if the link has no caption the label must precede a 
section header)
   1 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:77: WARNING: undefined label: 
association_pattern (if the link has no caption the label must precede a 
section header)
   1 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:88: WARNING: undefined label: 
relationships_many_to_many (if the link has no caption the label must precede a 
section header)
   1 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:91: WARNING: undefined label: 
orm_tutorial_many_to_many (if the link has no caption the label must precede a 
section header)
   1 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:94: WARNING: undefined label: 
self_referential_many_to_many (if the link has no caption the label must 
precede a section header)
   1 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:97: WARNING: undefined label: 
declarative_many_to_many (if the link has no caption the label must precede a 
section header)
   1 WARNING: dot command 'dot' cannot be run (needed for graphviz output), 
check the graphviz_dot setting
   2 checking consistency... 
/Users/dims/openstack/nova/doc/source/man/nova-all.rst:: WARNING: document 
isn't included in any toctree
   1 copying static files... WARNING: html_static_path entry 
u'/Users/dims/openstack/nova/doc/source/_static' does not exist

** Affects: nova
 Importance: Medium
 Assignee: Davanum Srinivas (DIMS) (dims-v)
 Status: New

** Attachment added: Full logs from tox -e docs run
   https://bugs.launchpad.net/bugs/1348368/+attachment/4162204/+files/docs.log

** Changed in: nova
 Assignee: (unassigned) => Davanum Srinivas (DIMS) (dims-v)

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348368

Title:
  ERROR(s) and WARNING(s) during tox -e docs

Status in OpenStack Compute (Nova):
  New

Bug description:
  ERROR(s):

  dims@dims-mac:~/openstack/nova$ grep ERROR ~/junk/docs.log | sort | uniq -c
 2 /Users/dims/openstack/nova/nova/compute/manager.py:docstring of 
nova.compute.manager.ComputeVirtAPI.wait_for_instance_event:24: ERROR: 
Unexpected indentation.
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:100: ERROR: Unknown interpreted text 
role paramref.
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:110: ERROR: Unknown interpreted text 
role paramref.
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:135: ERROR: Unknown interpreted text 
role paramref.
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:138: ERROR: Unknown interpreted text 
role paramref.
 4 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:143: ERROR: Unknown interpreted text 
role paramref.
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:156: ERROR: Unknown interpreted text 
role paramref.
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:190: ERROR: Unknown interpreted text 
role paramref.
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:228: ERROR: Unknown interpreted text 
role paramref.
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:233: ERROR: Unknown interpreted text 
role paramref.
 4 /Users/dims/openstack/nova/nova/db/sqlalchemy
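These "Unknown interpreted text role paramref" and "undefined label" warnings stem from SQLAlchemy docstrings inherited by nova's model classes: the :paramref: role and the referenced labels are defined only by SQLAlchemy's own documentation build. A hedged conf.py sketch of one common mitigation (an assumption, not nova's actual configuration):

```python
# Sphinx conf.py fragment -- a hedged sketch, not nova's actual config.
# Linking SQLAlchemy's object inventory via intersphinx lets label
# references resolve; the SQLAlchemy-specific :paramref: role is
# registered as a plain-literal fallback so docutils stops flagging
# it as an unknown role.
from docutils import nodes
from docutils.parsers.rst import roles

intersphinx_mapping = {
    'sqlalchemy': ('https://docs.sqlalchemy.org/en/latest/', None),
}

def _paramref_role(name, rawtext, text, lineno, inliner,
                   options=None, content=None):
    # Render :paramref:`target` as inline literal text.
    return [nodes.literal(rawtext, text)], []

roles.register_local_role('paramref', _paramref_role)
```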

Re: [openstack-dev] [oslo] graduating oslo.middleware

2014-07-23 Thread Davanum Srinivas
I agree with Ben. ( I don't want to set a precedent where we make a
bunch of changes on Github and then import that code )

-- dims

On Wed, Jul 23, 2014 at 3:49 PM, Ben Nemec openst...@nemebean.com wrote:
 On 2014-07-23 13:25, gordon chung wrote:

 I left a comment on one of the commits, but in general here are my
 thoughts:
 1) I would prefer not to do things like switch to oslo.i18n outside of
 Gerrit.  I realize we don't have a specific existing policy for this, but
 doing that significant
 work outside of Gerrit is not desirable IMHO.  It needs to happen either
 before graduation or after import into Gerrit.
 2) I definitely don't want to be accepting enable [hacking check]
 changes outside Gerrit.  The github graduation step is _just_ to get the
 code in shape so it
 can be imported with the tests passing.  It's perfectly acceptable to me
 to just ignore any hacking checks during this step and fix them in Gerrit
 where, again,
 the changes can be reviewed.
 At a glance I don't see any problems with the changes that have been made,
 but I haven't looked that closely and I think it brings up some topics for
 clarification in the graduation process.


 i'm ok to revert if there are concerns. i just vaguely remember a reference
 in another oslo lib about waiting for i18n graduation but tbh i didn't
 actually check back to see what conclusion was.


 cheers,
 gord

 I have no specific concerns, but I don't want to set a precedent where we
 make a bunch of changes on Github and then import that code.  The work on
 Github should be limited to the minimum necessary to get the unit tests
 passing (basically if it's not listed in
 https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary#Manual_Fixes then
 it should happen in Gerrit).  Once that happens the project can be imported
 and any further changes made under our standard review process.  Either that
 or changes can be made in incubator before graduation and reviewed then.

 So I guess I'm a soft -1 on this for right now, but I'll defer to the other
 Oslo cores because I don't really have time to take a more detailed look at
 the repo and I don't want to be a blocker when I may not be around to
 discuss it.

 -Ben







-- 
Davanum Srinivas :: http://davanum.wordpress.com



Re: [openstack-dev] [oslo] oslo.serialization and oslo.concurrency graduation call for help

2014-07-22 Thread Davanum Srinivas
Yuriy,
Hop onto #openstack-oslo, that's where we hang out.

Ben,
I can help as well.

thanks,
-- dims

On Tue, Jul 22, 2014 at 6:38 AM, Yuriy Taraday yorik@gmail.com wrote:
 Hello, Ben.

 On Mon, Jul 21, 2014 at 7:23 PM, Ben Nemec openst...@nemebean.com wrote:

 Hi all,

 The oslo.serialization and oslo.concurrency graduation specs are both
 approved, but unfortunately I haven't made as much progress on them as I
 would like.  The serialization repo has been created and has enough acks
 to continue the process, and concurrency still needs to be started.

 Also unfortunately, I am unlikely to make progress on either over the
 next two weeks due to the tripleo meetup and vacation.  As discussed in
 the Oslo meeting last week

 (http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-07-18-16.00.log.html)
 we would like to continue work on them during that time, so Doug asked
 me to look for volunteers to pick up the work and run with it.

 The current status and next steps for oslo.serialization can be found in
 the bp:
 https://blueprints.launchpad.net/oslo/+spec/graduate-oslo-serialization

 As mentioned, oslo.concurrency isn't started and has a few more pending
 tasks, which are enumerated in the spec:

 http://git.openstack.org/cgit/openstack/oslo-specs/plain/specs/juno/graduate-oslo-concurrency.rst

 Any help would be appreciated.  I'm happy to pick this back up in a
 couple of weeks, but if someone could shepherd it along in the meantime
 that would be great!


 I would be happy to work on graduating oslo.concurrency as well as improving
 it after that. I like fiddling with OS's, threads and races :)
 I can also help to finish work on oslo.serialization (it looks like some
 steps are already finished there).

 What would be needed to start working on that? I haven't been following
 development of processes within Oslo. So I would need someone to answer
 questions as they arise.

 --

 Kind regards, Yuriy.





-- 
Davanum Srinivas :: http://davanum.wordpress.com



Re: [openstack-dev] [nova] Some help needed on these darn API sample tests

2014-07-19 Thread Davanum Srinivas
Jay,

I got this far - https://review.openstack.org/#/c/108220/ - there are
a handful of failures in ImagesControllerTest left. hope this helps.

-- dims

On Sat, Jul 19, 2014 at 4:35 PM, Jay Pipes jaypi...@gmail.com wrote:
 Getting very frustrated, hoping someone can walk me back from the cliff.

 I am trying to fix this simple bug:

 https://bugs.launchpad.net/nova/+bug/1343080

 The fix for it is ridiculously simple. It is removing this line:

 https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/views/images.py#L121

 Removing that one line breaks around 26 tests, 25 of which are related to
 the API sample tests in the nova/tests/integrated/ directory. I expected
 this, and ran the unit test suite after removing the one line above
 specifically to identify the places that would need to be changed.

 I went to all of the API sample template files in
 /nova/tests/integrated/api_samples and removed the offending piece of the
 changed URI. A total of 14 API sample template files needed to be changed
 (something quite ridiculous in my opinion):

 http://paste.openstack.org/show/87225/

 When I re-ran the tests, all of them failed again. Suffice to say, the test
 failure outputs are virtually pointless, as they have a misleading error
 message:

 http://paste.openstack.org/show/87226/

 It's not that there are extra list items in the template, which is what the
 failure message says. The problem is that the API samples from the templates
 into the /doc/api_samples/ directory were not re-generated when the
 templates changed.

 The README in the /nova/tests/integrated/api_samples/ directory has this
 useful advice:

 == start ==

 Then run the following command:

 GENERATE_SAMPLES=True tox -epy27 nova.tests.integrated
 Which will create the files on doc/api_samples.

 If new tests are added or the .tpl files are changed due to bug fixes, the
 samples must be regenerated so they are in sync with the templates, as there
 is an additional test which reloads the documentation and ensures that it's
 in sync.

 Debugging sample generation

 If a .tpl is changed, its matching .xml and .json must be removed else the
 samples won't be generated. If an entirely new extension is added, a
 directory for it must be created before its samples will be generated.

 == end ==

 So, I took the advice of the README and deleted the api samples in question
 from the /docs/api_samples/ folder:

 rm doc/api_samples/images-*
 rm doc/api_samples/OS-DCF/image-*

 And then ran the tox invocation from above:

 GENERATE_SAMPLES=True tox -epy27 nova.tests.integrated.test_api_samples

 Unfortunately, the above bombed out in testr and produced a bunch of
 garbage, which I am uncertain how to debug:

 http://paste.openstack.org/show/87227/

 Would someone mind giving me a hand on this? I'd really appreciate it. It
 really should not be this hard to make such a simple fix to the API code. :(

 Thanks in advance,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Yahoo-eng-team] [Bug 1343476] [NEW] Caught error: Timed out waiting for a reply to message ID

2014-07-17 Thread Davanum Srinivas (DIMS)
Public bug reported:

Example from:
http://logs.openstack.org/31/107131/1/check/check-tempest-dsvm-full/8b0f318/logs/screen-n-api.txt.gz?level=ERROR

2014-07-17 16:29:03.172 ERROR nova.api.openstack 
[req-bb0eadba-6d11-48b7-9726-92b879ce851c FloatingIPsTestXML-1607416206 
FloatingIPsTestXML-372584893] Caught error: Timed out waiting for a reply to 
message ID 967a3cb4801442f3976b329451d26feb
2014-07-17 16:30:03.612 ERROR nova.api.openstack 
[req-400ecef0-e034-4460-9d4b-94f257c5bf5c FloatingIPsTestXML-1607416206 
FloatingIPsTestXML-372584893] Caught error: Timed out waiting for a reply to 
message ID d85870e6982648e6be8fc3455ffa60df
2014-07-17 16:31:03.784 ERROR nova.api.openstack 
[req-abd6a54b-d503-4aa1-a1f5-30ceb6ef8cde FloatingIPsTestXML-1607416206 
FloatingIPsTestXML-372584893] Caught error: Timed out waiting for a reply to 
message ID 9697a2a506ba4f4997b1b58f31c5c248

Looks like nova-n-net keeled over just before these happened
http://logs.openstack.org/31/107131/1/check/check-tempest-dsvm-full/8b0f318/logs/screen-n-net.txt.gz

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1343476

Title:
  Caught error: Timed out waiting for a reply to message ID

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  Example from:
  
http://logs.openstack.org/31/107131/1/check/check-tempest-dsvm-full/8b0f318/logs/screen-n-api.txt.gz?level=ERROR

  2014-07-17 16:29:03.172 ERROR nova.api.openstack 
[req-bb0eadba-6d11-48b7-9726-92b879ce851c FloatingIPsTestXML-1607416206 
FloatingIPsTestXML-372584893] Caught error: Timed out waiting for a reply to 
message ID 967a3cb4801442f3976b329451d26feb
  2014-07-17 16:30:03.612 ERROR nova.api.openstack 
[req-400ecef0-e034-4460-9d4b-94f257c5bf5c FloatingIPsTestXML-1607416206 
FloatingIPsTestXML-372584893] Caught error: Timed out waiting for a reply to 
message ID d85870e6982648e6be8fc3455ffa60df
  2014-07-17 16:31:03.784 ERROR nova.api.openstack 
[req-abd6a54b-d503-4aa1-a1f5-30ceb6ef8cde FloatingIPsTestXML-1607416206 
FloatingIPsTestXML-372584893] Caught error: Timed out waiting for a reply to 
message ID 9697a2a506ba4f4997b1b58f31c5c248

  Looks like nova-n-net keeled over just before these happened
  
http://logs.openstack.org/31/107131/1/check/check-tempest-dsvm-full/8b0f318/logs/screen-n-net.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1343476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1343613] [NEW] Deadlock found when trying to get lock; try restarting transaction

2014-07-17 Thread Davanum Srinivas (DIMS)
Public bug reported:

Example URL:
http://logs.openstack.org/31/107131/1/gate/gate-grenade-dsvm/d019d8e/logs/old/screen-n-api.txt.gz?level=ERROR#_2014-07-17_20_59_37_031

Logstash query(?):
message:"Deadlock found when trying to get lock; try restarting transaction" 
AND loglevel:ERROR AND build_status:FAILURE

32 hits in 48 hours.

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1343613

Title:
  Deadlock found when trying to get lock; try restarting transaction

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  Example URL:
  
http://logs.openstack.org/31/107131/1/gate/gate-grenade-dsvm/d019d8e/logs/old/screen-n-api.txt.gz?level=ERROR#_2014-07-17_20_59_37_031

  Logstash query(?):
  message:"Deadlock found when trying to get lock; try restarting transaction" 
AND loglevel:ERROR AND build_status:FAILURE

  32 hits in 48 hours.
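The standard remedy for this MySQL error is to retry the whole transaction, which is what nova's DB layer does with a retry-on-deadlock decorator. The sketch below shows the general pattern only; `DBDeadlock` here is a stand-in for the real oslo.db exception, not the actual nova code.

```python
import functools

class DBDeadlock(Exception):
    """Stand-in for the real oslo.db DBDeadlock exception."""

def retry_on_deadlock(max_attempts=5):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_attempts:
                        raise
                    # A real implementation would also sleep with backoff
                    # before retrying the whole transaction.
        return wrapper
    return decorator

calls = {'n': 0}

@retry_on_deadlock()
def flaky_transaction():
    calls['n'] += 1
    if calls['n'] < 3:
        raise DBDeadlock('Deadlock found when trying to get lock; '
                         'try restarting transaction')
    return 'committed'

result = flaky_transaction()
print(result)  # committed, after two retried deadlocks
```

The key design point is that the retry wraps the entire transaction, since MySQL has already rolled it back when it reports the deadlock.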

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1343613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


Re: [openstack-dev] [oslo] oslo.serialization repo review

2014-07-16 Thread Davanum Srinivas
Ben,

LGTM as well. i was finally able to look :)

-- dims

On Wed, Jul 16, 2014 at 4:35 AM, Flavio Percoco fla...@redhat.com wrote:
 On 07/15/2014 07:42 PM, Ben Nemec wrote:
 And the link, since I forgot it before:
 https://github.com/cybertron/oslo.serialization


 LGTM!

 Thanks for working on this!

 On 07/14/2014 04:59 PM, Ben Nemec wrote:
 Hi oslophiles,

 I've (finally) started the graduation of oslo.serialization, and I'm up
 to the point of having a repo on github that passes the unit tests.

 I realize there is some more work to be done (e.g. replacing all of the
 openstack.common files with libs) but my plan is to do that once it's
 under Gerrit control so we can review the changes properly.

 Please take a look and leave feedback as appropriate.  Thanks!

 -Ben



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.vmware] Updates

2014-07-16 Thread Davanum Srinivas
Very cool Gary.

On Wed, Jul 16, 2014 at 5:31 AM, Gary Kotton gkot...@vmware.com wrote:
 Hi,
 I just thought it would be nice to give the community a little update about
 the current situation:

 Version is 0.4
 (https://github.com/openstack/requirements/blob/master/global-requirements.txt#L58)

 This is used by glance and ceilometer
 There is a patch in review for Nova to integrate with this -
 https://review.openstack.org/#/c/70175/.

 Current version in development will have the following highlights:

 Better support for suds faults
 Support for VC extensions – this enables for example Nova to mark a VM as
 being owned by OpenStack
 Retry mechanism if a ‘TaskInProgress’ exception is thrown

 Thanks
 Gary


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Yahoo-eng-team] [Bug 1334233] Re: compute_manager network allocation retries not handled properly

2014-07-15 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334233

Title:
  compute_manager network allocation retries not handled properly

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  in the manager.py, ComputeManager has a method
  _allocate_network_async and uses the CONF parameter
  network_allocate_retries. While this method retries, the logic used
  is not proper as listed below:

  retry_time *= 2
  if retry_time > 30:
  retry_time = 30

  This bug is filed to correct it as follows:
  if retry_time > 30:
  retry_time = 30
  else:
  retry_time *= 2

  This will avoid the calculation of retry time out when the timeout
  reaches beyond 30 sec.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1334233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


Re: [openstack-dev] [oslo] oslo.serialization repo review

2014-07-14 Thread Davanum Srinivas
w00t! will do

thanks,
dims

On Mon, Jul 14, 2014 at 5:59 PM, Ben Nemec openst...@nemebean.com wrote:
 Hi oslophiles,

 I've (finally) started the graduation of oslo.serialization, and I'm up
 to the point of having a repo on github that passes the unit tests.

 I realize there is some more work to be done (e.g. replacing all of the
 openstack.common files with libs) but my plan is to do that once it's
 under Gerrit control so we can review the changes properly.

 Please take a look and leave feedback as appropriate.  Thanks!

 -Ben

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Yahoo-eng-team] [Bug 1213149] Re: boot an instance will fail if use --hint group=XXXXX

2014-07-10 Thread Davanum Srinivas (DIMS)
*** This bug is a duplicate of bug 1303360 ***
https://bugs.launchpad.net/bugs/1303360

** This bug has been marked a duplicate of bug 1303360
   GroupAntiAffinityFilter scheduler hint still doesn't work

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1213149

Title:
  boot an instance will fail if use --hint group=XXXXX

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When using the command nova boot --image <image name> --flavor <flavor id> --hint 
group=<group uuid> vm1, booting will fail.
  After debugging, I found that there's a code error in 
nova/scheduler/filter_scheduler.py#174

  173    values = request_spec['instance_properties']['system_metadata']
  174    values.update({'group': group})
  175    values = {'system_metadata': values}

  values is not a dict, so its `update` method cannot be used.
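The failure mode is easy to reproduce in isolation. The data below is a hypothetical stand-in for `request_spec['instance_properties']['system_metadata']`, assuming it arrives as a list of key/value items, which matches the reported error.

```python
# Minimal reproduction: system_metadata arrives as a list of key/value
# items rather than a dict, so calling .update() on it raises
# AttributeError. The sample data here is illustrative only.
system_metadata = [{'key': 'image_min_ram', 'value': '0'}]
group = 'some-group-uuid'  # hypothetical group identifier

err = None
try:
    system_metadata.update({'group': group})  # lists have no update()
except AttributeError as exc:
    err = str(exc)
print(err)  # mentions "no attribute 'update'"

# One safe pattern: build a fresh dict instead of mutating the metadata.
values = {'system_metadata': system_metadata, 'group': group}
```

Wrapping the list in a new dict, rather than mutating it in place, sidesteps the type mismatch entirely.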

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1213149/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1336760] [NEW] Routes is in an unsupported or invalid wheel

2014-07-02 Thread Davanum Srinivas (DIMS)
Public bug reported:

gate-keystone-python33 and gate-oslo-incubator-python33 are failing

2014-07-02 03:43:50.194 | Cleaning up...
2014-07-02 03:43:50.194 | Routes is in an unsupported or invalid wheel
2014-07-02 03:43:50.194 | Storing debug log for failure in 
/home/jenkins/.pip/pip.log

Example url:
http://logs.openstack.org/95/99695/6/check/gate-oslo-incubator-python33/2865ef3/console.html#_2014-07-02_03_43_50_194

logstash url:
http://logstash.openstack.org/#eyJzZWFyY2giOiJcIlJvdXRlcyBpcyBpbiBhbiB1bnN1cHBvcnRlZCBvciBpbnZhbGlkIHdoZWVsXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDQzMDE2NDgxMTV9

** Affects: keystone
 Importance: Undecided
 Status: New

** Affects: oslo
 Importance: Undecided
 Status: New

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1336760

Title:
  Routes is in an unsupported or invalid wheel

Status in OpenStack Identity (Keystone):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  gate-keystone-python33 and gate-oslo-incubator-python33 are failing

  2014-07-02 03:43:50.194 | Cleaning up...
  2014-07-02 03:43:50.194 | Routes is in an unsupported or invalid wheel
  2014-07-02 03:43:50.194 | Storing debug log for failure in 
/home/jenkins/.pip/pip.log

  Example url:
  
http://logs.openstack.org/95/99695/6/check/gate-oslo-incubator-python33/2865ef3/console.html#_2014-07-02_03_43_50_194

  logstash url:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcIlJvdXRlcyBpcyBpbiBhbiB1bnN1cHBvcnRlZCBvciBpbnZhbGlkIHdoZWVsXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDQzMDE2NDgxMTV9

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1336760/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334651] Re: Nova api service outputs error messages when SIGHUP signal is sent

2014-07-02 Thread Davanum Srinivas (DIMS)
** Also affects: oslo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334651

Title:
  Nova api service outputs error messages when SIGHUP signal is sent

Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  When a SIGHUP signal is sent to the nova-api service, it stops all the
  nova-api processes and, while restarting them, it throws
  AttributeError: 'WSGIService' object has no attribute 'reset'.

  
  2014-06-24 15:52:55.185 CRITICAL nova [-] AttributeError: 'WSGIService' 
object has no attribute 'reset'

  2014-06-24 15:52:55.185 TRACE nova Traceback (most recent call last):
  2014-06-24 15:52:55.185 TRACE nova   File /usr/local/bin/nova-api, line 10, 
in <module>
  2014-06-24 15:52:55.185 TRACE nova sys.exit(main())
  2014-06-24 15:52:55.185 TRACE nova   File /opt/stack/nova/nova/cmd/api.py, 
line 56, in main
  2014-06-24 15:52:55.185 TRACE nova launcher.launch_service(server, 
workers=server.workers or 1)
  2014-06-24 15:52:55.185 TRACE nova   File 
/opt/stack/nova/nova/openstack/common/service.py, line 340, in launch_service
  2014-06-24 15:52:55.185 TRACE nova self._start_child(wrap)
  2014-06-24 15:52:55.185 TRACE nova   File 
/opt/stack/nova/nova/openstack/common/service.py, line 324, in _start_child
  2014-06-24 15:52:55.185 TRACE nova launcher.restart()
  2014-06-24 15:52:55.185 TRACE nova   File 
/opt/stack/nova/nova/openstack/common/service.py, line 145, in restart
  2014-06-24 15:52:55.185 TRACE nova self.services.restart()
  2014-06-24 15:52:55.185 TRACE nova   File 
/opt/stack/nova/nova/openstack/common/service.py, line 478, in restart
  2014-06-24 15:52:55.185 TRACE nova restart_service.reset()
  2014-06-24 15:52:55.185 TRACE nova AttributeError: 'WSGIService' object has 
no attribute 'reset'
  2014-06-24 15:52:55.185 TRACE nova 

  Steps to reproduce:
  1. Run nova-api service as daemon.
  2. Send SIGHUP signal to nova-api service
     kill -1 <parent_process_id_of_nova_api>
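The traceback shows the launcher's restart path calling `reset()` on every registered service, which is the hook `WSGIService` was missing. The sketch below mirrors the names in the traceback but is a standalone illustration, not the real nova/oslo service code.

```python
# Standalone sketch of the missing hook: on SIGHUP the launcher's restart
# path calls reset() on each registered service, so a service class
# without one raises AttributeError. Illustrative names only.

class ServiceGroup:
    def __init__(self):
        self.services = []

    def add(self, service):
        self.services.append(service)

    def restart(self):
        # This is the loop that raised in the traceback above.
        for service in self.services:
            service.reset()


class WSGIServiceSketch:
    def __init__(self):
        self.reset_count = 0

    def reset(self):
        # Whatever SIGHUP should mean for the service: re-read config,
        # reopen log files, and so on.
        self.reset_count += 1


group = ServiceGroup()
server = WSGIServiceSketch()
group.add(server)
group.restart()            # no AttributeError because reset() exists
print(server.reset_count)  # 1
```

The fix, in other words, is to give the WSGI service a `reset()` implementation (even a minimal one) so the restart loop has something to call.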

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1334651/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Bug 1254872] Re: libvirtError: Timed out during operation: cannot acquire state change lock

2014-07-01 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: Triaged => Fix Committed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1254872

Title:
  libvirtError: Timed out during operation: cannot acquire state change
  lock

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1254872/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Yahoo-eng-team] [Bug 1261644] Re: baremetal deploys can leak iscsi sessions

2014-07-01 Thread Davanum Srinivas (DIMS)
Looks like this was fixed in ironic

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261644

Title:
  baremetal deploys can leak iscsi sessions

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  There is a hole in the error handling somewhere - failed deploys /
  daemon restarts can leak sessions:

  iscsiadm -m node
  10.10.16.173:3260,1 iqn-b37386b2-fa29-4836-9ee7-f638dfca1ac9
  10.10.16.177:3260,1 iqn-7b9e5f09-7134-4e7f-92b8-31347456dd9f
  10.10.16.178:3260,1 iqn-5aa23554-913b-448d-be97-caae79b75a1b
  10.10.16.181:3260,1 iqn-a79e34e8-bca6-46e7-8f3c-ae0e6306a13e
  10.10.16.175:3260,1 iqn-627b7f63-8018-46c0-92fc-55b5abf6a1ae
  10.10.16.171:3260,1 iqn-ec3364d0-231a-4ed3-a611-de85223effc4
  10.10.16.179:3260,1 iqn-8abac231-d77d-47b9-ab37-b80da35d4410
  10.10.16.176:3260,1 iqn-87a5e9a0-6b0f-4c18-a82d-5373bd8bfad3
  10.10.16.172:3260,1 iqn-300cf804-aa47-4322-8ec0-a49c3ca121a3
  10.10.16.174:3260,1 iqn-2ad0f967-c952-4d56-b386-e05f4376fdd2
  10.10.16.171:3260,1 iqn-c4f1c80f-1a22-42b3-b984-0dee772dd44d
  10.10.16.180:3260,1 iqn-4811f3f7-0aac-4fd5-a887-4763862efc88

  and 
  sdb  8:16   0   1.8T  0 disk 
  ├─sdb1   8:17   0  1000G  0 part 
  ├─sdb2   8:18   0   7.9M  0 part 
  └─sdb3   8:19   0   500G  0 part 
  sdc  8:32   0   1.8T  0 disk 
  ├─sdc1   8:33   0  1000G  0 part 
  ├─sdc2   8:34   0   7.9M  0 part 
  └─sdc3   8:35   0   500G  0 part 

  were leaked on our undercloud node.

  Fixing should involve straight forward auditing of all codepaths to
  ensure appropriate cleanup.
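Alongside the codepath audit, leaked sessions can be cleaned up from the node-mode listing itself. This is a hedged sketch: it feeds canned output from this report through the parsing step and echoes the per-target commands instead of running them, so it is safe to try anywhere; drop the `echo`s to actually log out and delete the node records.

```shell
# Sketch of a cleanup pass over leaked sessions: turn `iscsiadm -m node`
# output into per-target logout/delete commands. Sample input is canned
# output from the bug report above.
sample='10.10.16.173:3260,1 iqn-b37386b2-fa29-4836-9ee7-f638dfca1ac9
10.10.16.177:3260,1 iqn-7b9e5f09-7134-4e7f-92b8-31347456dd9f'

cmds=$(printf '%s\n' "$sample" | while read -r portal target; do
    portal=${portal%,*}   # strip the ",tag" suffix from the portal address
    echo "iscsiadm -m node -p $portal -T $target --logout"
    echo "iscsiadm -m node -p $portal -T $target -o delete"
done)
printf '%s\n' "$cmds"
```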

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1261644/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1275500] Re: Cannot reboot instance: Network filter not found: Could not find filter

2014-07-01 Thread Davanum Srinivas (DIMS)
Still no sign of this

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1275500

Title:
  Cannot reboot instance: Network filter not found: Could not find
  filter

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The issue is seen with nova network.

  test_reboot_server_hard tempest test failed.
  
http://logs.openstack.org/58/68058/3/check/check-tempest-dsvm-full/5b5e95d/console.html
  2014-02-02 10:26:05.058 | Traceback (most recent call last):
  2014-02-02 10:26:05.058 |   File 
tempest/api/compute/servers/test_server_actions.py, line 81, in 
test_reboot_server_hard
  2014-02-02 10:26:05.058 | 
self.client.wait_for_server_status(self.server_id, 'ACTIVE')
  2014-02-02 10:26:05.058 |   File 
tempest/services/compute/json/servers_client.py, line 163, in 
wait_for_server_status
  2014-02-02 10:26:05.058 | raise_on_error=raise_on_error)
  2014-02-02 10:26:05.058 |   File tempest/common/waiters.py, line 74, in 
wait_for_server_status
  2014-02-02 10:26:05.058 | raise 
exceptions.BuildErrorException(server_id=server_id)
  2014-02-02 10:26:05.058 | BuildErrorException: Server 
31b23b53-34e4-4e77-8492-ef6c301d2477 failed to build and is in ERROR status

  
http://logs.openstack.org/58/68058/3/check/check-tempest-dsvm-full/5b5e95d/logs/screen-n-cpu.txt.gz#_2014-02-02_09_53_15_466
  2014-02-02 09:53:15.837 ERROR nova.openstack.common.rpc.amqp 
[req-05280e6c-a093-4449-ba63-c4e3a8bf8e4f ServerActionsTestJSON-465780825 
ServerActionsTestJSON-704935155] Exception during message handling
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py, line 461, in 
_process_data
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp **args)
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/openstack/common/rpc/dispatcher.py, line 172, in 
dispatch
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/exception.py, line 90, in wrapped
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp 
payload)
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/openstack/common/excutils.py, line 68, in __exit__
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/exception.py, line 73, in wrapped
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/compute/manager.py, line 241, in decorated_function
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp pass
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/openstack/common/excutils.py, line 68, in __exit__
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/compute/manager.py, line 227, in decorated_function
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/compute/manager.py, line 292, in decorated_function
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/compute/manager.py, line 269, in decorated_function
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/openstack/common/excutils.py, line 68, in __exit__
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/new/nova/nova/compute/manager.py, line 256, in decorated_function
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2014-02-02 09:53:15.837 28606 TRACE nova.openstack.common.rpc.amqp   File 

Re: [openstack-dev] [oslo] Adopting pylockfile

2014-06-23 Thread Davanum Srinivas
Julien,

Thanks. +1 from me.

On Mon, Jun 23, 2014 at 9:41 AM, Julien Danjou jul...@danjou.info wrote:
 Hi there,

 We discovered a problem in pylockfile recently, and after discussing
 with its current maintainer, it appears that more help and workforce
 would be require:

   https://github.com/smontanaro/pylockfile/issues/11#issuecomment-45634012

 Since we are using it via oslo lockutils module, I proposed to adopt
 this project under the Oslo program banner. The review to copy the
 repository to our infrastructure is up at:

   https://review.openstack.org/#/c/101911/

 Cheers,
 --
 Julien Danjou
 ;; Free Software hacker
 ;; http://julien.danjou.info

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vmware] Future of Vim and PBM in Nova: summary of IRC discussion

2014-06-20 Thread Davanum Srinivas
+1 to concentrate on oslo.vmware and thanks for the update!

On Fri, Jun 20, 2014 at 6:47 AM, Matthew Booth mbo...@redhat.com wrote:
 For anybody who missed it, we discussed the following 2 outstanding
 reviews yesterday:

 vmwareapi oslo.vmware library integration
 https://review.openstack.org/#/c/70175/

 VMware: initial support for SPBM
 https://review.openstack.org/#/c/6/

 The issue is that oslo.vmware already contains nascent support for SPBM,
 so these are really 2 patches trying to achieve the same thing by
 different, incompatible means.

 After some discussion, we agreed that we would abandon the Nova SPBM
 patch to concentrate on the oslo.vmware patch. This patch has
 languished, but Vui is going to bring it up to date. It also has an
 approved BP.

 We also agreed that we only want 1 refactor review queue. As the
 oslo.vmware patch touches so much code, it inevitably conflicts with the
 spawn refactor. Therefore we will either rebase oslo.vmware on to the
 spawn refactor, or vice versa. As both patch sets are primarily on Vui,
 he will make the call about which is least disruptive.

 Radoslav has identified some non-disruptive cleanup work which he
 originally put into the Nova SPBM patch. He will now move this into
 oslo.vmware. This cleanup will be 100% backwards compatible, requiring
 no client updates whatsoever to continue working.

 We also need to add some additional features to SPBM support in
 oslo.vmware. These will be added, ensuring they don't impact any
 existing Vim users, and the oslo.vmware version bumped.

 I am continuing to advocate a significant rewrite of the Vim API.
 However, as we aren't proposing any backwards incompatible changes at
 the moment there is no current incentive to bring this forward.

 Matt
 --
 Matthew Booth
 Red Hat Engineering, Virtualisation Team

 Phone: +442070094448 (UK)
 GPG ID:  D33C3490
 GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] oslo.utils and oslo.local

2014-06-19 Thread Davanum Srinivas
Hi,

Please review the following git repo(s):
https://github.com/dims/oslo.local
https://github.com/dims/oslo.utils

the corresponding specs are here:
https://review.openstack.org/#/c/98431/
https://review.openstack.org/#/c/99028/

Thanks,
dims

-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Yahoo-eng-team] [Bug 1329563] Re: test_suspend_server_invalid_state fails with 400 response

2014-06-19 Thread Davanum Srinivas (DIMS)
See notes from review 100511 - A small excerpt is here:


Basically Tempest does not pass length='' to os-getConsoleOutput API on API 
tests.

In the above log, there were two negative factors.

* A timeout happened before this error (exceptions.BadRequest) occurred.
* To investigate the timeout reason, Tempest tried to get console log with 
length=None.
But in Tempest, the XML client converts None to '' internally, so this error happened.
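The None-to-empty-string conversion is a generic serialization pitfall, sketched below. `build_query` is a hypothetical helper for illustration, not the actual Tempest client code.

```python
# Sketch of the pitfall: a client that stringifies every query parameter
# turns length=None into length='', which the server then rejects with a
# 400 instead of treating the parameter as unset.

def build_query(params):
    # Naive serialization: None silently becomes an empty string.
    return '&'.join('%s=%s' % (k, '' if v is None else v)
                    for k, v in sorted(params.items()))

def build_query_safe(params):
    # Drop unset parameters instead of sending empty values.
    return '&'.join('%s=%s' % (k, v)
                    for k, v in sorted(params.items()) if v is not None)

print(build_query({'length': None}))       # length=
print(build_query_safe({'length': None}))  # (empty query string)
print(build_query_safe({'length': 10}))    # length=10
```

Filtering out `None` values before serialization keeps "not supplied" distinct from "supplied as empty", which is the distinction the server-side validation relies on.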

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1329563

Title:
  test_suspend_server_invalid_state fails with 400 response

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  Encountered what looks like a new gate failure.  The
  test_suspend_server_invalid_state test fails with a bad request
  response / unhandled exception.

  http://logs.openstack.org/48/96548/1/gate/gate-tempest-dsvm-postgres-
  full/fa5c27d/console.html#_2014-06-12_23_33_59_830

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1329563/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


Re: [openstack-dev] [all] All Clear on the western front (i.e. gate)

2014-06-18 Thread Davanum Srinivas
w00t! thanks for the hard work everyone.

-- dims

On Wed, Jun 18, 2014 at 7:17 AM, Sean Dague s...@dague.net wrote:
 I realized that folks may have been waiting for an 'all clear' on the
 gate situation. It was a tiring couple of weeks, so took a little while
 to get there.

 Due to a huge amount of effort, but a bunch of different people, a ton
 of bugs were squashed to get the gate back to a high pass rate -
 https://etherpad.openstack.org/p/gatetriage-june2014

 Then jeblair came back from vacation and quickly sorted out a nodepool
 bug that was starving our capacity, so now we aren't leaking deleted
 nodes the same way.

 With both those, our capacity for changes goes way up. Because we have
 more workers available at any time, and less round tripping on race
 bugs. We also dropped the Nova v3 tests, which shaved 8 minutes (on
 average) off of Tempest runs. Again, increasing throughput by getting
 nodes back into the pool faster.

 The net of all these changes is that yesterday we merged 117 patches -
 https://github.com/openstack/openstack/graphs/commit-activity (not a
 record, that's 147 in one day, but definitely a top merge day).

 So if you were holding off on reviews / code changes because of the
 state of things, you can stop now. And given the system is pretty
 healthy, now is actually a pretty good time to put and keep it under
 load to help evaluate where we stand.

 Thanks all,

 -Sean

 --
 Sean Dague
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Oslo.cfg] Configuration string substitution

2014-06-18 Thread Davanum Srinivas
Doug,

For the record, yes this came up before in
https://review.openstack.org/#/c/59994. Gary and I talked about
$imagecache.image_cache_subdirectory_name when discussing about that
review.

-- dims

On Wed, Jun 18, 2014 at 10:34 AM, Gary Kotton gkot...@vmware.com wrote:


 On 6/18/14, 4:19 PM, Doug Hellmann doug.hellm...@dreamhost.com wrote:

On Wed, Jun 18, 2014 at 4:47 AM, Gary Kotton gkot...@vmware.com wrote:
 Hi,
 I have encountered a problem with string substitution with the nova
 configuration file. The motivation was to move all of the glance
settings to
 their own section (https://review.openstack.org/#/c/100567/). The
 glance_api_servers had default setting that uses the current
glance_host and
 the glance port. This is a problem when we move to the 'glance' section.
 First and foremost I think that we need to decide on how we should
denote
 the string substitutions for group variables and then we can dive into
 implementation details. Does anyone have any thoughts on this?

  My thinking is that we should use a format of $group.key. An
 example is below.

 Original code:

 cfg.ListOpt('glance_api_servers',
 default=['$glance_host:$glance_port'],
 help='A list of the glance api servers available to
nova. '
 'Prefix with https:// for ssl-based glance api '
 '([hostname|ip]:port)'),

 Proposed change (in the glance section):
 cfg.ListOpt('api_servers',
 default=['$glance.host:$glance.port'],
 help='A list of the glance api servers available to
nova. '
 'Prefix with https:// for ssl-based glance api '
 '([hostname|ip]:port)',
 deprecated_group='DEFAULT',

 deprecated_name='glance_api_servers'),

 This would require some preprocessing on the oslo.cfg side to be able to
 understand that $glance is the specific group and host is the requested
 value in the group.

 Thanks
 Gary

Do we need to set the variable off somehow to allow substitutions that
need the literal '.' after a variable? How often is that likely to
come up?


 To be honest I think that this is a real edge case. I had a chat with
 markmc on IRC and he suggested a different approach, which I liked,
 regarding the specific patch. That is, to set the default to None and when
  the data is accessed to check if it is None. If so, then provide the
 default values.

 We may still nonetheless need something like this in the future.
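For what it's worth, the default-to-None pattern markmc suggested can be sketched in plain Python. Conf and its attributes below are hypothetical stand-ins for oslo.cfg options, not real nova configuration:

```python
# Sketch of the "default the option to None, resolve on access" pattern
# markmc suggested. Conf and its attributes are hypothetical stand-ins
# for oslo.cfg options, not real nova configuration.

class Conf(object):
    def __init__(self, host='0.0.0.0', port=9292, api_servers=None):
        self.host = host
        self.port = port
        self._api_servers = api_servers  # None means "not configured"

    @property
    def api_servers(self):
        # Build the legacy '$host:$port' default lazily, so the option
        # can move to its own group without string substitution support.
        if self._api_servers is None:
            return ['%s:%s' % (self.host, self.port)]
        return self._api_servers


conf = Conf(host='glance.example.org')
assert conf.api_servers == ['glance.example.org:9292']
```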


Doug



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] VMware ESX Driver Deprecation

2014-06-16 Thread Davanum Srinivas
+1 to Option #2.

-- dims

On Sun, Jun 15, 2014 at 11:15 AM, Gary Kotton gkot...@vmware.com wrote:
 Hi,
 In the Icehouse cycle it was decided to deprecate the VMware ESX driver. The
 motivation for the decision was:

 The driver is not validated by Minesweeper
 It is not clear if there are actually any users of the driver

 Prior to jumping into the proposal we should take into account that the
 current ESX driver does not work with the following branches:

 Master (Juno)
 Icehouse
 Havana

 The above are due to VC features that were added over the course of these
 cycles.

 On the VC side the ESX can be added to a cluster and the running VM’s will
 continue to run. The problem is how they are tracked and maintained in the
 Nova DB.

 Option 1: Moving the ESX(s) into a nova managed cluster. This would require
 the nova DB entry for the instance running on the ESX to be updated to be
 running on the VC host. If the VC host restarts at any point during the above
 then all of the running instances may be deleted (this is due to the fact
 that _destroy_evacuated_instances is invoked when a nova compute is started
 https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L673).
 This would be disastrous for a running deployment.

 If we do decide to go for the above option we can perform a cold migration
 of the instances from the ESX hosts to the VC hosts. The fact that the same
 instance will be running on the ESX would require us to have a ‘noop’ for
 the migration. This can be done by configuration variables but that will be
 messy. This option would require code changes.

 Option 2: Provide the administrator with tools that will enable a migration
 of the running VM’s.

 A script that will import OpenStack VM’s into the database – the script will
 detect VM’s running on a VC and import them to the database.
 A script that will delete VM’s running on a specific host

 The admin will use these as follows:

 Invoke the deletion script for the ESX
 Add the ESX to a VC
 Invoke the script for importing the OpenStack VM’s into the database
 Start the nova compute with the VC driver
 Terminate all Nova computes with the ESX driver

 This option requires the addition of the scripts. The advantage is that it
 does not touch any of the running code and is done out of band. A variant of
 option 2 would be to have a script that updates the host for the ESX VM’s to
 the VC host.

 Due to the fact that the code is not being run at the moment I am in favor
 of the external scripts as it will be less disruptive and not be on a
 critical path. Any thoughts or comments?

 Thanks
 Gary

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vmware] Proposed new Vim/PBM api

2014-06-13 Thread Davanum Srinivas
cc'ing Gary

On Fri, Jun 13, 2014 at 11:49 AM, Matthew Booth mbo...@redhat.com wrote:
 I've proposed a new Vim/PBM api in this blueprint for oslo.vmware:
 https://review.openstack.org/#/c/99952/

 This is just the base change. However, it is a building block for adding
 a bunch of interesting new features, including:

 * Robust caching
 * Better RetrievePropertiesEx
 * WaitForUpdatesEx
 * Non-polling task waiting

 It also gives us explicit session transactions, which are a requirement
 for locking should that ever come to pass.

 Please read and discuss. There are a couple of points in there on which
 I'm actively soliciting input.

 Matt
 --
 Matthew Booth
 Red Hat Engineering, Virtualisation Team

 Phone: +442070094448 (UK)
 GPG ID:  D33C3490
 GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Gate still backed up - need assistance with nova-network logging enhancements

2014-06-12 Thread Davanum Srinivas
Hey Matt,

There is a connection pool in
https://github.com/boto/boto/blob/develop/boto/connection.py which
could be causing issues...

-- dims

On Thu, Jun 12, 2014 at 10:50 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:


 On 6/10/2014 5:36 AM, Michael Still wrote:

 https://review.openstack.org/99002 adds more logging to
 nova/network/manager.py, but I think you're not going to love the
 debug log level. Was this the sort of thing you were looking for
 though?

 Michael

 On Mon, Jun 9, 2014 at 11:45 PM, Sean Dague s...@dague.net wrote:

 Based on some back of envelope math the gate is basically processing 2
 changes an hour, failing one of them. So if you want to know how long
 the gate is, take the length / 2 in hours.

 Right now we're doing a lot of revert roulette, trying to revert things
 that we think landed about the time things went bad. I call this
 roulette because in many cases the actual issue isn't well understood. A
 key reason for this is:

 *nova network is a blackhole*

 There is no work unit logging in nova-network, and no attempted
 verification that the commands it ran did a thing. Most of these
 failures that we don't have good understanding of are the network not
 working under nova-network.

 So we could *really* use a volunteer or two to prioritize getting that
 into nova-network. Without it we might manage to turn down the failure
 rate by reverting things (or we might not) but we won't really know why,
 and we'll likely be here again soon.

  -Sean

 --
 Sean Dague
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





  I mentioned this in the nova meeting today also, but the associated bug for
 the nova-network ssh timeout issue is bug 1298472 [1].

 My latest theory on that one is if there could be a race/network leak in the
 ec2 third party tests in Tempest or something in the ec2 API in nova,
 because I saw this [2] showing up in the n-net logs.  My thinking is the
 tests or the API are not tearing down cleanly and eventually network
 resources are leaked and we start hitting those timeouts.  Just a theory at
 this point, but the ec2 3rd party tests do run concurrently with the
 scenario tests so things could be colliding at that point, but I haven't had
 time to dig into it, plus I have very little experience in those tests or
 the ec2 API in nova.

 [1] https://bugs.launchpad.net/tempest/+bug/1298472
 [2] http://goo.gl/6f1dfw

 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-06-10 Thread Davanum Srinivas
Dina, Alexey,

Do you mind filing some spec(s) please?

http://markmail.org/message/yqhndsr3zrqcfwq4
http://markmail.org/message/kpk35uikcnodq3jb

thanks,
dims

On Tue, Jun 10, 2014 at 7:03 AM, Dina Belova dbel...@mirantis.com wrote:
 Hello, stackers!


 Oslo.messaging is the future of how different OpenStack components communicate
 with each other, and really I’d love to start a discussion about how we can
 make this library even better than it is now and how we can refactor it to
 make it more production-ready.


 As we all remember, oslo.messaging was initially inspired to be created as a
 logical continuation of nova.rpc - as a separate library, with lots of
 transports supported, etc. That’s why oslo.messaging inherited not only the
 advantages of how nova.rpc worked (and there were lots of them), but also
 some architectural decisions that currently sometimes lead to the
 performance issues (we met some of them while Ceilometer performance testing
 [1] during the Icehouse).


 For instance, a simple test messaging server (with connection pool and
 eventlet) can process 700 messages per second. The same functionality
 implemented using a plain kombu driver (without connection pool and eventlet)
 processes ten times more - 7000-8000 messages per second.


 So we have the following suggestions about how we may make this process
 better and quicker (and really I’d love to collect your feedback, folks):


 1) Currently we have main loop running in the Executor class, and I guess
 it’ll be much better to move it to the Server class, as it’ll make
 relationship between the classes easier and will leave Executor only one
 task - process the message and that’s it (in blocking or eventlet mode).
 Moreover, this will make further refactoring much easier.

 2) Some of the drivers implementations (such as impl_rabbit and impl_qpid,
 for instance) are full of useless separated classes that in reality might be
 included to other ones. There are already some changes making the whole
 structure easier [2], and after the 1st issue will be solved Dispatcher and
 Listener also will be able to be refactored.

 3) If we separate the RPC functionality from the messaging functionality,
 it will make the code base cleaner and easier to reuse.

 4) Connection pool can be refactored to implement more efficient connection
 reuse.


 Folks, are you ok with such a plan? Alexey Kornienko already started some of
 this work [2], but really we want to be sure that we chose the correct
 vector of development here.
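As an illustration of point 4, a toy LIFO pool that hands idle connections back out instead of reconnecting might look like this. It is only a sketch, not the oslo.messaging implementation:

```python
import collections

class ConnectionPool(object):
    """Toy LIFO pool showing the reuse idea in point 4: idle connections
    go back to the pool instead of being torn down and re-established.
    Not the oslo.messaging implementation, just an illustration."""

    def __init__(self, factory, maxsize=4):
        self.factory = factory
        self.maxsize = maxsize
        self._free = collections.deque()
        self.created = 0  # how many real connections we had to open

    def get(self):
        if self._free:
            return self._free.pop()  # reuse an idle connection
        self.created += 1
        return self.factory()

    def put(self, conn):
        if len(self._free) < self.maxsize:
            self._free.append(conn)  # keep it warm for the next caller


pool = ConnectionPool(factory=object)
for _ in range(100):
    conn = pool.get()
    pool.put(conn)
assert pool.created == 1  # one connection served all 100 checkouts
```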


 Thanks!


 [1]
 https://docs.google.com/document/d/1ARpKiYW2WN94JloG0prNcLjMeom-ySVhe8fvjXG_uRU/edit?usp=sharing

 [2]
 https://review.openstack.org/#/q/status:open+owner:akornienko+project:openstack/oslo.messaging,n,z


 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] [Neutron]Installing openstack on a machine with single interface

2014-06-10 Thread Davanum Srinivas
Ageeleshwar,

Do you happen to have a devstack local.conf for this specific setup?
That would be of great help to everyone i believe.

thanks,
dims
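For reference, a hypothetical minimal local.conf for a single-NIC host might look like the following. The variables are standard devstack localrc settings, but every address is an illustrative assumption, not taken from the blog post:

```ini
[[local|localrc]]
# Hypothetical minimal local.conf for a single-NIC host. Every address
# below is an illustrative assumption -- adjust to your own network.
HOST_IP=192.168.1.10
FLAT_INTERFACE=eth0
FIXED_RANGE=10.0.0.0/24
FLOATING_RANGE=192.168.1.224/27
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
```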

On Tue, Jun 10, 2014 at 3:54 AM, Ageeleshwar Kandavelu
ageeleshwar.kandav...@csscorp.com wrote:
 Hi All,
 I have seen several people asking how to set up openstack on a machine with
 a single nic card. I have created a blog page for the same. The blog
 includes some information about openstack networking also.

 http://fosskb.wordpress.com/2014/06/10/managing-openstack-internaldataexternal-network-in-one-interface/

 Thank you,
 Ageeleshwar K
 http://www.csscorp.com/common/email-disclaimer.php

 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [nova][glance] Consistency around proposed server instance tagging API

2014-06-04 Thread Davanum Srinivas
+1 to 404 for a DELETE if the tag does not exist.

There's a good discussion in this paragraph from RESTful Web APIs
book - 
http://books.google.com/books?id=wWnGQBAJlpg=PA36ots=Ff9jCI293bdq=restful%20http%20delete%20404%20sam%20rubypg=PA36#v=onepageq=restful%20http%20delete%20404%20sam%20rubyf=false

-- dims

On Wed, Jun 4, 2014 at 8:21 PM, Christopher Yeoh cbky...@gmail.com wrote:

 On Thu, Jun 5, 2014 at 4:14 AM, Jay Pipes jaypi...@gmail.com wrote:

 Hi Stackers,

 I'm looking to get consensus on a proposed API for server instance tagging
 in Nova:

 https://review.openstack.org/#/c/91444/

 In the proposal, the REST API for the proposed server instance tagging
 looks like so:

 Remove a tag on a server:

 DELETE /v2/{project_id}/servers/{server_id}/tags/{tag}

 It is this last API call that has drawn the attention of John Garbutt
 (cc'd). In Glance v2 API, if you attempt to delete a tag that does not
 exist, then a 404 Not Found is returned. In my proposal, if you attempt to
 delete a tag that does not exist for the server, a 204 No Content is
 returned.


 I think attempting to delete a tag that doesn't exist should return a 404.
 The user can always ignore the error if they know there's a
 chance that the tag they want to get rid of doesn't exist. But for those
 that believe it should exist its an error is more useful to them as
 it may be an indicator that something is wrong with their logic.

 Chris
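The two DELETE semantics being debated can be contrasted with a small sketch, status codes only. Returning 404 on a missing tag matches Glance v2; returning 204 unconditionally matches the original proposal. This is an illustration, not the nova API code:

```python
# Contrast of the two DELETE semantics being debated, status codes only.
# 404 on a missing tag matches Glance v2; unconditional 204 matches the
# original proposal. This is an illustration, not the nova API code.

def delete_tag(tags, tag, not_found_is_error=True):
    if tag in tags:
        tags.remove(tag)
        return 204  # No Content: the tag was removed
    return 404 if not_found_is_error else 204

tags = {'web', 'prod'}
assert delete_tag(tags, 'web') == 204
assert delete_tag(tags, 'web') == 404  # Glance-style: already gone
assert delete_tag(tags, 'web', not_found_is_error=False) == 204
```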

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Yahoo-eng-team] [Bug 1324479] Re: Fails to launch instance with create volume from image

2014-05-29 Thread Davanum Srinivas (DIMS)
** Project changed: nova = cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1324479

Title:
  Fails to launch instance with create volume from image

Status in Cinder:
  New

Bug description:
  
  In IceHouse something has changed in Glance, and when I try to launch an 
instance with the 'create volume from image' option, it fails because some 
attributes are absent in image dict returned from Glance. Different types of 
images were tried. RDO packages are used.

  From /var/log/cinder/api.log:

  2014-05-29 00:47:52.185 32176 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/cinder/image/glance.py, line 434, in 
_extract_attributes
  2014-05-29 00:47:52.185 32176 TRACE cinder.api.middleware.fault 
output[attr] = getattr(image, attr)
  2014-05-29 00:47:52.185 32176 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/warlock/model.py, line 69, in __getattr__
  2014-05-29 00:47:52.185 32176 TRACE cinder.api.middleware.fault raise 
AttributeError(key)
  2014-05-29 00:47:52.185 32176 TRACE cinder.api.middleware.fault 
AttributeError: owner

  The image's onwer in the database was indeed NULL (and that should be
  ok). If I add an owner to the image, then another attribute will also
  be not found:

  2014-05-29 01:03:02.782 32176 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/cinder/image/glance.py, line 434, in 
_extract_attributes
  2014-05-29 01:03:02.782 32176 TRACE cinder.api.middleware.fault 
output[attr] = getattr(image, attr)
2014-05-29 01:03:02.782 32176 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/warlock/model.py, line 69, in __getattr__
2014-05-29 01:03:02.782 32176 TRACE cinder.api.middleware.fault raise 
AttributeError(key)
2014-05-29 01:03:02.782 32176 TRACE cinder.api.middleware.fault 
AttributeError: deleted

  The image dict returned from Glance was:

  {u'status': u'active', u'tags': [], u'container_format': u'bare',
  u'min_ram': 0, u'updated_at': u'2014-05-22T13:24:49Z', u'visibility':
  u'public', u'file': u'/v2/images/ad385533-0bbb-40d8-a4db-
  669c76677e24/file', u'min_disk': 0, u'id': u'ad385533-0bbb-40d8-a4db-
  669c76677e24', u'size': 3145728, u'name': u'img04', u'checksum':
  u'a5c6d1997966f85908c5640c5dfd7b79', u'created_at':
  u'2014-05-22T13:24:48Z', u'disk_format': u'raw', u'protected': False,
  u'direct_url':
  u'rbd://2485eec9-d30a-4258-b959-937359ed61e8/images/ad385533-0bbb-40d8
  -a4db-669c76677e24/snap', u'schema': u'/v2/schemas/image'}

  I have no idea why some image attributes are absent, but one of the
  possible fixes is (for IceHouse branch):

  --- /a/cinder/image/glance.py 2014-04-21 12:58:43.0 -0700
  +++ /b/cinder/image/glance.py 2014-05-29 03:23:31.0 -0700
  @@ -431,7 +431,7 @@
   elif attr == 'checksum' and output['status'] != 'active':
   output[attr] = None
   else:
  -output[attr] = getattr(image, attr)
  +output[attr] = getattr(image, attr, None)
   
   output['properties'] = getattr(image, 'properties', {})
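The proposed fix works because the three-argument form of getattr falls back to a default instead of raising when the attribute is missing, as this standalone illustration shows:

```python
# The three-argument getattr, as used in the proposed fix, falls back to
# a default instead of raising when an image attribute is missing.

class Image(object):
    status = 'active'  # 'owner' and 'deleted' intentionally absent

img = Image()

try:
    getattr(img, 'owner')          # two-argument form: raises
except AttributeError:
    raised = True

assert raised
assert getattr(img, 'owner', None) is None   # patched form: returns None
assert getattr(img, 'status', None) == 'active'
```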

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1324479/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


Re: [openstack-dev] Oslo logging eats system level tracebacks by default

2014-05-28 Thread Davanum Srinivas
+1 from me.

On Wed, May 28, 2014 at 11:49 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 05/28/2014 11:39 AM, Doug Hellmann wrote:

 On Wed, May 28, 2014 at 10:38 AM, Sean Dague s...@dague.net wrote:

 When attempting to build a new tool for Tempest, I found that my python
 syntax errors were being completely eaten. After 2 days of debugging I
 found that oslo log.py does the following *very unexpected* thing.

   - replaces the sys.excepthook with its own function
   - eats the exception traceback unless debug or verbose are set to True
   - sets debug and verbose to False by default
   - prints out a completely useless summary log message at Critical
 ([CRITICAL] [-] 'id' was my favorite of these)

 This is basically for an exit level event. Something so breaking that
 your program just crashed.

 Note this has nothing to do with preventing stack traces that are
 currently littering up the logs that happen at many logging levels, it's
 only about removing the stack trace of a CRITICAL level event that's
 going to very possibly result in a crashed daemon with no information as
 to why.

 So the process of including oslo log makes the code immediately
 undebuggable unless you change your config file to not the default.

 Whether or not there was justification for this before, one of the
 things we heard loud and clear from the operator's meetup was:

   - Most operators are running at DEBUG level for all their OpenStack
 services because you can't actually do problem determination in
  OpenStack for anything less than that.
   - Operators reacted negatively to the idea of removing stack traces
 from logs, as that's typically the only way to figure out what's going
 on. It took a while of back and forth to explain that our initiative to
  do that wasn't about removing them per se, but having the code
 correctly recover.

 So the current oslo logging behavior seems inconsistent (we spew
 exceptions at INFO and WARN levels, and hide all the important stuff
 with a legitimately uncaught system level crash), undebuggable, and
 completely against the prevailing wishes of the operator community.

 I'd like to change that here - https://review.openstack.org/#/c/95860/

  -Sean


 I agree, we should dump as much detail as we can when we encounter an
 unhandled exception that causes an app to die.


 +1

 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Yahoo-eng-team] [Bug 1324036] Re: Can't add authenticated iscsi volume to a vmware instance

2014-05-28 Thread Davanum Srinivas (DIMS)
** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1324036

Title:
  Can't add authenticated iscsi volume to a vmware instance

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  The VMware driver doesn't pass volume authentication information to
  the hba when attaching an iscsi volume. Consequently, adding an iscsi
  volume which requires authentication will always fail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1324036/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


Re: [openstack-dev] [nova][vmware] Current state of the spawn refactor

2014-05-27 Thread Davanum Srinivas
Hi Michael,

* Phase 1 has one review left - https://review.openstack.org/#/c/92691/
* I'll update the phase 2 patch shortly -
https://review.openstack.org/#/c/87002/
* Once the 2 reviews above get approved, we will resurrect the
oslo.vmware BP/review - https://review.openstack.org/#/c/70175/

There is a team etherpad that has a game plan we try to keep
up-to-date - https://etherpad.openstack.org/p/vmware-subteam-juno.
Based on discussions during the summit, We are hoping to get the 3
items above into juno-1. So we can work on the features mentioned in
the etherpad.

Tracy,GaryK,Others, please chime in.

thanks,
dims

On Tue, May 27, 2014 at 5:31 PM, Michael Still mi...@stillhq.com wrote:
 Hi.

 I've been looking at the current state of the vmware driver spawn
 refactor work, and as best as I can tell phase one is now complete.
 However, I can only find one phase two patch, and it is based on an
 outdated commit. That patch is:

 https://review.openstack.org/#/c/87002/

 There also aren't any phase three patches that I can see. What is the
 current state of this work? Is it still targeting juno-1?

 Thanks,
 Michael

 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] how to best deal with default periodic task spacing behavior

2014-05-20 Thread Davanum Srinivas
@Matt,

Agree, My vote would be to change existing behavior.

-- dims

On Tue, May 20, 2014 at 10:15 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:
 Between patch set 1 and patch set 3 here [1] we have different solutions to
 the same issue, which is if you don't specify a spacing value for periodic
 tasks then they run whenever the periodic task processor runs, which is
 non-deterministic and can be staggered if some tasks don't complete in a
 reasonable amount of time.

 I'm bringing this to the mailing list to see if there are more opinions out
 there, especially from operators, since patch set 1 changes the default
 behavior to have the spacing value be the DEFAULT_INTERVAL (hard-coded 60
 seconds) versus patch set 3 which makes that behavior configurable so the
 admin can set global default spacing for tasks, but defaults to the current
 behavior of running every time if not specified.

 I don't like a new config option, but I'm also not crazy about changing
 existing behavior without consensus.

 [1] https://review.openstack.org/#/c/93767/
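To make the difference between the two patch sets concrete, here is a toy model of the periodic task processor. It is hypothetical, not the actual nova code:

```python
DEFAULT_INTERVAL = 60  # the hard-coded spacing patch set 1 would apply

class PeriodicTasks(object):
    """Toy model of the periodic task processor under discussion; the
    real nova code differs, this only illustrates the two behaviors."""

    def __init__(self, use_default_spacing=False):
        self.use_default_spacing = use_default_spacing
        self.last_run = {}

    def due(self, name, spacing, now):
        if spacing is None:
            if not self.use_default_spacing:
                return True  # current behavior: run on every pass
            spacing = DEFAULT_INTERVAL  # patch set 1 behavior
        last = self.last_run.get(name)
        if last is None or now - last >= spacing:
            self.last_run[name] = now
            return True
        return False

tasks = PeriodicTasks(use_default_spacing=True)
assert tasks.due('sync_power_states', None, now=0) is True
assert tasks.due('sync_power_states', None, now=30) is False
assert tasks.due('sync_power_states', None, now=60) is True
```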

 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Multiple instances of Keystone

2014-05-14 Thread Davanum Srinivas
If you are using devstack, it's simple to enable Keystone+Apache Httpd [1]

APACHE_ENABLED_SERVICES+=key

-- dims

[1] https://github.com/openstack-dev/devstack/blob/master/README.md

On Wed, May 14, 2014 at 11:46 AM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
 Keystone has specifically avoided including multiple process patches because
 they want to encourage apache + mod_wsgi as the standard way of scaling the
 keystone api.

 Vish

 On May 13, 2014, at 9:34 PM, Aniruddha Singh Gautam
 aniruddha.gau...@aricent.com wrote:

 Hi,

 Hope you are doing well…

 I was working on trying to apply the patch for running multiple instance of
 Keystone. Somehow it does not work with following errors, I wish to still
 debug it further, but thought that I will check with you if you can provide
 some quick help. I was following
 http://blog.gridcentric.com/?Tag=Scalability. I did the changes on Ice House
 GA.

 Error
 Traceback (most recent call last):
   File
 /usr/lib/python2.7/dist-packages/keystone/openstack/common/threadgroup.py,
 line 119, in wait
 x.wait()
   File
 /usr/lib/python2.7/dist-packages/keystone/openstack/common/threadgroup.py,
 line 47, in wait
 return self.thread.wait()
   File /usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 168,
 in wait
 return self._exit_event.wait()
   File /usr/lib/python2.7/dist-packages/eventlet/event.py, line 116, in
 wait
 return hubs.get_hub().switch()
   File /usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in
 switch
 return self.greenlet.switch()
   File /usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194,
 in main
 result = function(*args, **kwargs)
   File
 /usr/lib/python2.7/dist-packages/keystone/openstack/common/service.py,
 line 449, in run_service
 service.start()
 AttributeError: 'tuple' object has no attribute 'start'
 (keystone): 2014-05-13 08:17:37,073 CRITICAL AttributeError: 'tuple' object
 has no attribute 'stop'
 Traceback (most recent call last):
   File /usr/bin/keystone-all, line 162, in module
 serve(*servers)
   File /usr/bin/keystone-all, line 111, in serve
 launcher.wait()
   File
 /usr/lib/python2.7/dist-packages/keystone/openstack/common/service.py,
 line 352, in wait
 self._respawn_children()
   File
 /usr/lib/python2.7/dist-packages/keystone/openstack/common/service.py,
 line 342, in _respawn_children
 self._start_child(wrap)
   File
 /usr/lib/python2.7/dist-packages/keystone/openstack/common/service.py,
 line 282, in _start_child
 status, signo = self._child_wait_for_exit_or_signal(launcher)
   File
 /usr/lib/python2.7/dist-packages/keystone/openstack/common/service.py,
 line 240, in _child_wait_for_exit_or_signal
 launcher.stop()
   File
 /usr/lib/python2.7/dist-packages/keystone/openstack/common/service.py,
 line 95, in stop
 self.services.stop()
   File
 /usr/lib/python2.7/dist-packages/keystone/openstack/common/service.py,
 line 419, in stop
 service.stop()
 AttributeError: 'tuple' object has no attribute 'stop'

 In logs I can find the new child processes, somehow probably they are
  stopped and then it spawns other child processes.
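The failure pattern in that traceback can be reproduced minimally: the launcher's service paths expect objects with start() and stop(), but something is evidently handing them a tuple instead.

```python
# Minimal reproduction of the failure pattern in the traceback: the
# launcher's wait/stop paths expect objects with start() and stop(),
# but something is handing them a tuple instead.

class Service(object):
    def start(self):
        pass

    def stop(self):
        pass


def run_service(service):
    service.start()  # raises AttributeError when given a tuple


run_service(Service())  # a real service works

try:
    run_service((Service(), 'extra'))  # mirrors the reported failure
except AttributeError as exc:
    assert "'tuple' object has no attribute 'start'" in str(exc)
```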

  I also noticed that support for running multiple neutron servers was added
  in the IceHouse GA. Any specific reason for not having the same thing for Keystone? (My
 knowledge of Openstack is limited, so please bear with my dumb questions)


 Best regards,
 Aniruddha



 DISCLAIMER: This message is proprietary to Aricent and is intended solely
 for the use of the individual to whom it is addressed. It may contain
 privileged or confidential information and should not be circulated or used
 for any purpose other than for what it is intended. If you have received
 this message in error, please notify the originator immediately. If you are
 not the intended recipient, you are notified that you are strictly
 prohibited from using, copying, altering, or disclosing the contents of this
 message. Aricent accepts no responsibility for loss or damage arising from
 the use of the information transmitted by this email including damage from
 virus.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







-- 
Davanum Srinivas :: http://davanum.wordpress.com



Re: [openstack-dev] [oslo] nominating Victor Stinner for the Oslo core reviewers team

2014-05-06 Thread Davanum Srinivas
Welcome Victor!

On Tue, May 6, 2014 at 8:38 AM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:
 After a bit more than the usual waiting period, I have added Victor
 Stinner to the oslo-core team today.

 Welcome to the team, Victor!

 On Mon, Apr 21, 2014 at 12:39 PM, Doug Hellmann
 doug.hellm...@dreamhost.com wrote:
 I propose that we add Victor Stinner (haypo on freenode) to the Oslo
 core reviewers team.

 Victor is a Python core contributor, and works on the development team
 at eNovance. He created trollius, a port of Python 3's tulip/asyncio
 module to Python 2, at least in part to enable a driver for
 oslo.messaging. He has been quite active with Python 3 porting work in
 Oslo and some other projects, and organized a sprint to work on the
 port at PyCon last week. The patches he has written for the python 3
 work have all covered backwards-compatibility so that the code
 continues to work as before under python 2.

 Given his background, skills, and interest, I think he would be a good
 addition to the team.

 Doug




-- 
Davanum Srinivas :: http://davanum.wordpress.com



[Yahoo-eng-team] [Bug 1287292] Re: VMware: vim.get_soap_url improper IPv6 address

2014-05-01 Thread Davanum Srinivas (DIMS)
We need to fix oslo.vmware as well.

** Also affects: oslo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1287292

Title:
  VMware: vim.get_soap_url improper IPv6 address

Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  The vim.get_soap_url function incorrectly builds an IPv6 address using
  hostname/IP and port.

  https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vim.py#L151

  The result of this line would create an address as follows:
  https://[2001:db8:85a3:8d3:1319:8a2e:370:7348:443]/sdk

  Ports should be outside the square brackets, not inside, as follows:

  https://[2001:db8:85a3:8d3:1319:8a2e:370:7348]:443/sdk

  For reference see: http://en.wikipedia.org/wiki/IPv6_address section
  Literal IPv6 addresses in network resource identifiers
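The fix the bug calls for is to bracket only the host literal and keep the port outside, per the IP-literal syntax in RFC 3986. A minimal sketch of the corrected behavior (not the actual nova/oslo.vmware patch — the helper name and signature are assumptions for illustration):

```python
def get_soap_url(protocol, host, port, path="sdk"):
    """Build a SOAP URL, bracketing literal IPv6 addresses (RFC 3986).

    A colon in the host distinguishes an IPv6 literal from a hostname
    or dotted-quad IPv4 address; already-bracketed hosts pass through.
    """
    if ":" in host and not host.startswith("["):
        host = "[%s]" % host
    return "%s://%s:%s/%s" % (protocol, host, port, path)


print(get_soap_url("https", "2001:db8:85a3:8d3:1319:8a2e:370:7348", 443))
# https://[2001:db8:85a3:8d3:1319:8a2e:370:7348]:443/sdk
```

Plain hostnames and IPv4 addresses contain no colon, so they are left unbracketed and produce the familiar `https://host:443/sdk` form.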

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1287292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300788] Re: VMware: exceptions when SOAP message has no body

2014-05-01 Thread Davanum Srinivas (DIMS)
** Also affects: oslo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1300788

Title:
  VMware: exceptions when SOAP message has no body

Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  Minesweeper logs have the following:

  2014-03-26 11:37:09.753 CRITICAL nova.virt.vmwareapi.driver [req-3a274ea6-e731-4bbc-a7fc-e2877a57a7cb MultipleCreateTestJSON-692822675 MultipleCreateTestJSON-47510170] In vmwareapi: _call_method (session=52eb4a1e-04de-de0d-5c6a-746a430570a2)
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver Traceback (most recent call last):
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver   File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 856, in _call_method
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver     return temp_module(*args, **kwargs)
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver   File /opt/stack/nova/nova/virt/vmwareapi/vim.py, line 196, in vim_request_handler
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver     raise error_util.VimFaultException(fault_list, excep)
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver VimFaultException: Server raised fault: '
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver SOAP body not found
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver while parsing SOAP envelope
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver at line 1, column 38
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver while parsing HTTP request before method was determined
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver at line 1, column 0'
  2014-03-26 11:37:09.753 13830 TRACE nova.virt.vmwareapi.driver
  2014-03-26 11:37:09.754 WARNING nova.virt.vmwareapi.vmops [req-3a274ea6-e731-4bbc-a7fc-e2877a57a7cb MultipleCreateTestJSON-692822675 MultipleCreateTestJSON-47510170] In vmwareapi:vmops:_destroy_instance, got this exception while un-registering the VM: Server raised fault: '
  SOAP body not found

  while parsing SOAP envelope
  at line 1, column 38

  while parsing HTTP request before method was determined
  at line 1, column 0'

  There are cases when suds returns a message with no body.
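A fix would need to treat a bodyless envelope as its own, recognizable error rather than letting it surface as an opaque `VimFaultException`. A minimal sketch of that guard — every name below is hypothetical and does not reflect the actual oslo.vmware or suds API:

```python
class SoapBodyMissingError(Exception):
    """Raised when a SOAP envelope arrives without a body element."""


def extract_soap_body(envelope):
    # Hypothetical helper: callers can catch SoapBodyMissingError and
    # retry or log, instead of parsing a fault string for "SOAP body
    # not found".
    body = getattr(envelope, "Body", None)
    if body is None:
        raise SoapBodyMissingError("SOAP body not found while parsing envelope")
    return body


class EmptyEnvelope(object):
    Body = None


try:
    extract_soap_body(EmptyEnvelope())
except SoapBodyMissingError as exc:
    print(exc)  # SOAP body not found while parsing envelope
```

The point of the dedicated exception type is that the un-register path in `_destroy_instance` above could then distinguish "server sent nothing" from a genuine VIM fault.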

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1300788/+subscriptions



Re: [openstack-dev] PGP keysigning party for Juno summit in Atlanta?

2014-04-30 Thread Davanum Srinivas
Awesome! thanks @fungi

-- dims

On Wed, Apr 30, 2014 at 6:47 PM, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2014-04-29 18:01:44 + (+), Jeremy Stanley wrote:
 [...]
 I'll check with the team handling venue logistics right now and
 update this thread with options. I'm also inquiring about the
 availability of a projector if we get one of the non-design-session
 breakout rooms, and I can bring a digital micro/macroscope which
 ought to do a fair job of showing everyone's photo IDs through it if
 so (which would enable us to use The 'Sassman-Projected' Method and
 significantly increase our throughput via parallelization).

 I was able to confirm a dedicated location (room 405, just one floor
 up from the design summit sessions) Wednesday 10:30-11:00am with a
 projector and seating for 50. We're getting close to that many on
 the sign-up sheet already... chances are we'll be able to put each
 ID up on the projector for 15-20 seconds, taking buffer time into
 account at the beginning for walking to the room and reading the
 master list checksum aloud. I've asked the organizers to list this
 as OpenStack Web of Trust: Key Signing and have updated
 https://wiki.openstack.org/wiki/OpenPGP_Web_of_Trust/Juno_Summit
 accordingly.
 --
 Jeremy Stanley




-- 
Davanum Srinivas :: http://davanum.wordpress.com



Re: [openstack-dev] PGP keysigning party for Juno summit in Atlanta?

2014-04-29 Thread Davanum Srinivas
How about 10:30 AM Break on Wednesday at the Developers Lounge (Do we
have one this time?)

-- dims

On Tue, Apr 29, 2014 at 10:32 AM, Thomas Goirand z...@debian.org wrote:
 On 04/29/2014 06:42 AM, Jeremy Stanley wrote:
 On 2014-04-26 17:05:41 -0700 (-0700), Clint Byrum wrote:
 Just a friendly reminder to add yourself to this list if you are
 interested in participating in the key signing in Atlanta:

 https://wiki.openstack.org/wiki/OpenPGP_Web_of_Trust/Juno_Summit
 [...]

 It has a sign-up cutoff of two weeks ago, so I assume that was
 merely a placeholder. Which begs the question how late is too late
 for participants to be able to comfortably print out their copies
 (assuming we do http://keysigning.org/methods/sassaman-efficient
 this time)? Should lock down the page Wednesday May 7th? Sooner?

 Keep in mind that this week (April 27 through May 3) is supposed to
 be a vacation week for the project, so people may not be around to
 add themselves until Monday May 5th ;)

 Is there a date a place already schedule for the key signing party?

 Thomas





-- 
Davanum Srinivas :: http://davanum.wordpress.com



Re: [openstack-dev] PGP keysigning party for Juno summit in Atlanta?

2014-04-28 Thread Davanum Srinivas
+1 to lock down the page May 7th.

-- dims

On Mon, Apr 28, 2014 at 6:42 PM, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2014-04-26 17:05:41 -0700 (-0700), Clint Byrum wrote:
 Just a friendly reminder to add yourself to this list if you are
 interested in participating in the key signing in Atlanta:

 https://wiki.openstack.org/wiki/OpenPGP_Web_of_Trust/Juno_Summit
 [...]

 It has a sign-up cutoff of two weeks ago, so I assume that was
 merely a placeholder. Which begs the question how late is too late
 for participants to be able to comfortably print out their copies
 (assuming we do http://keysigning.org/methods/sassaman-efficient
 this time)? Should lock down the page Wednesday May 7th? Sooner?

 Keep in mind that this week (April 27 through May 3) is supposed to
 be a vacation week for the project, so people may not be around to
 add themselves until Monday May 5th ;)
 --
 Jeremy Stanley




-- 
Davanum Srinivas :: http://davanum.wordpress.com



Re: [openstack-dev] [oslo] nominating Victor Stinner for the Oslo core reviewers team

2014-04-21 Thread Davanum Srinivas
+1 from me.

On Mon, Apr 21, 2014 at 12:39 PM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:
 I propose that we add Victor Stinner (haypo on freenode) to the Oslo
 core reviewers team.

 Victor is a Python core contributor, and works on the development team
 at eNovance. He created trollius, a port of Python 3's tulip/asyncio
 module to Python 2, at least in part to enable a driver for
 oslo.messaging. He has been quite active with Python 3 porting work in
 Oslo and some other projects, and organized a sprint to work on the
 port at PyCon last week. The patches he has written for the python 3
 work have all covered backwards-compatibility so that the code
 continues to work as before under python 2.

 Given his background, skills, and interest, I think he would be a good
 addition to the team.

 Doug




-- 
Davanum Srinivas :: http://davanum.wordpress.com



[openstack-dev] OpenStack/GSoC - Welcome!

2014-04-21 Thread Davanum Srinivas
Hi Team,

Please join me in welcoming the following students to our GSoC program
[1]. Congrats everyone. Now the hard work begins :) Have fun as well.

Artem Shepelev
Kumar Rishabh
Manishanker Talusani
Masaru Nomura
Prashanth Raghu
Tzanetos Balitsaris
Victoria Martínez de la Cruz

-- dims

[1] https://www.google-melange.com/gsoc/org2/google/gsoc2014/openstack


