[Yahoo-eng-team] [Bug 1510234] Re: Heartbeats stop when time is changed

2017-02-22 Thread Roman Podoliaka
Change to Nova: https://review.openstack.org/#/c/434327/

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1510234

Title:
  Heartbeats stop when time is changed

Status in OpenStack Compute (nova):
  In Progress
Status in oslo.service:
  Fix Released

Bug description:
  Heartbeats stop working when you mess with the system time. If a
  monotonic clock were used, they would continue to work when the system
  time was changed.

  Steps to reproduce:

  1. List the nova services ('nova-manage service list'). Note that the
  'State' for each service is a happy face ':-)'.

  2. Move the time ahead (for example 2 hours in the future), and then
  list the nova services again. Note that heartbeats continue to work
  and use the future time (see 'Updated_At').

  3. Revert back to the actual time, and list the nova services again.
  Note that all heartbeats stop, and have a 'State' of 'XXX'.

  4. The heartbeats will start again in 2 hours when the actual time
  catches up to the future time, or if you restart the services.

  5. You'll see a log message like the following when the heartbeats
  stop:

  2015-10-26 17:14:10.538 DEBUG nova.servicegroup.drivers.db [req-
  c41a2ad7-e5a5-4914-bdc8-6c1ca8b224c6 None None] Seems service is down.
  Last heartbeat was 2015-10-26 17:20:20. Elapsed time is -369.461679
  from (pid=13994) is_up
  /opt/stack/nova/nova/servicegroup/drivers/db.py:80

  Here's example output demonstrating the issue:

  http://paste.openstack.org/show/477404/

  See bug #1450438 for more context:

  https://bugs.launchpad.net/oslo.service/+bug/1450438

  Long story short: the looping call uses the built-in (wall-clock) time
  rather than a monotonic clock for sleeps.

  
https://github.com/openstack/oslo.service/blob/3d79348dae4d36bcaf4e525153abf74ad4bd182a/oslo_service/loopingcall.py#L122

  Oslo Service: version 0.11
  Nova: master (commit 2c3f9c339cae24576fefb66a91995d6612bb4ab2)
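  To illustrate the point (this is a standalone sketch, not the oslo.service
  code), compare measuring an interval with the wall clock versus a monotonic
  clock; only the former produces the negative "Elapsed time" seen in the log
  above when the system time is moved back:

      import time

      last_beat_wall = time.time()       # wall clock: jumps if the admin changes the time
      last_beat_mono = time.monotonic()  # monotonic: only ever moves forward

      # ... system time is moved 2 hours into the past here ...

      wall_elapsed = time.time() - last_beat_wall        # can be hugely negative
      mono_elapsed = time.monotonic() - last_beat_mono   # still the real elapsed time

      # A heartbeat check based on wall_elapsed wrongly concludes the service
      # is down; one based on mono_elapsed keeps working.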

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1510234/+subscriptions



[Yahoo-eng-team] [Bug 1657774] [NEW] Nova does not re-raise 401 Unauthorized received from Neutron for admin users

2017-01-19 Thread Roman Podoliaka
Public bug reported:

Description
===

If a Keystone token issued for an admin user (e.g. ceilometer) is expired
or revoked right after it's been validated by
keystoneauthtoken_middleware in nova-api, but before it's validated by
the very same middleware in neutron-server, nova-api will respond with
400 Bad Request instead of the expected 401 Unauthorized, which would
allow the original request to be retried after re-authentication.


Steps to reproduce
==

The condition described above is easy to reproduce synthetically by
putting breakpoints into Nova code and revoking a token. One can
reproduce the very same problem in real life by running enough
ceilometer polling agents.

Make sure you use credentials of an admin user (e.g. admin or ceilometer
in Devstack) and have at least 1 instance running (so that `nova list`
triggers an HTTP request to neutron-server).

1. Put a breakpoint on entering get_client() nova/network/neutronv2/api.py
2. Do `nova list`
3. Revoke the issued token with `openstack token revoke $token` (you may
also need to restart memcached to make sure the token validation result is not
cached)
4. Continue execution of nova-api

Expected result
===

As token is now invalid (expired or revoked), it's expected that nova-
api responds with 401 Unauthorized, so that a client can handle this,
re-authenticate and retry the original request.
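A client that receives the expected 401 can recover without operator
intervention. A rough sketch of that retry pattern (the token_provider object
and its methods are hypothetical, not part of any OpenStack client library):

    import requests

    def nova_get(url, token_provider):
        # First attempt with the currently cached token.
        resp = requests.get(url, headers={"X-Auth-Token": token_provider.get_token()})
        if resp.status_code == 401:
            # Token expired/revoked between issuance and use: re-authenticate
            # once and retry. A 400 gives the client no such signal, which is
            # the problem described above.
            resp = requests.get(
                url, headers={"X-Auth-Token": token_provider.get_token(force_new=True)})
        resp.raise_for_status()
        return resp.json()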

Actual result
=

nova-api responds with 400 Bad Request and outputs the following error
into logs

2017-01-19 15:02:09.952 595 ERROR nova.network.neutronv2.api 
[req-0c1558f5-9cc8-4411-9fb1-2fe7cb232725 admin admin] Neutron client was not 
able
 to generate a valid admin token, please verify Neutron admin credential 
located in nova.conf

Environment
===

Devstack, master (Ocata), nova HEAD at
da54487edad28c87accbf6439471e7341b52ff48

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: In Progress


** Tags: api neutron

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Tags added: api neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1657774

Title:
  Nova does not re-raise 401 Unauthorized received from Neutron for
  admin users

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===

  If a Keystone token issued for an admin user (e.g. ceilometer) is
  expired or revoked right after it's been validated by
  keystoneauthtoken_middleware in nova-api, but before it's validated by
  the very same middleware in neutron-server, nova-api will respond with
  400 Bad Request instead of the expected 401 Unauthorized, which would
  allow the original request to be retried after re-authentication.

  
  Steps to reproduce
  ==

  The condition described above is easy to reproduce synthetically by
  putting breakpoints into Nova code and revoking a token. One can
  reproduce the very same problem in real life by running enough
  ceilometer polling agents.

  Make sure you use credentials of an admin user (e.g. admin or
  ceilometer in Devstack) and have at least 1 instance running (so that
  `nova list` triggers an HTTP request to neutron-server).

  1. Put a breakpoint on entering get_client() nova/network/neutronv2/api.py
  2. Do `nova list`
  3. Revoke the issued token with `openstack token revoke $token` (you may
  also need to restart memcached to make sure the token validation result is not
  cached)
  4. Continue execution of nova-api

  Expected result
  ===

  As token is now invalid (expired or revoked), it's expected that nova-
  api responds with 401 Unauthorized, so that a client can handle this,
  re-authenticate and retry the original request.

  Actual result
  =

  nova-api responds with 400 Bad Request and outputs the following error
  into logs

  2017-01-19 15:02:09.952 595 ERROR nova.network.neutronv2.api 
[req-0c1558f5-9cc8-4411-9fb1-2fe7cb232725 admin admin] Neutron client was not 
able
   to generate a valid admin token, please verify Neutron admin credential 
located in nova.conf

  Environment
  ===

  Devstack, master (Ocata), nova HEAD at
  da54487edad28c87accbf6439471e7341b52ff48

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1657774/+subscriptions



[Yahoo-eng-team] [Bug 1630899] Re: mysql 1305 errores handled differently with Mysql-Python

2016-11-03 Thread Roman Podoliaka
** Changed in: oslo.db
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630899

Title:
  mysql 1305 errores handled differently with Mysql-Python

Status in neutron:
  Confirmed
Status in oslo.db:
  Fix Released

Bug description:
  The following check,
https://review.openstack.org/#/c/326927/6/neutron/db/api.py , does not work
when I am using:
  MySQL-python (1.2.5)
  oslo.db (4.13.3)
  SQLAlchemy (1.1.0)

  2016-10-06 04:39:20.674 16262 ERROR neutron.api.v2.resource OperationalError: 
(_mysql_exceptions.OperationalError) (1305, 'SAVEPOINT sa_savepoint_1 does not 
exist') [SQL: u'ROLLBACK TO SAVEPOINT sa_savepoint_1']
  2016-10-06 04:39:20.674 16262 ERROR neutron.api.v2.resource 

  This appears in the log and is not caught by is_retriable, because it fails
  the _is_nested_instance(e, db_exc.DBError) check: the exception's type is
  _mysql_exceptions.OperationalError, which is not an oslo.db DBError.

  I did not use '+pymysql' in the connection string, so it is the old
  MySQL-python driver.
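  A simplified sketch of the kind of check involved (not the exact neutron or
  oslo.db code from the review above): the retry decision only looks for
  exceptions that oslo.db has already translated into DBError subclasses, so a
  raw driver-level OperationalError slips through untouched:

      from oslo_db import exception as db_exc

      def _is_nested_instance(exc, etypes):
          # True if exc itself, or an exception wrapped inside it, matches etypes.
          return (isinstance(exc, etypes) or
                  isinstance(getattr(exc, 'inner_exception', None), etypes))

      def is_retriable(exc):
          # With MySQL-python, the savepoint failure arrives as a bare
          # _mysql_exceptions.OperationalError, which is not a db_exc.DBError,
          # so this returns False and the request is not retried.
          return _is_nested_instance(exc, db_exc.DBError)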

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1630899/+subscriptions



[Yahoo-eng-team] [Bug 1599086] Re: Security groups: exception under load

2016-11-03 Thread Roman Podoliaka
** Changed in: oslo.db
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1599086

Title:
  Security groups: exception under load

Status in neutron:
  Won't Fix
Status in oslo.db:
  Fix Released

Bug description:
  
  For one of the iterations, adding a router interface failed with the DB
error below.

  2016-07-04 17:12:59.057 ERROR neutron.api.v2.resource 
[req-33bb4fd7-25a5-4460-82d0-ab5e5b8d574c 
ctx_rally_8204b9df57e44bcf9804a278c35bf2a4_user_0 
8204b9df57e44bcf9804a278c35bf2a4] add_router_interface failed
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 84, in resource
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource self.force_reraise()
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 217, in _handle_action
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource ret_value = 
getattr(self._plugin, name)(*arg_list, **kwargs)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/vmware-nsx/vmware_nsx/plugins/nsx_v3/plugin.py", line 1509, in 
add_router_interface
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource interface=info)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/vmware-nsx/vmware_nsx/dhcp_meta/rpc.py", line 121, in 
handle_router_metadata_access
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource plugin, 
ctx_elevated, router_id)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/vmware-nsx/vmware_nsx/dhcp_meta/rpc.py", line 171, in 
_create_metadata_access_network
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource {'network': 
net_data})
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/vmware-nsx/vmware_nsx/plugins/nsx_v3/plugin.py", line 452, in 
create_network
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource 
self._ensure_default_security_group(context, tenant_id)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/securitygroups_db.py", line 710, in 
_ensure_default_security_group
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource return 
self._create_default_security_group(context, tenant_id)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/securitygroups_db.py", line 721, in 
_create_default_security_group
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource context, 
security_group, default_sg=True)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/vmware-nsx/vmware_nsx/plugins/nsx_v3/plugin.py", line 1759, in 
create_security_group
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource 
firewall.delete_section(section_id)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource self.force_reraise()
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/vmware-nsx/vmware_nsx/plugins/nsx_v3/plugin.py", line 1736, in 
create_security_group
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource context, 
security_group, default_sg))
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/securitygroups_db.py", line 189, in 
create_security_group
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource

[Yahoo-eng-team] [Bug 1480698] Re: MySQL error - too many connections

2016-11-03 Thread Roman Podoliaka
** Changed in: oslo.db
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480698

Title:
  MySQL error - too many connections

Status in OpenStack Compute (nova):
  Invalid
Status in oslo.db:
  Invalid

Bug description:
  http://logs.openstack.org/59/138659/33/check/gate-tempest-dsvm-
  neutron-linuxbridge/29e7adc/logs/screen-n-api.txt.gz?level=ERROR

  2015-07-21 11:29:53.660 ERROR nova.api.ec2 [req-522a314d-
  e88e-4982-b014-64141aeef73a tempest-EC2KeysTest-362858920 tempest-
  EC2KeysTest-451984995] Unexpected OperationalError raised:
  (_mysql_exceptions.OperationalError) (1040, 'Too many connections')
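  For reference (not a statement about the root cause here), the number of
  connections each API or conductor worker may open is bounded by oslo.db's
  connection pool options in the service's configuration file; the values
  below are illustrative only:

      [database]
      max_pool_size = 10
      max_overflow = 20
      pool_timeout = 30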

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1480698/+subscriptions



[Yahoo-eng-team] [Bug 1631033] Re: scary traceback if nova-manage db sync is run before nova-manage api_db sync

2016-10-25 Thread Roman Podoliaka
** Changed in: oslo.db
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1631033

Title:
  scary traceback if nova-manage db sync is run before nova-manage
  api_db sync

Status in OpenStack Compute (nova):
  Confirmed
Status in oslo.db:
  Fix Released

Bug description:
  During gate runs we're running

  nova-manage db sync
  nova-manage api_db sync

  This leads to the following rather scary stack trace:

  2016-10-06 13:19:15.279 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters [req-962d0663-409b-4e1a-8552-52ecb451a165 - -] 
DBAPIError exception wrapped from (pymysql.err.InternalError) (1049, u"Unknown 
database 'nova_api'")
  2016-10-06 13:19:15.279 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
  2016-10-06 13:19:15.279 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2074, 
in _wrap_pool_connect
  2016-10-06 13:19:15.279 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters return fn()
  2016-10-06 13:19:15.279 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 318, in 
unique_connection
  2016-10-06 13:19:15.279 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters return _ConnectionFairy._checkout(self)
  2016-10-06 13:19:15.279 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 713, in 
_checkout
  2016-10-06 13:19:15.279 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters fairy = _ConnectionRecord.checkout(pool)
  2016-10-06 13:19:15.279 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 480, in 
checkout
  2016-10-06 13:19:15.279 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters rec = pool._do_get()
  2016-10-06 13:19:15.279 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 1060, in 
_do_get
  2016-10-06 13:19:15.279 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters self._dec_overflow()
  2016-10-06 13:19:15.280 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 
60, in __exit__
  2016-10-06 13:19:15.280 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters compat.reraise(exc_type, exc_value, exc_tb)
  2016-10-06 13:19:15.280 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 1057, in 
_do_get
  2016-10-06 13:19:15.280 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters return self._create_connection()
  2016-10-06 13:19:15.280 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 323, in 
_create_connection
  2016-10-06 13:19:15.280 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters return _ConnectionRecord(self)
  2016-10-06 13:19:15.280 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 449, in 
__init__
  2016-10-06 13:19:15.280 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters self.connection = self.__connect()
  2016-10-06 13:19:15.280 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 607, in 
__connect
  2016-10-06 13:19:15.280 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters connection = 
self.__pool._invoke_creator(self)
  2016-10-06 13:19:15.280 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py", line 
97, in connect
  2016-10-06 13:19:15.280 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters return dialect.connect(*cargs, **cparams)
  2016-10-06 13:19:15.280 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 
385, in connect
  2016-10-06 13:19:15.280 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters return self.dbapi.connect(*cargs, **cparams)
  2016-10-06 13:19:15.280 | 2016-10-06 13:19:15.277 21612 ERROR 
oslo_db.sqlalchemy.exc_filters   File 
"

[Yahoo-eng-team] [Bug 1630448] Re: postgres newton post upgrade failure DBAPIError exception wrapped from (psycopg2.ProgrammingError) column build_requests.instance_uuid does not exist

2016-10-05 Thread Roman Podoliaka
Matthew, are you sure you also executed:

nova-manage api_db sync

?

These failures look like the API database was not populated properly.

** Changed in: nova
   Status: Invalid => Incomplete

** Changed in: nova
 Assignee: (unassigned) => Matthew Thode (prometheanfire)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1630448

Title:
  postgres newton post upgrade failure DBAPIError exception wrapped from
  (psycopg2.ProgrammingError) column build_requests.instance_uuid does
  not exist

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  This could be related to https://bugs.launchpad.net/nova/+bug/1630446
  but I am reporting it here because it might not be :D

  Error was encountered after a m->n migration, db sync went fine.
  Error log is attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1630448/+subscriptions



[Yahoo-eng-team] [Bug 1589880] Re: report state failed

2016-09-28 Thread Roman Podoliaka
*** This bug is a duplicate of bug 1517926 ***
https://bugs.launchpad.net/bugs/1517926


I think it's a duplicate of another problem (#1517926) that was fixed in:

rpodolyaka@rpodolyaka-pc:~/src/nova$ git tag --contains 
e0647dd4b2ae9f5f6f908102d2ac447440622785
12.0.1
12.0.2
12.0.3
12.0.4
rpodolyaka@rpodolyaka-pc:~/src/nova$ git show 
e0647dd4b2ae9f5f6f908102d2ac447440622785
commit e0647dd4b2ae9f5f6f908102d2ac447440622785
Author: Roman Podoliaka 
Date:   Thu Nov 19 16:00:01 2015 +0200

servicegroup: stop zombie service due to exception

If an exception is raised out of the _report_state call, we find that
the service no longer reports any updates to the database, so the
service is considered dead, thus creating a kind of zombie service.

I55417a5b91282c69432bb2ab64441c5cea474d31 seems to introduce a
regression, which leads to nova-* services marked as 'down', if an
error happens in a remote nova-conductor while processing a state
report: only Timeout errors are currently handled, but other errors
are possible, e.g. a DBError (wrapped with RemoteError on RPC
client side), if a DB temporarily goes away. This unhandled exception
will effectively break the state reporting thread - service will be
up again only after restart.

While the intention of I55417a5b91282c69432bb2ab64441c5cea474d31 was
to avoid catching all the possible exceptions, it looks like we must
do that to avoid creating a zombie.
The other part of that change was to ensure that during upgrade, we do
not spam the log server about MessagingTimeouts while the
nova-conductors are being restarted. This change ensures that still
happens.

Closes-Bug: #1517926

Change-Id: I44f118f82fbb811b790222face4c74d79795fe21
(cherry picked from commit 49b0d1741c674714fabf24d8409810064b953202)


and you seem to be using version 12.0.0.

Please try to update to the latest stable/liberty version of the code and
re-open the bug if it still reproduces (I did not manage to reproduce it
locally, presumably because I am using a version with the commit above
applied).
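The shape of the fix described in that commit message, as a rough sketch (not
the literal nova code): the periodic state-report callback has to swallow and
log any exception, because an exception that escapes it kills the looping call
and turns the service into a "zombie" that never reports again until restart:

    import logging

    LOG = logging.getLogger(__name__)

    def _report_state(service):
        # Called repeatedly by a FixedIntervalLoopingCall.
        try:
            service.service_ref.save()   # may raise RemoteError/DBError, not just Timeout
        except Exception:
            # Log and keep the reporting loop alive; the next interval retries.
            LOG.exception('Unexpected error while reporting service status')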

** No longer affects: oslo.service

** This bug has been marked a duplicate of bug 1517926
   Nova services stop to report state via remote conductor

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1589880

Title:
  report state failed

Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  Description:
  =
  Set the master database to read_only=on when switching the master nova
  database to a slave; after that, I checked the nova service status:
  # nova-manage service list
  Binary             Host      Zone       Status     State    Updated_At
  nova-consoleauth   11_120    internal   enabled    XXX      2016-06-07 08:28:46
  nova-conductor     11_120    internal   enabled    XXX      2016-06-07 08:28:45
  nova-cert          11_120    internal   enabled    XXX      2016-05-17 08:12:10
  nova-scheduler     11_120    internal   enabled    XXX      2016-05-17 08:12:24
  nova-compute       11_121    bx         enabled    XXX      2016-06-07 08:28:49
  nova-compute       11_122    bx         enabled    XXX      2016-06-07 08:28:42
  =

  Steps to reproduce
  =
  # mysql
  MariaDB [nova]> set global read_only=on;
  =

  Environment
  
  Version:Liberty
  openstack-nova-conductor-12.0.0-1.el7.noarch

  Logs
  

  2016-05-12 11:01:20.343 9198 ERROR oslo.service.loopingcall 
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall [-] Fixed 
interval looping call 'nova.servicegroup.drivers.db.DbDriver._report_state' 
failed
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall Traceback (most 
recent call last):
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 113, in 
_run_loop
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall result = 
func(*self.args, **self.kw)
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", line 87, in 
_report_state
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall 
service.service_ref.save()
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 213, in 
wrapper
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall return 
fn(self, *args, **kwargs)
  2016-05-12 11:01:20.473 9178 ERROR oslo.ser

[Yahoo-eng-team] [Bug 1620989] [NEW] When booting a VM with creation of a new volume, the host AZ info is not passed to Cinder

2016-09-07 Thread Roman Podoliaka
Public bug reported:

Description
===

When attaching volumes across Nova/Cinder AZs is forbidden
(cross_az_attach = False in nova.conf) and you try to boot an instance
without specifying an AZ (i.e. you are ok with any of the AZs the
instance will be scheduled to), and the block device mapping states that a
new volume must be created (e.g. in order to boot from it), then the
info about the AZ won't be passed to Cinder and it will create the new
volume in the default AZ.


Steps to reproduce
==

1. Configure multiple AZs in Nova and Cinder.
2. Disable attaching of volumes across AZs in nova.conf:

[cinder]
cross_az_attach = False

3. Restart nova-compute service.
4. Boot a new VM *without* specifying an AZ explicitly (so that Nova can pick
up a host in *any* of the AZs) and state in the block device mapping that a new
volume must be created, e.g.:

nova boot --block-device source=image,id=decd5d33-fdd5-4736-b10a-
fd2ceebbd224,dest=volume,size=1,shutdown=remove,bootindex=0 --nic net-
id=68038c06-f160-4405-9acc-b3480e3e8830 --flavor m1.nano demo

Expected result
===

Instance is booted successfully.

Actual result
=

Instance failed to boot and goes to ERROR state (Block Device Mapping is
Invalid)

nova-compute log says:

2016-09-07 09:54:26.396 13021 ERROR nova.compute.manager [instance:
1c7de927-9755-4081-99cf-3d1132a9d45a] InvalidVolume: Invalid volume:
Instance 10 and volume e3970ecc-796a-46f8-952f-1bc804aab4a4 are not in
the same availability_zone. Instance is in az1. Volume is in az2

^ this is because a *null* value was passed to Cinder on creation of the new
volume, and cinder-scheduler picked a cinder-volume in the *default* AZ
configured in cinder.conf instead of using the AZ of the host the Nova
instance was scheduled to.
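A hedged sketch of the direction of a fix (names and the exact signature of
Nova's Cinder API wrapper are assumptions here, not taken from any patch):
pass the AZ of the host the instance was scheduled to instead of leaving it
unset, so cinder-scheduler does not fall back to its default AZ:

    def create_boot_volume(volume_api, context, instance, size_gb, image_id):
        # availability_zone was effectively null before, which is what let the
        # new volume land in cinder.conf's default AZ.
        return volume_api.create(context, size_gb,
                                 name=None, description=None,
                                 image_id=image_id,
                                 availability_zone=instance.availability_zone)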
 

Environment
===

DevStack, libvirt, Cinder LVM
Nova version: master (f1b70d9457ae6c1fba3e7ac7c5f8b08d9042f2ba)
Two AZs configured in Nova and Cinder

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New


** Tags: cinder volumes

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Summary changed:

- When a booting a VM with creation of a new volume, the host AZ info is not 
passed to Cinder
+ When booting a VM with creation of a new volume, the host AZ info is not 
passed to Cinder

** Tags added: cinder volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1620989

Title:
  When booting a VM with creation of a new volume, the host AZ info is
  not passed to Cinder

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  When attaching volumes across Nova/Cinder AZs is forbidden
  (cross_az_attach = False in nova.conf) and you try to boot an instance
  without specifying an AZ (i.e. you are ok with any of the AZs the
  instance will be scheduled to), and the block device mapping states that a
  new volume must be created (e.g. in order to boot from it), then the
  info about the AZ won't be passed to Cinder and it will create the new
  volume in the default AZ.

  
  Steps to reproduce
  ==

  1. Configure multiple AZs in Nova and Cinder.
  2. Disable attaching of volumes across AZs in nova.conf:

  [cinder]
  cross_az_attach = False

  3. Restart nova-compute service.
  4. Boot a new VM *without* specifying an AZ explicitly (so that Nova can pick
  up a host in *any* of the AZs) and state in the block device mapping that a new
  volume must be created, e.g.:

  nova boot --block-device source=image,id=decd5d33-fdd5-4736-b10a-
  fd2ceebbd224,dest=volume,size=1,shutdown=remove,bootindex=0 --nic net-
  id=68038c06-f160-4405-9acc-b3480e3e8830 --flavor m1.nano demo

  Expected result
  ===

  Instance is booted successfully.

  Actual result
  =

  Instance failed to boot and goes to ERROR state (Block Device Mapping
  is Invalid)

  nova-compute log says:

  2016-09-07 09:54:26.396 13021 ERROR nova.compute.manager [instance:
  1c7de927-9755-4081-99cf-3d1132a9d45a] InvalidVolume: Invalid volume:
  Instance 10 and volume e3970ecc-796a-46f8-952f-1bc804aab4a4 are not in
  the same availability_zone. Instance is in az1. Volume is in az2

  ^ this is because a *null* value was passed to Cinder on creation of the new
  volume, and cinder-scheduler picked a cinder-volume in the *default* AZ
  configured in cinder.conf instead of using the AZ of the host the Nova
  instance was scheduled to.
   

  Environment
  ===

  DevStack, libvirt, Cinder LVM
  Nova version: master (f1b70d9457ae6c1fba3e7ac7c5f8b08d9042f2ba)
  Two AZs configured in Nova and Cinder

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1620989/+subscriptions


[Yahoo-eng-team] [Bug 1607461] [NEW] nova-compute hangs while executing a blocking call to librbd

2016-07-28 Thread Roman Podoliaka
Public bug reported:

While executing a call to librbd nova-compute may hang for a while and
eventually go down in nova service-list output.

strace'ing shows that a process is stuck on acquiring a mutex:

root@node-153:~# strace -p 16675
Process 16675 attached
futex(0x7fff084ce36c, FUTEX_WAIT_PRIVATE, 1, NULL

gdb allows us to see the traceback:

http://paste.openstack.org/show/542534/

^ which basically means calls to librbd (a C library) are not monkey-
patched and do not allow switching the execution context to another
green thread in an eventlet-based process.

To avoid blocking the whole nova-compute process on calls to librbd,
we should wrap them with tpool.execute()
(http://eventlet.net/doc/threading.html#eventlet.tpool.execute).
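A minimal sketch of that approach (assuming the standard rbd Python bindings;
error handling omitted): the blocking C call runs in a native OS thread from
eventlet's thread pool, so it can no longer freeze every green thread in the
process:

    from eventlet import tpool
    import rbd

    def rbd_image_size(ioctx, name):
        def _size():
            image = rbd.Image(ioctx, name)
            try:
                return image.size()       # blocking librbd call
            finally:
                image.close()
        # Dispatch to a real OS thread; green threads keep running meanwhile.
        return tpool.execute(_size)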

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New


** Tags: ceph compute

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Tags added: ceph compute

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1607461

Title:
  nova-compute hangs while executing a blocking call to librbd

Status in OpenStack Compute (nova):
  New

Bug description:
  While executing a call to librbd nova-compute may hang for a while and
  eventually go down in nova service-list output.

  strace'ing shows that a process is stuck on acquiring a mutex:

  root@node-153:~# strace -p 16675
  Process 16675 attached
  futex(0x7fff084ce36c, FUTEX_WAIT_PRIVATE, 1, NULL

  gdb allows us to see the traceback:

  http://paste.openstack.org/show/542534/

  ^ which basically means calls to librbd (a C library) are not monkey-
  patched and do not allow switching the execution context to another
  green thread in an eventlet-based process.

  To avoid blocking the whole nova-compute process on calls to librbd,
  we should wrap them with tpool.execute()
  (http://eventlet.net/doc/threading.html#eventlet.tpool.execute).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1607461/+subscriptions



[Yahoo-eng-team] [Bug 1606496] [NEW] Instance affinity filters do not work in a heterogeneous cloud with Ironic computes

2016-07-26 Thread Roman Podoliaka
Public bug reported:

Description
===

In a heterogeneous cloud with both libvirt and ironic compute nodes
instance affinity filters like DifferentHostFilter or SameHostFilter do
not filter hosts out when scheduling a subsequent instance.

Steps to reproduce
==

Make sure you have at least two libvirt compute nodes and one ironic
node.

Make sure DifferentHostFilter and SameHostFilter are configured as nova-
scheduler filters in nova.conf, filters scheduler is used.

1. Boot a libvirt instance A.
2. Check the host name of the compute node instance A is running on (nova show 
from an admin user).
3. Boot a libvirt instance B passing a different_host=$A.uuid hint for 
nova-scheduler.
4. Check the host name of the compute node instance B is running on (nova show 
from an admin user).

Expected result
===

Instances A and B are running on two different compute nodes.

Actual result
=

Instances A and B are running on the same compute node.

nova-scheduler logs show that the DifferentHostFilter was run but did not
filter out one of the hosts: "Filter DifferentHostFilter returned 2
host(s) get_filtered_objects"

Environment
===

OpenStack Mitaka

2 libvirt compute nodes
1 ironic compute node
FiltersScheduler is used
DifferentHostFilter and SameHostFilter filters are enabled in nova.conf

Root cause analysis
===

Debugging showed that IronicHostManager is configured to be used by nova-
scheduler instead of the default host manager when Ironic computes are
deployed in the same cloud together with libvirt compute nodes.

IronicHostManager overrides the _get_instance_info() method and
unconditionally returns an empty instance dict, even if this method is
called for non-ironic computes of the same cloud. DifferentHostFilter
and similar filters later use this info to find an intersection of a set
of instances running on a libvirt compute node (currently, always {})
and a set of instances uuids passed as a hint for nova-scheduler, thus
compute nodes are never filtered out and the hint is effectively
ignored.
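A simplified illustration of the failure mode (not the actual nova filter
code): the filter intersects the set of instances on the host with the set of
instance UUIDs from the scheduler hint, so an always-empty instance dict makes
every host pass:

    def different_host_passes(host_instance_uuids, hint_uuids):
        # Host passes only if it runs none of the instances named in the
        # different_host hint. With IronicHostManager._get_instance_info()
        # returning {} for every node, host_instance_uuids is always empty,
        # the intersection is always empty, and the hint is silently ignored.
        return not (set(host_instance_uuids) & set(hint_uuids))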

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New


** Tags: ironic scheduler

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Description changed:

  Description
  ===
  
  In a heterogeneous cloud with both libvirt and ironic compute nodes
  instance affinity filters like DifferentHostFilter or SameHostFilter do
  not filter hosts out when scheduling a subsequent instance.
- 
  
  Steps to reproduce
  ==
  
  Make sure you have at least two libvirt compute nodes and one ironic
  node.
  
  Make sure DifferentHostFilter and SameHostFilter are configured as nova-
  scheduler filters in nova.conf, filters scheduler is used.
  
  1. Boot a libvirt instance A.
  2. Check the host name of the compute node instance A is running on (nova 
show from an admin user).
  3. Boot a libvirt instance B passing a different_host=$A.uuid hint for 
nova-scheduler.
  4. Check the host name of the compute node instance B is running on (nova 
show from an admin user).
  
- 
  Expected result
  ===
  
  Instances A and B are running on two different compute nodes.
- 
  
  Actual result
  =
  
  Instances A and B are running on the same compute node.
  
  nova-scheduler logs shows that DifferentHost filter was run, but did not
  filter out one of the hosts:  Filter DifferentHostFilter returned 2
  host(s) get_filtered_objects
- 
  
  Environment
  ===
  
  OpenStack Mitaka
  
  2 libvirt compute nodes
  1 ironic compute node
  FiltersScheduler is used
  DifferentHostFilter and SameHostFilter filters are enabled in nova.conf
  
- 
  Root cause analysis
  ===
  
  Debugging shown that IronicHostManager is configured to be used by nova-
  scheduler instead of the default host manager, when Ironic compute are
  deployed in the same cloud together with libvirt compute nodes.
  
  IronicHostManager overrides the _get_instance_info() method and
  unconditionally returns an empty instance dict, even if this method is
  called for non-ironic computes of the same cloud. DifferentHostFilter
- and similar filters later use this info to find ab intersection of a set
+ and similar filters later use this info to find an intersection of a set
  of instances running on a libvirt compute node (currently, always {})
  and a set of instances uuids passed as a hint for nova-scheduler, thus
  compute nodes are never filtered out and the hint is effectively
  ignored.

** Tags added: ironic

** Tags added: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1606496

Title:
  Instance affinity filters do not work in a heterogeneous cloud with
  Ironic computes

Status in OpenStack Compute (n

[Yahoo-eng-team] [Bug 1240043] Re: get_server_diagnostics must define a hypervisor-independent API

2016-07-12 Thread Roman Podoliaka
As Matt stated in
https://bugs.launchpad.net/nova/+bug/1240043/comments/11, this was fixed
in API v3, which we dropped, so we still need to fix this in API v2.1 by
means of a new microversion.

CONFIRMED FOR: NEWTON

** Changed in: nova
   Status: Expired => Confirmed

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Changed in: nova
Milestone: 2014.2 => next

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240043

Title:
  get_server_diagnostics must define a hypervisor-independent API

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  get_server_diagnostics currently returns an unrestricted dictionary, which is 
only lightly documented in a few places, e.g.:
  
http://docs.openstack.org/grizzly/openstack-compute/admin/content/configuring-openstack-compute-basics.html

  That documentation shows explicit differences between libvirt and
  XenAPI.

  There are moves to test + enforce the return values, and suggestions
  that Ceilometer may be interested in consuming the output, therefore
  we need an API which is explicitly defined and not depend on
  hypervisor-specific behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1240043/+subscriptions



[Yahoo-eng-team] [Bug 1240043] Re: get_server_diagnostics must define a hypervisor-independent API

2016-06-29 Thread Roman Podoliaka
I'll double check this on my devstack and see what we can do.

** Changed in: nova
 Assignee: Gary Kotton (garyk) => Roman Podoliaka (rpodolyaka)

** Changed in: nova
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240043

Title:
  get_server_diagnostics must define a hypervisor-independent API

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  get_server_diagnostics currently returns an unrestricted dictionary, which is 
only lightly documented in a few places, e.g.:
  
http://docs.openstack.org/grizzly/openstack-compute/admin/content/configuring-openstack-compute-basics.html

  That documentation shows explicit differences between libvirt and
  XenAPI.

  There are moves to test + enforce the return values, and suggestions
  that Ceilometer may be interested in consuming the output, therefore
  we need an API which is explicitly defined and not depend on
  hypervisor-specific behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1240043/+subscriptions



[Yahoo-eng-team] [Bug 1568208] Re: TypeError: cannot concatenate 'str' and 'OptGroup' objects during Guru Meditation Report run

2016-04-13 Thread Roman Podoliaka
** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Changed in: nova
   Status: New => In Progress

** Project changed: oslo.reports => oslo.config

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1568208

Title:
  TypeError: cannot concatenate 'str' and 'OptGroup' objects during Guru
  Meditation Report run

Status in OpenStack Compute (nova):
  In Progress
Status in oslo.config:
  In Progress

Bug description:
  I noticed a trace in a recent tempest job run [1] like the following.
  From what I can tell, somehow rootkey here is an OptGroup object
  instead of the expected str.

  Traceback (most recent call last):
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/guru_meditation_report.py",
 line 180, in handle_signal
  res = cls(version, frame).run()
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/guru_meditation_report.py",
 line 228, in run
  return super(GuruMeditation, self).run()
File "/usr/local/lib/python2.7/dist-packages/oslo_reports/report.py", line 
77, in run
  return "\n".join(six.text_type(sect) for sect in self.sections)
File "/usr/local/lib/python2.7/dist-packages/oslo_reports/report.py", line 
77, in 
  return "\n".join(six.text_type(sect) for sect in self.sections)
File "/usr/local/lib/python2.7/dist-packages/oslo_reports/report.py", line 
102, in __str__
  return self.view(self.generator())
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/views/text/header.py", 
line 36, in __call__
  return six.text_type(self.header) + "\n" + six.text_type(model)
File "/usr/local/lib/python2.7/dist-packages/oslo_reports/models/base.py", 
line 73, in __str__
  return self.attached_view(self_cpy)
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/views/text/generic.py", 
line 153, in __call__
  return "\n".join(serialize(model, None, -1))
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/views/text/generic.py", 
line 124, in serialize
  res.extend(serialize(root[key], key, indent + 1))
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/views/text/generic.py", 
line 113, in serialize
  res.append((self.indent_str * indent) + rootkey)
  TypeError: cannot concatenate 'str' and 'OptGroup' objects
  Unable to run Guru Meditation Report!

  
  [1] 
http://logs.openstack.org/41/302341/5/check/gate-tempest-dsvm-full/f51065e/logs/screen-n-cpu.txt.gz?#_2016-04-08_21_08_08_994
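  The type mismatch can be reproduced in isolation (this only demonstrates the
  failure mode; the actual fix was tracked in oslo.config, per the status
  above):

      from oslo_config import cfg

      group = cfg.OptGroup('database')
      indent = '  ' * 2

      # indent + group raises a TypeError like the one in the traceback above,
      # because group is an OptGroup, not a string. The section key has to be
      # a real string (e.g. the group's name) before it is concatenated:
      line = indent + group.name
      print(line)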

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1568208/+subscriptions



[Yahoo-eng-team] [Bug 1568208] Re: TypeError: cannot concatenate 'str' and 'OptGroup' objects during Guru Meditation Report run

2016-04-13 Thread Roman Podoliaka
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1568208

Title:
  TypeError: cannot concatenate 'str' and 'OptGroup' objects during Guru
  Meditation Report run

Status in OpenStack Compute (nova):
  New
Status in oslo.reports:
  In Progress

Bug description:
  I noticed a trace in a recent tempest job run [1] like the following.
  From what I can tell, somehow rootkey here is an OptGroup object
  instead of the expected str.

  Traceback (most recent call last):
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/guru_meditation_report.py",
 line 180, in handle_signal
  res = cls(version, frame).run()
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/guru_meditation_report.py",
 line 228, in run
  return super(GuruMeditation, self).run()
File "/usr/local/lib/python2.7/dist-packages/oslo_reports/report.py", line 
77, in run
  return "\n".join(six.text_type(sect) for sect in self.sections)
File "/usr/local/lib/python2.7/dist-packages/oslo_reports/report.py", line 
77, in 
  return "\n".join(six.text_type(sect) for sect in self.sections)
File "/usr/local/lib/python2.7/dist-packages/oslo_reports/report.py", line 
102, in __str__
  return self.view(self.generator())
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/views/text/header.py", 
line 36, in __call__
  return six.text_type(self.header) + "\n" + six.text_type(model)
File "/usr/local/lib/python2.7/dist-packages/oslo_reports/models/base.py", 
line 73, in __str__
  return self.attached_view(self_cpy)
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/views/text/generic.py", 
line 153, in __call__
  return "\n".join(serialize(model, None, -1))
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/views/text/generic.py", 
line 124, in serialize
  res.extend(serialize(root[key], key, indent + 1))
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/views/text/generic.py", 
line 113, in serialize
  res.append((self.indent_str * indent) + rootkey)
  TypeError: cannot concatenate 'str' and 'OptGroup' objects
  Unable to run Guru Meditation Report!

  
  [1] 
http://logs.openstack.org/41/302341/5/check/gate-tempest-dsvm-full/f51065e/logs/screen-n-cpu.txt.gz?#_2016-04-08_21_08_08_994

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1568208/+subscriptions



[Yahoo-eng-team] [Bug 1567336] [NEW] instance_info_cache_update() is not retried on deadlock

2016-04-07 Thread Roman Podoliaka
Public bug reported:

Description
=

When Galera is used in multi-writer mode, it's possible that the
instance_info_cache_update() DB API method will be called for the very
same database row concurrently on two different MySQL servers. Due to
how Galera works internally, this will cause a deadlock exception for one
of the callers (see http://www.joinfu.com/2015/01/understanding-
reservations-concurrency-locking-in-nova/ for details).

instance_info_cache_update() is not currently retried on deadlock.
Should that happen, the operation in question may fail, e.g. association of
a floating IP.
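A sketch of what "retried on deadlock" means in oslo.db terms (illustrative
decorator arguments; the real patch may differ): wrapping the DB API function
with oslo.db's retry helper makes the Galera certification failure, which
surfaces as a deadlock, transparent to the caller:

    from oslo_db import api as oslo_db_api

    @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
    def instance_info_cache_update(context, instance_uuid, values):
        # ... existing UPDATE logic; if it loses the Galera write-set
        # certification race a DBDeadlock is raised and the whole call is
        # retried here instead of bubbling up to the API as a 500.
        pass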


Steps to reproduce
===

1. Deploy Galera cluster in multi-writer mode.
2. Ensure there is at least two nova-conductor using two different MySQL 
servers in the Galera cluster.
3. Create an instance.
4. Associate / disassociate floating IPs concurrently (e.g. via Rally)


Expected result
=

All associate / disassociate operations succeed.


Actual result
==

One or more operations fail with an exception in python-novaclient:

  File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 662, 
in remove_floating_ip
self._action('removeFloatingIp', server, {'address': address})
  File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 1279, 
in _action
return self.api.client.post(url, body=body)
  File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 449, in 
post
return self._cs_request(url, 'POST', **kwargs)
  File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 424, in 
_cs_request
resp, body = self._time_request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in 
_time_request
resp, body = self.request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in 
request
raise exceptions.from_response(resp, body, url, method)
ClientException: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-ac412e1c-afcf-4ef3-accc-b5463805ca74)


Environment
==

OpenStack Liberty
Galera cluster (3 nodes) running in multiwriter mode

** Affects: nova
 Importance: Medium
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: In Progress


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1567336

Title:
  instance_info_cache_update() is not retried on deadlock

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  =

  When Galera is used in multi-writer mode, it's possible that the
  instance_info_cache_update() DB API method will be called for the
  very same database row concurrently on two different MySQL servers.
  Due to how Galera works internally, this will cause a deadlock exception
  for one of the callers (see http://www.joinfu.com/2015/01
  /understanding-reservations-concurrency-locking-in-nova/ for details).

  instance_info_cache_update() is not currently retried on deadlock.
  Should that happen, the operation in question may fail, e.g. association
  of a floating IP.

  
  Steps to reproduce
  ===

  1. Deploy Galera cluster in multi-writer mode.
  2. Ensure there is at least two nova-conductor using two different MySQL 
servers in the Galera cluster.
  3. Create an instance.
  4. Associate / disassociate floating IPs concurrently (e.g. via Rally)

  
  Expected result
  =

  All associate / disassociate operations succeed.

  
  Actual result
  ==

  One or more operations fail with an exception in python-novaclient:

File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 662, 
in remove_floating_ip
  self._action('removeFloatingIp', server, {'address': address})
File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 
1279, in _action
  return self.api.client.post(url, body=body)
File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 449, in 
post
  return self._cs_request(url, 'POST', **kwargs)
File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 424, in 
_cs_request
  resp, body = self._time_request(url, method, **kwargs)
File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 397, in 
_time_request
  resp, body = self.request(url, method, **kwargs)
File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 391, in 
request
  raise exceptions.from_response(resp, body, url, method)
  ClientException: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if 

[Yahoo-eng-team] [Bug 1543625] [NEW] nova in mitaka reports osapi_compute and metadata services as down

2016-02-09 Thread Roman Podoliaka
Public bug reported:

nova service-list now reports status of all services defined with
*_listen=$IP configs in nova.conf. These services are just APIs, and not
RPC services, so they shouldn't be present. Moreover, they shouldn't
report as down. The APIs are certainly fulfilling requests as usual.

root@node-4:~# nova service-list
+----+--------------------+-------------------+----------+---------+-------+------------------------+-----------------+
| Id | Binary             | Host              | Zone     | Status  | State | Updated_at             | Disabled Reason |
+----+--------------------+-------------------+----------+---------+-------+------------------------+-----------------+
| 1  | nova-consoleauth   | node-4.domain.tld | internal | enabled | up    | 2016-01-28T14:08:22.00 | -               |
| 2  | nova-scheduler     | node-4.domain.tld | internal | enabled | up    | 2016-01-28T14:08:22.00 | -               |
| 3  | nova-cert          | node-4.domain.tld | internal | enabled | up    | 2016-01-28T14:08:23.00 | -               |
| 4  | nova-conductor     | node-4.domain.tld | internal | enabled | up    | 2016-01-28T14:08:23.00 | -               |
| 5  | nova-osapi_compute | 192.168.0.3       | internal | enabled | down  | -                      | -               |
| 7  | nova-metadata      | 0.0.0.0           | internal | enabled | down  | -                      | -               |
| 8  | nova-compute       | node-6.domain.tld | nova     | enabled | up    | 2016-01-28T14:08:29.00 | -               |
+----+--------------------+-------------------+----------+---------+-------+------------------------+-----------------+
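The "*_listen" options referred to above are the plain bind-address settings in
nova.conf, e.g. (illustrative addresses):

    [DEFAULT]
    osapi_compute_listen = 192.168.0.3
    metadata_listen = 0.0.0.0

They configure where the WSGI API servers bind; they do not correspond to RPC
services that send periodic state reports, which is why rows 5 and 7 above can
never show an Updated_at value.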

** Affects: nova
 Importance: Medium
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: In Progress

** Changed in: nova
   Status: Confirmed => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543625

Title:
  nova in mitaka reports osapi_compute and metadata services as down

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  nova service-list now reports status of all services defined with
  *_listen=$IP configs in nova.conf. These services are just APIs, and
  not RPC services, so they shouldn't be present. Moreover, they
  shouldn't report as down. The APIs are certainly fulfilling requests
  as usual.

  root@node-4:~# nova service-list
  +----+--------------------+-------------------+----------+---------+-------+------------------------+-----------------+
  | Id | Binary             | Host              | Zone     | Status  | State | Updated_at             | Disabled Reason |
  +----+--------------------+-------------------+----------+---------+-------+------------------------+-----------------+
  | 1  | nova-consoleauth   | node-4.domain.tld | internal | enabled | up    | 2016-01-28T14:08:22.00 | -               |
  | 2  | nova-scheduler     | node-4.domain.tld | internal | enabled | up    | 2016-01-28T14:08:22.00 | -               |
  | 3  | nova-cert          | node-4.domain.tld | internal | enabled | up    | 2016-01-28T14:08:23.00 | -               |
  | 4  | nova-conductor     | node-4.domain.tld | internal | enabled | up    | 2016-01-28T14:08:23.00 | -               |
  | 5  | nova-osapi_compute | 192.168.0.3       | internal | enabled | down  | -                      | -               |
  | 7  | nova-metadata      | 0.0.0.0           | internal | enabled | down  | -                      | -               |
  | 8  | nova-compute       | node-6.domain.tld | nova     | enabled | up    | 2016-01-28T14:08:29.00 | -               |
  +----+--------------------+-------------------+----------+---------+-------+------------------------+-----------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1543625/+subscriptions



[Yahoo-eng-team] [Bug 1532890] [NEW] No network connectivity inside VMs after host reboot

2016-01-11 Thread Roman Podoliaka
Public bug reported:

When a host is rebooted and resume_guests_state_on_host_boot is set to True, it 
may happen that nova-compute starts before Neutron L2 agent finishes its 
initialization, which includes creation of an OVS integration bridge, and thus 
we'll see the following error when creating an ovs vif port:

 2016-01-05 02:57:57.970 11932 ERROR nova.network.linux_net 
[req-e5b5ba37-fff7-4b41-9dda-00c2c54a8e22 - - - - -] Unable to execute 
['ovs-vsctl', '--timeout=120', '--', '--if-exists', 'del-port', 
u'qvob2efeceb-e4', '--', 'add-port', u'br-int', u'qvob2efeceb-e4', '--', 'set', 
'Interface', u'qvob2efeceb-e4', 
u'external-ids:iface-id=b2efeceb-e46f-41f5-a1b2-add7f5dd7a80', 
'external-ids:iface-status=active', 
u'external-ids:attached-mac=fa:16:3e:34:91:87', 
'external-ids:vm-uuid=50118e0e-e104-4cc0-b154-b0dcdaef0494']. Exception: 
Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ovs-vsctl --timeout=120 -- 
--if-exists del-port qvob2efeceb-e4 -- add-port br-int qvob2efeceb-e4 -- set 
Interface qvob2efeceb-e4 
external-ids:iface-id=b2efeceb-e46f-41f5-a1b2-add7f5dd7a80 
external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:34:91:87 
external-ids:vm-uuid=50118e0e-e104-4cc0-b154-b0dcdaef0494
Exit code: 1
Stdout: u''
Stderr: u'ovs-vsctl: no bridge named br-int\n'

An instance will be booted successfully, but there will be no network 
connectivity inside the VM.
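
A hedged sketch of a start-up guard that would avoid this race: poll for the
integration bridge before resuming guests and plugging VIFs. The bridge name
(br-int) and the ovs-vsctl invocation mirror the error above; the helper
itself and the timeout values are illustrative only:

    import subprocess
    import time

    def wait_for_ovs_bridge(bridge='br-int', timeout=300, interval=5):
        """Poll until the OVS integration bridge exists or the timeout expires.

        'ovs-vsctl br-exists <bridge>' exits with 0 when the bridge is
        present, so the loop simply retries until that happens.
        """
        deadline = time.time() + timeout
        while time.time() < deadline:
            if subprocess.call(['ovs-vsctl', 'br-exists', bridge]) == 0:
                return True
            time.sleep(interval)
        return False

    # Hypothetical usage before resuming guests on host boot:
    # if not wait_for_ovs_bridge():
    #     raise RuntimeError('br-int never appeared; is the L2 agent running?')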

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: In Progress

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1532890

Title:
  No network connectivity inside VMs after host reboot

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When a host is rebooted and resume_guests_state_on_host_boot is set to True, 
it may happen that nova-compute starts before Neutron L2 agent finishes its 
initialization, which includes creation of an OVS integration bridge, and thus 
we'll see the following error when creating an ovs vif port:
  
   2016-01-05 02:57:57.970 11932 ERROR nova.network.linux_net 
[req-e5b5ba37-fff7-4b41-9dda-00c2c54a8e22 - - - - -] Unable to execute 
['ovs-vsctl', '--timeout=120', '--', '--if-exists', 'del-port', 
u'qvob2efeceb-e4', '--', 'add-port', u'br-int', u'qvob2efeceb-e4', '--', 'set', 
'Interface', u'qvob2efeceb-e4', 
u'external-ids:iface-id=b2efeceb-e46f-41f5-a1b2-add7f5dd7a80', 
'external-ids:iface-status=active', 
u'external-ids:attached-mac=fa:16:3e:34:91:87', 
'external-ids:vm-uuid=50118e0e-e104-4cc0-b154-b0dcdaef0494']. Exception: 
Unexpected error while running command.
  Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ovs-vsctl --timeout=120 
-- --if-exists del-port qvob2efeceb-e4 -- add-port br-int qvob2efeceb-e4 -- set 
Interface qvob2efeceb-e4 
external-ids:iface-id=b2efeceb-e46f-41f5-a1b2-add7f5dd7a80 
external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:34:91:87 
external-ids:vm-uuid=50118e0e-e104-4cc0-b154-b0dcdaef0494
  Exit code: 1
  Stdout: u''
  Stderr: u'ovs-vsctl: no bridge named br-int\n'
  
  An instance will be booted successfully, but there will be no network 
connectivity inside the VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1532890/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518431] Re: Glance failed to upload image to swift storage

2015-12-01 Thread Roman Podoliaka
** No longer affects: mos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1518431

Title:
  Glance failed to upload image to swift storage

Status in Glance:
  Confirmed

Bug description:
  When glance is configured with the swift backend and the swift API is
  provided via RadosGW, it is unable to upload an image.

  Command:
  glance --debug image-create --name trusty_ext4 --disk-format raw 
--container-format bare --file trusty-server-cloudimg-amd64.img --visibility 
public --progress
  Logs:
  http://paste.openstack.org/show/479621/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1518431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517926] [NEW] Nova services stop to report state via remote conductor

2015-11-19 Thread Roman Podoliaka
Public bug reported:

If _report_state() method
(https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/db.py#L85-L111)
of ServiceGroup DB driver fails remotely in nova-conductor, it will
effectively break the service state reporting thread
(https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/db.py#L54-L57)
- this nova service will be considered as 'down' until it's *restarted*.

An example of such remote failure in nova-conductor would be a temporary
DB issue, e.g. http://paste.openstack.org/show/479104/

This seems to be a regression introduced in
https://github.com/openstack/nova/commit/3bc171202163a3810fdc9bdb3bad600487625443
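
A minimal sketch of how the reporting loop can be made to survive a single
remote failure instead of dying (the function names and interval below are
illustrative, not Nova's actual servicegroup code):

    import logging
    import time

    LOG = logging.getLogger(__name__)

    def heartbeat_loop(report_state, interval=10):
        """Keep reporting state even if one report fails remotely.

        Without the try/except, a single exception (e.g. a temporary DB error
        surfaced via nova-conductor) kills the reporting thread for good.
        """
        while True:
            try:
                report_state()
            except Exception:
                LOG.exception('Service state report failed; will retry')
            time.sleep(interval)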

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Description changed:

  If _report_state() method
  
(https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/db.py#L85-L111)
  of ServiceGroup DB driver fails remotely in nova-conductor, it will
  effectively break the service state reporting thread
  
(https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/db.py#L54-L57)
- - this nova service will be considered as 'down' until it's restarted.
+ - this nova service will be considered as 'down' until it's *restarted*.
  
  An example of such remote failure in nova-conductor would be a temporary
  DB issue, e.g. http://paste.openstack.org/show/479104/
  
  This seems to be a regression introduced in
  
https://github.com/openstack/nova/commit/3bc171202163a3810fdc9bdb3bad600487625443

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1517926

Title:
  Nova services stop to report state via remote conductor

Status in OpenStack Compute (nova):
  New

Bug description:
  If _report_state() method
  
(https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/db.py#L85-L111)
  of ServiceGroup DB driver fails remotely in nova-conductor, it will
  effectively break the service state reporting thread
  
(https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/db.py#L54-L57)
  - this nova service will be considered as 'down' until it's
  *restarted*.

  An example of such remote failure in nova-conductor would be a
  temporary DB issue, e.g. http://paste.openstack.org/show/479104/

  This seems to be a regression introduced in
  
https://github.com/openstack/nova/commit/3bc171202163a3810fdc9bdb3bad600487625443

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1517926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496873] [NEW] nova-compute leaves an open file descriptor after failed check for direct IO support

2015-09-17 Thread Roman Podoliaka
Public bug reported:

On my Kilo environment I noticed that nova-compute has an open file
descriptor of a deleted file:

nova-compute 14204 nova   21w   REG  252,00
117440706 /var/lib/nova/instances/.directio.test (deleted)

According to the logs, the check whether the FS supports direct I/O failed:

2015-09-15 22:11:33.171 14204 DEBUG nova.virt.libvirt.driver [req-
f11861ed-bcd8-46cb-8d0b-b7736cce7f80 59d099e0cc1c44e991a02a68dbbb1815
5e6f6da2b2d74a108ccdead3b30f0bcf - - -] Path '/var/lib/nova/instances'
does not support direct I/O: '[Errno 22] Invalid argument'
_supports_direct_io /usr/lib/python2.7/dist-
packages/nova/virt/libvirt/driver.py:2588

Looks like nova-compute doesn't clean up the file descriptors properly,
which means the file will persist until nova-compute is stopped.
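
A hedged, Linux-only sketch of a direct I/O probe that always releases its
file descriptor; the .directio.test file name mirrors the lsof output above,
while the probe logic itself is simplified and illustrative:

    import os

    def supports_direct_io(path):
        """Probe whether 'path' accepts O_DIRECT, always closing the fd."""
        testfile = os.path.join(path, '.directio.test')
        fd = None
        try:
            fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
            return True
        except OSError:
            return False
        finally:
            if fd is not None:
                os.close(fd)
            try:
                os.unlink(testfile)
            except OSError:
                pass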

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: In Progress


** Tags: libvirt

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Tags added: libvirt

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496873

Title:
  nova-compute leaves an open file descriptor after failed check for
  direct IO support

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  On my Kilo environment I noticed that nova-compute has an open file
  descriptor of a deleted file:

  nova-compute 14204 nova   21w   REG  252,00
  117440706 /var/lib/nova/instances/.directio.test (deleted)

  According to logs the check if FS supports direct IO failed:

  2015-09-15 22:11:33.171 14204 DEBUG nova.virt.libvirt.driver [req-
  f11861ed-bcd8-46cb-8d0b-b7736cce7f80 59d099e0cc1c44e991a02a68dbbb1815
  5e6f6da2b2d74a108ccdead3b30f0bcf - - -] Path '/var/lib/nova/instances'
  does not support direct I/O: '[Errno 22] Invalid argument'
  _supports_direct_io /usr/lib/python2.7/dist-
  packages/nova/virt/libvirt/driver.py:2588

  Looks like nova-compute doesn't clean up the file descriptors
  properly, which means the file will persist, until nova-compute is
  stopped.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1496873/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494207] [NEW] novncproxy options in [DEFAULT] group are confusing

2015-09-10 Thread Roman Podoliaka
Public bug reported:

Currently, nova-novncproxy config options reside in the [DEFAULT] group, which
is very confusing given how they are named, e.g.:

cfg.StrOpt('cert',
   default='self.pem',
   help='SSL certificate file'),
cfg.StrOpt('key',
   help='SSL key file (if separate from cert)'),

one would probably expect these options to set SSL key/cert for other
places in Nova as well (e.g. API), but those are used solely in novnc
instead.

We could probably give the noVNC options their own group in the config and
use deprecated_name/deprecated_group for backwards compatibility with
existing config files.
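
A minimal sketch of that idea with oslo.config, assuming a new [vnc] group
(the group name is an assumption; deprecated_group is what keeps old
[DEFAULT]-based config files working):

    from oslo_config import cfg

    CONF = cfg.CONF

    vnc_opts = [
        cfg.StrOpt('cert',
                   default='self.pem',
                   deprecated_group='DEFAULT',
                   help='SSL certificate file used by the noVNC proxy'),
        cfg.StrOpt('key',
                   deprecated_group='DEFAULT',
                   help='SSL key file (if separate from cert)'),
    ]

    CONF.register_opts(vnc_opts, group='vnc')

    # Old configs that keep cert/key in [DEFAULT] continue to work; new
    # configs can use a dedicated [vnc] section instead.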

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1494207

Title:
  novncproxy options in [DEFAULT] group are confusing

Status in OpenStack Compute (nova):
  New

Bug description:
  Now nova-novncproxy config options reside in [DEFAULT] group, which is
  very confusing given the fact how they are named, e.g.:

  cfg.StrOpt('cert',
 default='self.pem',
 help='SSL certificate file'),
  cfg.StrOpt('key',
 help='SSL key file (if separate from cert)'),

  one would probably expect these options to set SSL key/cert for other
  places in Nova as well (e.g. API), but those are used solely in novnc
  instead.

  We could probably give noVNC options their own group in the config and
  use deprecate_name/deprecate_group for backwards compatibility with
  existing config files.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1494207/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484578] [NEW] eventlet.tpool.execute() causes creation of unnecessary OS native threads when running unit tests

2015-08-13 Thread Roman Podoliaka
Public bug reported:

To cooperate with blocking calls which can't be monkey-patched, eventlet
provides support for wrapping those in native OS threads by means of the
eventlet.tpool module. E.g. nova-compute uses it extensively to make sure calls
to libvirt do not block the whole process.

When used in unit tests, eventlet.tpool creates a pool of 20 native OS threads
per test-running process (assuming there was at least one unit test that
actually executed this part of the code in this process).

In unit tests all blocking calls (like calls to libvirt) are monkey-patched
anyway, so there is little sense in wrapping those with tpool.execute()
(as we don't want to test eventlet either).
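
A hedged sketch of how a test base class could avoid spawning the native
thread pool altogether by making tpool.execute() call the wrapped function
inline (the base-class name is illustrative; fixtures.MonkeyPatch is the
standard fixtures API):

    import fixtures
    import testtools

    class NoNativeThreadsTestCase(testtools.TestCase):
        """Run tpool.execute() inline so unit tests never start OS threads."""

        def setUp(self):
            super(NoNativeThreadsTestCase, self).setUp()
            self.useFixture(fixtures.MonkeyPatch(
                'eventlet.tpool.execute',
                lambda func, *args, **kwargs: func(*args, **kwargs)))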

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1484578

Title:
  eventlet.tpool.execute() causes creation of unnecessary OS native
  threads when running unit tests

Status in OpenStack Compute (nova):
  New

Bug description:
  To cooperate with blocking calls, which can't be monkey-patched,  eventlet 
provides support for wrapping those into native OS threads by the means of 
eventlet.tpool module. E.g. nova-compute uses it extensively to make sure calls 
to libvirt does not block the whole process.
  
  When used in unit tests, eventlet.tpool creates a pool of 20 native OS 
threads per test running process (assuming there was at least one unit test to 
actually execute this part of the code in this process).
  
  In unit tests all blocking calls (like calls to libvirt) are monkey-patched 
anyway, so there is little sense to wrap those by the means of tpool.execute() 
(as we don't want to test eventlet either).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1484578/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483287] [NEW] test_models_sync() will be broken on upcoming Alembic versions

2015-08-10 Thread Roman Podoliaka
Public bug reported:

test_models_sync() is currently making assumptions that won't be true
in the upcoming Alembic releases (0.7.7 and 0.8.0 respectively). Unless
we fix it now, it's going to break the gate when those Alembic releases
are cut.

Mike Bayer's comment in the original patch:

https://review.openstack.org/#/c/192760/14/nova/tests/unit/db/test_migrations.py,cm

ML thread:

http://lists.openstack.org/pipermail/openstack-
dev/2015-August/071638.html

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483287

Title:
  test_models_sync() will be broken on upcoming Alembic versions

Status in OpenStack Compute (nova):
  New

Bug description:
  test_models_sync() is currently making assumptions, that won't be true
  in the upcoming Alembic releases (0.7.7 and 0.8.0 respectively).
  Unless we fix it now, it's going to break the gate when the releases
  of Alembic are cut.

  Mike Bayer's comment in the original patch:

  
https://review.openstack.org/#/c/192760/14/nova/tests/unit/db/test_migrations.py,cm

  ML thread:

  http://lists.openstack.org/pipermail/openstack-
  dev/2015-August/071638.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1483287/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469029] Re: Migrations fail going from juno -> kilo

2015-07-21 Thread Roman Podoliaka
Sam's last comment means there is a problem with the Keystone migration
scripts, or, I should really say, they simply do not override the
horrible MySQL default for collation, so newly created tables are not in
utf8.
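
A minimal sketch of what "overriding the default" looks like in a
SQLAlchemy-based migration, pinning both the engine and the charset instead
of inheriting whatever the MySQL server is configured with (the column list
is illustrative, not the real identity_provider schema):

    from sqlalchemy import Column, Integer, MetaData, String, Table

    meta = MetaData()

    identity_provider = Table(
        'identity_provider', meta,
        Column('id', String(64), primary_key=True),
        Column('enabled', Integer),
        mysql_engine='InnoDB',   # don't rely on the server's default engine
        mysql_charset='utf8',    # don't rely on the server's default charset
    )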

** Changed in: oslo.db
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1469029

Title:
  Migrations fail going from juno -> kilo

Status in Keystone:
  In Progress
Status in oslo.db:
  Invalid

Bug description:
  Trying to upgrade from Juno -> Kilo

  keystone-manage db_version
  55

  keystone-manage db_sync
  2015-06-26 16:52:47.494 6169 CRITICAL keystone [-] ProgrammingError: 
(ProgrammingError) (1146, "Table 'keystone_k.identity_provider' doesn't exist") 
'ALTER TABLE identity_provider Engine=InnoDB' ()
  2015-06-26 16:52:47.494 6169 TRACE keystone Traceback (most recent call last):
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
"/opt/kilo/bin/keystone-manage", line 10, in 
  2015-06-26 16:52:47.494 6169 TRACE keystone execfile(__file__)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
"/opt/keystone/bin/keystone-manage", line 44, in 
  2015-06-26 16:52:47.494 6169 TRACE keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
"/opt/keystone/keystone/cli.py", line 585, in main
  2015-06-26 16:52:47.494 6169 TRACE keystone CONF.command.cmd_class.main()
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
"/opt/keystone/keystone/cli.py", line 76, in main
  2015-06-26 16:52:47.494 6169 TRACE keystone 
migration_helpers.sync_database_to_version(extension, version)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
"/opt/keystone/keystone/common/sql/migration_helpers.py", line 247, in 
sync_database_to_version
  2015-06-26 16:52:47.494 6169 TRACE keystone 
_sync_extension_repo(default_extension, version)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
"/opt/keystone/keystone/common/sql/migration_helpers.py", line 232, in 
_sync_extension_repo
  2015-06-26 16:52:47.494 6169 TRACE keystone _fix_federation_tables(engine)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
"/opt/keystone/keystone/common/sql/migration_helpers.py", line 167, in 
_fix_federation_tables
  2015-06-26 16:52:47.494 6169 TRACE keystone engine.execute("ALTER TABLE 
identity_provider Engine=InnoDB")
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
"/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 
1863, in execute
  2015-06-26 16:52:47.494 6169 TRACE keystone return 
connection.execute(statement, *multiparams, **params)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
"/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 
833, in execute
  2015-06-26 16:52:47.494 6169 TRACE keystone return 
self._execute_text(object, multiparams, params)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
"/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 
982, in _execute_text
  2015-06-26 16:52:47.494 6169 TRACE keystone statement, parameters
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
"/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 
1070, in _execute_context
  2015-06-26 16:52:47.494 6169 TRACE keystone context)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
"/opt/kilo/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/compat/handle_error.py",
 line 261, in _handle_dbapi_exception
  2015-06-26 16:52:47.494 6169 TRACE keystone e, statement, parameters, 
cursor, context)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
"/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 
1267, in _handle_dbapi_exception
  2015-06-26 16:52:47.494 6169 TRACE keystone 
util.raise_from_cause(newraise, exc_info)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
"/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 
199, in raise_from_cause
  2015-06-26 16:52:47.494 6169 TRACE keystone reraise(type(exception), 
exception, tb=exc_tb)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
"/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 
1063, in _execute_context
  2015-06-26 16:52:47.494 6169 TRACE keystone context)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
"/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", 
line 442, in do_execute
  2015-06-26 16:52:47.494 6169 TRACE keystone cursor.execute(statement, 
parameters)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
"/opt/kilo/local/lib/python2.7/site-packages/MySQLdb/cursors.py", line 205, in 
execute
  2015-06-26 16:52:47.494 6169 TRACE keystone self.errorhandler(self, exc, 
value)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
"/opt/kilo/local/lib/python2.7/sit

[Yahoo-eng-team] [Bug 1471271] [NEW] Volume detach leaves volume attached to instance on start/rebuild/reboot

2015-07-03 Thread Roman Podoliaka
Public bug reported:

When starting/restarting/rebuilding instances, it may happen that a
volume detach request comes right in the middle of attaching a volume in
the driver. In this case the hypervisor (e.g. libvirt) will throw
DiskNotFound exception in driver.detach_volume() call, but still  the
volume gets attached eventually, when the instance starts.

This leaves the instance in the state, when the volume is `de-facto`
attached to it (i.e. shown in the `virsh dumpxmp $instance` output for
libvirt), but both Nova and Cinder think the volume is actually *not*
in-use.

Steps to reproduce:

1. Create an instance and attach a volume to it.
2. Stop the instance.
3. Start the instance and send a couple of volume-detach requests in a row, 
like:

   nova start demo && nova volume-detach demo $volume_id || nova volume-detach 
demo $volume_id || nova volume-detach demo $volume_id
 || nova volume-detach demo $volume_id

4. Check the cinder list, nova show $inst, virsh dumpxml $inst output.

Expected result:

Both cinder list and nova show report volume is not in-use anymore.
There is no volume related elements in virsh dumpxml output.

Actual result:

Both cinder list and nova show report volume is not in-use anymore. But
virsh dumpxml shows that the volume is still attached to the instance.
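
A hedged sketch of the kind of serialization that would avoid this race:
take a per-instance lock around attach and detach so a detach request can't
interleave with an in-flight attach (names and structure are illustrative,
not Nova's actual code):

    import threading
    from collections import defaultdict

    _instance_locks = defaultdict(threading.Lock)

    def attach_volume(instance_uuid, do_attach):
        # Serialize with any concurrent detach for the same instance.
        with _instance_locks[instance_uuid]:
            do_attach()

    def detach_volume(instance_uuid, do_detach):
        # A detach now waits for an in-flight attach to finish first.
        with _instance_locks[instance_uuid]:
            do_detach()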

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New

** Description changed:

- When starting/restarting/rebuilding instances, it may happen that a volume 
detach request comes right in the middle of attaching a volume in the driver. 
In this case the hypervisor (e.g. libvirt) will throw DiskNotFound exception in 
driver.detach_volume() call, but still  the volume gets attached eventually, 
when the instance starts.
- 
- This leaves the instance in the state, when the volume is `de-facto` attached 
to it (i.e. shown in the `virsh dumpxmp $instance` output for libvirt), but 
both Nova and Cinder think the volume is actually *not* in-use.
+ When starting/restarting/rebuilding instances, it may happen that a
+ volume detach request comes right in the middle of attaching a volume in
+ the driver. In this case the hypervisor (e.g. libvirt) will throw
+ DiskNotFound exception in driver.detach_volume() call, but still  the
+ volume gets attached eventually, when the instance starts.
+ 
+ This leaves the instance in the state, when the volume is `de-facto`
+ attached to it (i.e. shown in the `virsh dumpxmp $instance` output for
+ libvirt), but both Nova and Cinder think the volume is actually *not*
+ in-use.
+ 
+ Steps to reproduce:
+ 
+ 1. Create an instance and attach a volume to it.
+ 2. Stop the instance.
+ 3. Start the instance and send a couple of volume-detach requests in a row, 
like:
+ 
+nova start demo && nova volume-detach demo $volume_id || nova 
volume-detach demo $volume_id || nova volume-detach demo $volume_id
+  || nova volume-detach demo $volume_id
+ 
+ 4. Check the cinder list, nova show $inst, virsh dumpxml $inst output.
+ 
+ Expected result:
+ 
+ Both cinder list and nova show report volume is not in-use anymore.
+ There is no volume related elements in virsh dumpxml output.
+ 
+ Actual result:
+ 
+ Both cinder list and nova show report volume is not in-use anymore. But
+ virsh dumpxml shows that the volume is still attached to the instance.

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1471271

Title:
  Volume detach leaves volume attached to instance on
  start/rebuild/reboot

Status in OpenStack Compute (Nova):
  New

Bug description:
  When starting/restarting/rebuilding instances, it may happen that a
  volume detach request comes right in the middle of attaching a volume
  in the driver. In this case the hypervisor (e.g. libvirt) will throw
  DiskNotFound exception in driver.detach_volume() call, but still  the
  volume gets attached eventually, when the instance starts.

  This leaves the instance in the state, when the volume is `de-facto`
  attached to it (i.e. shown in the `virsh dumpxmp $instance` output for
  libvirt), but both Nova and Cinder think the volume is actually *not*
  in-use.

  Steps to reproduce:

  1. Create an instance and attach a volume to it.
  2. Stop the instance.
  3. Start the instance and send a couple of volume-detach requests in a row, 
like:

 nova start demo && nova volume-detach demo $volume_id || nova 
volume-detach demo $volume_id || nova volume-detach demo $volume_id
   || nova volume-detach demo $volume_id

  4. Check the cinder list, nova show $inst, virsh dumpxml $inst output.

  Expected result:

  Both cinder list and nova show report volume is not in-use anymore.
  There is no volume related elements in virsh dumpxml output.

  Actual result:

  Both cinder list 

[Yahoo-eng-team] [Bug 1471216] [NEW] Rebuild detaches block devices when instance is still powered on

2015-07-03 Thread Roman Podoliaka
Public bug reported:

Because rebuild detaches block devices while the instance is still
powered on, data written to attached volumes can be lost if it hasn't
been fsynced yet.

We can prevent this by allowing the instance to shut down gracefully before
detaching block devices during rebuild.
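
A minimal sketch of the suggested ordering (the guest/bdm objects and method
names are placeholders, not Nova's driver API):

    def rebuild_instance(guest, block_devices, power_off_timeout=60):
        """Illustrative ordering only: flush guest I/O before detaching."""
        # Let the guest OS shut down cleanly so pending writes are fsynced...
        guest.graceful_power_off(timeout=power_off_timeout)
        # ...and only then detach the block devices.
        for bdm in block_devices:
            bdm.detach()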

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1471216

Title:
  Rebuild detaches block devices when instance is still powered on

Status in OpenStack Compute (Nova):
  New

Bug description:
  Due to the fact that rebuild detaches block devices when instance is
  still powered on, data written to attached volumes can possibly be
  lost, if it hasn't been fsynced yet.

  We can prevent this by allowing instance to shut down gracefully
  before detaching block devices during rebuild.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1471216/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444581] [NEW] Rebuild of a volume-backed instance fails

2015-04-15 Thread Roman Podoliaka
Public bug reported:

If you try to rebuild a volume-backed instance, it fails.

malor@ubuntu:~/devstack$ nova image-list
+--------------------------------------+---------------------------------+--------+--------+
| ID                                   | Name                            | Status | Server |
+--------------------------------------+---------------------------------+--------+--------+
| 889d4783-de7f-4277-a2ff-46e6542a7c54 | cirros-0.3.2-x86_64-uec         | ACTIVE |        |
| 867aa81c-ddc3-45e3-9067-b70166c9b2e3 | cirros-0.3.2-x86_64-uec-kernel  | ACTIVE |        |
| b8db9175-1368-4b45-a914-3ba5edcc044a | cirros-0.3.2-x86_64-uec-ramdisk | ACTIVE |        |
| b92c34bb-91ee-426f-b945-8e341a0c8bdb | testvm                          | ACTIVE |        |
+--------------------------------------+---------------------------------+--------+--------+
malor@ubuntu:~/devstack$ neutron net-list
+--------------------------------------+---------+--------------------------------------------------+
| id                                   | name    | subnets                                          |
+--------------------------------------+---------+--------------------------------------------------+
| 243fbe0a-3be7-453b-9884-7df837461769 | private | dde8f0cc-6db0-4d6e-8e72-470535791055 10.0.0.0/24 |
| f4e4436c-27e5-4411-83d2-611f8d9af45c | public  | d753749d-0e14-4927-b9a3-cfccd6d21e09             |
+--------------------------------------+---------+--------------------------------------------------+

Steps to reproduce:

1) build a volume-backed instance

nova boot --flavor m1.nano --nic net-id=243fbe0a-
3be7-453b-9884-7df837461769 --block-device
source=image,id=889d4783-de7f-4277-a2ff-
46e6542a7c54,dest=volume,size=1,shutdown=preserve,bootindex=0  demo

2) rebuild it with a new image

nova rebuild demo b92c34bb-91ee-426f-b945-8e341a0c8bdb


Expected result:

   instance is rebuilt using the new image and is in ACTIVE state

Actual result:

  instance is in ERROR state

  Traceback from nova-compute http://paste.openstack.org/show/204014/

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New


** Tags: volumes

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Summary changed:

- Rebuild of volume-backed instance fails
+ Rebuild of a volume-backed instance fails

** Tags added: volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444581

Title:
  Rebuild of a volume-backed instance fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  If you try to rebuild a volume-backed instance, it fails.

  malor@ubuntu:~/devstack$ nova image-list
  
+--+-+++
  | ID   | Name| 
Status | Server |
  
+--+-+++
  | 889d4783-de7f-4277-a2ff-46e6542a7c54 | cirros-0.3.2-x86_64-uec | 
ACTIVE ||
  | 867aa81c-ddc3-45e3-9067-b70166c9b2e3 | cirros-0.3.2-x86_64-uec-kernel  | 
ACTIVE ||
  | b8db9175-1368-4b45-a914-3ba5edcc044a | cirros-0.3.2-x86_64-uec-ramdisk | 
ACTIVE ||
  | b92c34bb-91ee-426f-b945-8e341a0c8bdb | testvm  | 
ACTIVE ||
  
+--+-+++
  malor@ubuntu:~/devstack$ neutron net-list
  
+--+-+--+
  | id   | name| subnets
  |
  
+--+-+--+
  | 243fbe0a-3be7-453b-9884-7df837461769 | private | 
dde8f0cc-6db0-4d6e-8e72-470535791055 10.0.0.0/24 |
  | f4e4436c-27e5-4411-83d2-611f8d9af45c | public  | 
d753749d-0e14-4927-b9a3-cfccd6d21e09 |
  
+--+-+--+

  Steps to reproduce:

  1) build a volume-backed instance

  nova boot --flavor m1.nano --nic net-id=243fbe0a-
  3be7-453b-9884-7df837461769 --block-device
  source=image,id=889d4783-de7f-4277-a2ff-
  46e6542a7c54,dest=volume,size=1,shutdown=preserve,bootindex=0  demo

  2) rebuild it with a new image

  nova rebuild demo b92c34bb-91ee-426f-b945-8e341a0c8bdb

  
  Expected result:

 instance is rebuilt using the new image and is in ACTIVE state

  Actual result:

instance is in ERROR state

Traceback from nova-compute http://paste.openstack.org/show/204014/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-

[Yahoo-eng-team] [Bug 1440762] [NEW] Rebuild an instance with attached volume fails

2015-04-06 Thread Roman Podoliaka
Public bug reported:

When trying to rebuild an instance with an attached volume, it fails with
the following errors:

2015-02-04 08:41:27.477 22000 TRACE oslo.messaging.rpc.dispatcher libvirtError: 
Failed to terminate process 22913 with SIGKILL: Device or resource busy
2015-02-04 08:41:27.477 22000 TRACE oslo.messaging.rpc.dispatcher
<180>Feb 4 08:43:12 node-2 nova-compute Periodic task is updating the host 
stats, it is trying to get disk info for instance-0003, but the backing 
volume block device was removed by concurrent operations such as resize. Error: 
No volume Block Device Mapping at path: 
/dev/disk/by-path/ip-192.168.0.4:3260-iscsi-iqn.2010-10.org.openstack:volume-82ba5653-3e07-4f0f-b44d-a946f4dedde9-lun-1
<182>Feb 4 08:43:13 node-2 nova-compute VM Stopped (Lifecycle Event)

The full log of rebuild process is here:
http://paste.openstack.org/show/166892/

** Affects: nova
 Importance: Undecided
     Assignee: Roman Podoliaka (rpodolyaka)
 Status: New


** Tags: volumes

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1440762

Title:
  Rebuild an instance with attached volume fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  When trying to rebuild an instance with attached volume, it fails with
  the errors:

  2015-02-04 08:41:27.477 22000 TRACE oslo.messaging.rpc.dispatcher 
libvirtError: Failed to terminate process 22913 with SIGKILL: Device or 
resource busy
  2015-02-04 08:41:27.477 22000 TRACE oslo.messaging.rpc.dispatcher
  <180>Feb 4 08:43:12 node-2 nova-compute Periodic task is updating the host 
stats, it is trying to get disk info for instance-0003, but the backing 
volume block device was removed by concurrent operations such as resize. Error: 
No volume Block Device Mapping at path: 
/dev/disk/by-path/ip-192.168.0.4:3260-iscsi-iqn.2010-10.org.openstack:volume-82ba5653-3e07-4f0f-b44d-a946f4dedde9-lun-1
  <182>Feb 4 08:43:13 node-2 nova-compute VM Stopped (Lifecycle Event)

  The full log of rebuild process is here:
  http://paste.openstack.org/show/166892/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1440762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438183] [NEW] Graceful shutdown of nova-compute service fails

2015-03-30 Thread Roman Podoliaka
Public bug reported:

nova-compute doesn't shut down gracefully on SIGTERM; e.g. booting a VM
fails with:

09:29:18 AUDIT nova.compute.manager [req-9cdbba9c-af3b-4845-9deb-c68bffe63d75 
None] [instance: 7ea3e761-6b85-49db-8dcc-79f6f2286
df8] Starting instance...
09:29:18 INFO nova.openstack.common.service [-] Caught SIGTERM, exiting
...
09:29:37 INFO nova.compute.manager [-] [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] VM Started (Lifecycle Event)
09:29:37 INFO nova.compute.manager [-] [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] VM Paused (Lifecycle Event)
...
09:34:37 WARNING nova.virt.libvirt.driver 
[req-9cdbba9c-af3b-4845-9deb-c68bffe63d75 None] Timeout waiting for vif 
plugging callback for instance 7ea3e761-6b85-49db-8dcc-79f6f2286df8
09:34:37 INFO nova.compute.manager [-] [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] VM Stopped (Lifecycle Event)
09:34:38 INFO nova.virt.libvirt.driver 
[req-9cdbba9c-af3b-4845-9deb-c68bffe63d75 None] [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] Deleting instance files 
/var/lib/nova/instances/7ea3e761-6b85-49db-8dcc-79f6f2286df8
09:34:38 ERROR nova.compute.manager [req-9cdbba9c-af3b-4845-9deb-c68bffe63d75 
None] [instance: 7ea3e761-6b85-49db-8dcc-79f6f2286df8] Instance failed to spawn
09:34:38 TRACE nova.compute.manager [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] Traceback (most recent call last):
09:34:38 TRACE nova.compute.manager [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1773, in _spawn
09:34:38 TRACE nova.compute.manager [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] block_device_info)
09:34:38 TRACE nova.compute.manager [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2299, in 
spawn
09:34:38 TRACE nova.compute.manager [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] block_device_info)
09:34:38 TRACE nova.compute.manager [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3745, in 
_create_domain_and_network
09:34:38 TRACE nova.compute.manager [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] raise 
exception.VirtualInterfaceCreateException()
09:34:38 TRACE nova.compute.manager [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] VirtualInterfaceCreateException: Virtual 
Interface creation failed
09:34:38 TRACE nova.compute.manager [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8]

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1438183

Title:
  Graceful shutdown of nova-compute service fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  nova-compute doesn't shutdown gracefully on SIGTERM, e.g. booting a VM
  fails with:

  09:29:18 AUDIT nova.compute.manager [req-9cdbba9c-af3b-4845-9deb-c68bffe63d75 
None] [instance: 7ea3e761-6b85-49db-8dcc-79f6f2286
  df8] Starting instance...
  09:29:18 INFO nova.openstack.common.service [-] Caught SIGTERM, exiting
  ...
  09:29:37 INFO nova.compute.manager [-] [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] VM Started (Lifecycle Event)
  09:29:37 INFO nova.compute.manager [-] [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] VM Paused (Lifecycle Event)
  ...
  09:34:37 WARNING nova.virt.libvirt.driver 
[req-9cdbba9c-af3b-4845-9deb-c68bffe63d75 None] Timeout waiting for vif 
plugging callback for instance 7ea3e761-6b85-49db-8dcc-79f6f2286df8
  09:34:37 INFO nova.compute.manager [-] [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] VM Stopped (Lifecycle Event)
  09:34:38 INFO nova.virt.libvirt.driver 
[req-9cdbba9c-af3b-4845-9deb-c68bffe63d75 None] [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] Deleting instance files 
/var/lib/nova/instances/7ea3e761-6b85-49db-8dcc-79f6f2286df8
  09:34:38 ERROR nova.compute.manager [req-9cdbba9c-af3b-4845-9deb-c68bffe63d75 
None] [instance: 7ea3e761-6b85-49db-8dcc-79f6f2286df8] Instance failed to spawn
  09:34:38 TRACE nova.compute.manager [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] Traceback (most recent call last):
  09:34:38 TRACE nova.compute.manager [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1773, in _spawn
  09:34:38 TRACE nova.compute.manager [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] block_device_info)
  09:34:38 TRACE nova.compute.manager [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2299, in 
spawn
  09:34:38 TRACE nova.compute.manager [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] block_device_info)
  09:34:38 TRACE nova.compute.manager [instance: 
7ea3e761-6b85-49db-8dcc-79f6f2286df8] File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py",

[Yahoo-eng-team] [Bug 1429093] [NEW] nova allows to boot images with virtual size > root_gb specified in flavor

2015-03-06 Thread Roman Podoliaka
Public bug reported:

It's currently possible to boot an instance from a QCOW2 image whose
virtual size is larger than the root_gb size specified in the given
flavor.

Steps to reproduce:

1. Download a QCOW2 image (e.g. Cirros -
https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-i386-disk.img)

2. Resize the image to a reasonable size:

qemu-img resize cirros-0.3.0-i386-disk.img +9G

3. Upload the image to Glance:

glance image-create --file cirros-0.3.0-i386-disk.img --name cirros-10GB
--is-public True --progress --container-format bare --disk-format qcow2

4. Boot the first VM using a 'correct' flavor (root_gb > virtual size of
the Cirros image), e.g. m1.small (root_gb = 20)

nova boot --image cirros-10GB --flavor m1.small demo-ok

5. Wait until the VM boots.

6. Boot the second VM using an 'incorrect' flavor (root_gb < virtual
size of the Cirros image), e.g. m1.tiny (root_gb = 1):

nova boot --image cirros-10GB --flavor m1.tiny demo-should-fail

7. Wait until the VM boots.

Expected result:

demo-ok is in ACTIVE state
demo-should-fail is in ERROR state (failed with FlavorDiskTooSmall)

Actual result:

demo-ok is in ACTIVE state
demo-should-fail is in ACTIVE state
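
A minimal sketch of the missing check, reading the image's virtual size with
qemu-img and comparing it against the flavor's root_gb; the exception name
follows the expected result above, the rest is illustrative:

    import json
    import subprocess

    class FlavorDiskTooSmall(Exception):
        pass

    def check_virtual_size(image_path, root_gb):
        """Reject images whose virtual size exceeds the flavor's root disk."""
        out = subprocess.check_output(
            ['qemu-img', 'info', '--output=json', image_path])
        virtual_size = json.loads(out)['virtual-size']  # bytes
        if root_gb and virtual_size > root_gb * 1024 ** 3:
            raise FlavorDiskTooSmall(
                'image virtual size %d bytes exceeds root_gb %d' %
                (virtual_size, root_gb))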

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1429093

Title:
  nova allows to boot images with virtual size > root_gb specified in
  flavor

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  It's currently possible to boot an instance from a QCOW2 image, which
  has the virtual size larger than root_gb size specified in the given
  flavor.

  Steps to reproduce:

  1. Download a QCOW2 image (e.g. Cirros -
  https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-i386-disk.img)

  2. Resize the image to a reasonable size:

  qemu-img resize cirros-0.3.0-i386-disk.img +9G

  3. Upload the image to Glance:

  glance image-create --file cirros-0.3.0-i386-disk.img --name cirros-
  10GB --is-public True --progress --container-format bare --disk-format
  qcow2

  4. Boot the first VM using a 'correct' flavor (root_gb > virtual size
  of the Cirros image), e.g. m1.small (root_gb = 20)

  nova boot --image cirros-10GB --flavor m1.small demo-ok

  5. Wait until the VM boots.

  6. Boot the second VM using an 'incorrect' flavor (root_gb < virtual
  size of the Cirros image), e.g. m1.tiny (root_gb = 1):

  nova boot --image cirros-10GB --flavor m1.tiny demo-should-fail

  7. Wait until the VM boots.

  Expected result:

  demo-ok is in ACTIVE state
  demo-should-fail is in ERROR state (failed with FlavorDiskTooSmall)

  Actual result:

  demo-ok is in ACTIVE state
  demo-should-fail is in ACTIVE state

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1429093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1422315] [NEW] novncproxy fails to establish a VNC connection, if a reverse DNS look up times out

2015-02-16 Thread Roman Podoliaka
Public bug reported:

If DNS is configured on a node in such a way that reverse DNS lookups
time out, noVNC will fail to connect to an instance with a 'Connect
timeout' error.

The reverse DNS lookup is done implicitly in BaseHTTPRequestHandler
(part of the standard library) when logging a request. It's not
configurable, so the only way to disable it is to override the method
of the base class (https://github.com/python/cpython/blob/2.6/Lib/BaseHTTPServer.py#L487).

This is only true for the standard http server (used for novncproxy),
as the eventlet implementation (used for xvpvncproxy) seems to use
plain IP addresses without any reverse DNS lookups.
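
A minimal sketch of that override: address_string() is what triggers the
reverse lookup when a request is logged, so returning the raw client IP
avoids it entirely (Python 2 module names, matching the link above):

    import BaseHTTPServer

    class NoReverseDNSRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
        """Log the raw client IP instead of resolving it via reverse DNS."""

        def address_string(self):
            # The base implementation calls socket.getfqdn(), which is what
            # blocks when reverse DNS is broken; return the IP as-is instead.
            return self.client_address[0]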

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: In Progress

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Summary changed:

- novncproxy fails to establish a VNC connection, if reverse DNS loop up times 
out
+ novncproxy fails to establish a VNC connection, if a reverse DNS look up 
times out

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1422315

Title:
  novncproxy fails to establish a VNC connection, if a reverse DNS look
  up times out

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  If DNS is configured on a node in a way, so that reverse DNS look ups
  time out, noVNC will fail to connect to an instance with 'Connect
  timeout' error.
  
  The reverse DNS look up is done implicitly in BaseHTTPRequestHandler
  (part of standard library), when logging a request. It's not
  configurable, so the only way to disable it is to override the method
  of the base class 
(https://github.com/python/cpython/blob/2.6/Lib/BaseHTTPServer.py#L487).
  
  This is only true for the standard http server (used for novncproxy),
  as the eventlet implementation (used for xvpvncproxy) seems to use
  plain IP addresses without any reverse DNS look ups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1422315/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416361] [NEW] Nova doesn't allow to resize down an instance booted from a volume

2015-01-30 Thread Roman Podoliaka
Public bug reported:

If an instance is booted from a volume, Nova won't allow resizing the
instance down, despite the fact that the root (ephemeral) disk is not even used:

http://paste.openstack.org/show/164108/
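
A hedged sketch of the kind of special case that could allow this: skip the
"can't resize disk down" guard when the instance is volume-backed, since its
root disk doesn't live on local ephemeral storage (names are placeholders,
not Nova's actual resize code). The command output below shows the current
behaviour:

    class ResizeError(Exception):
        pass

    def check_resize_down(old_root_gb, new_root_gb, is_volume_backed):
        """Illustrative check only: volume-backed roots don't use root_gb."""
        if is_volume_backed:
            return  # the root disk lives on the volume; nothing to shrink
        if new_root_gb < old_root_gb:
            raise ResizeError('Unable to resize disk down.')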

malor@ubuntu:~/devstack$ nova resize --poll demo m1.tiny

Server resizing... 100% complete
Finished

but in nova-compute.log:

2015-01-30 11:06:14.535 ERROR oslo.messaging.rpc.dispatcher 
[req-c788c8b2-d953-4f7f-962c-afca7ad41ff7 demo demo] Exception during message 
handling: Resize error: Unable to resize disk down.
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
137, in _dispatch_and_reply
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
180, in _dispatch
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
126, in _do_dispatch
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 88, in wrapped
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher payload)
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82, in 
__exit__
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 71, in wrapped
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher return f(self, 
context, *args, **kw)
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 296, in decorated_function
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher pass
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82, in 
__exit__
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 281, in decorated_function
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 346, in decorated_function
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 269, in decorated_function
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher 
migration.instance_uuid, exc_info=True)
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82, in 
__exit__
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 255, in decorated_function
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 324, in decorated_function
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82, in 
__exit__
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 312, in decorated_function
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 3851, in resize_instance
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher 
self.instance_events.clear_events_for_instance(instance)
2015-01-30 11:06:14.535 TRACE oslo.messaging.rpc.dispatcher   Fil

[Yahoo-eng-team] [Bug 1410235] [NEW] TIMEOUT_SCALING_FACTOR is ignored in migration tests

2015-01-13 Thread Roman Podoliaka
Public bug reported:

After reusing the oslo.db migration test cases, TIMEOUT_SCALING_FACTOR is
now ignored and the general timeout value is used in migration test cases
(currently, OS_TEST_TIMEOUT=160 in .testr.conf), which may cause
sporadic test failures depending on the test node load.
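
A minimal sketch of re-applying a scaling factor on top of OS_TEST_TIMEOUT in
a migration test base class (the factor value and class name are
illustrative):

    import os

    import fixtures
    import testtools

    TIMEOUT_SCALING_FACTOR = 2  # illustrative value

    class MigrationTestCase(testtools.TestCase):
        def setUp(self):
            super(MigrationTestCase, self).setUp()
            base_timeout = int(os.environ.get('OS_TEST_TIMEOUT', 0))
            if base_timeout > 0:
                self.useFixture(fixtures.Timeout(
                    base_timeout * TIMEOUT_SCALING_FACTOR, gentle=True))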

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1410235

Title:
  TIMEOUT_SCALING_FACTOR is ignored in migration tests

Status in OpenStack Compute (Nova):
  New

Bug description:
  After reusing oslo.db migrations test cases TIMEOUT_SCALING_FACTOR is
  now ignored and general timeout value is used in migration test cases
  (currently, OS_TEST_TIMEOUT=160 in .testr.conf), which may cause
  sporadic test failures depending on the test node load.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1410235/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397956] [NEW] Incorrect available free space when datastore_regex is used for vcenter

2014-12-01 Thread Roman Podoliaka
Public bug reported:

When vCenter is used as the hypervisor, the datastore_regex option is ignored
when calculating the available free space (which affects nova hypervisor-
stats/Horizon and the scheduling of new instances).

The datastore_regex value should be passed down the stack when the
datastores are selected.
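
A minimal sketch of applying datastore_regex while summing capacity; the
datastore dicts below are simplified stand-ins for the vCenter API results:

    import re

    def free_space_gb(datastores, datastore_regex=None):
        """Sum free space only over datastores matching datastore_regex."""
        pattern = re.compile(datastore_regex) if datastore_regex else None
        total_free = 0
        for ds in datastores:
            if pattern and not pattern.match(ds['name']):
                continue
            total_free += ds['free_bytes']
        return total_free / (1024.0 ** 3)

    # Hypothetical usage:
    # free_space_gb([{'name': 'nfs-ds1', 'free_bytes': 2 * 1024 ** 4}], r'nfs-.*')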

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: Confirmed


** Tags: vmware

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Changed in: nova
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1397956

Title:
  Incorrect available free space when datastore_regex is used for
  vcenter

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  When vCenter is used as hypervisor, datastore_regex option is ignored
  when calculating free space available (which affects nova hypervisor-
  stats/Horizon and scheduling of new instances).

  datastore_regex value should be passed down the stack when the
  datastores are selected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1397956/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1386525] Re: (DataError) invalid input syntax for type inet: "my.invalid.ip"

2014-11-24 Thread Roman Podoliaka
This fails directly in PostgreSQL when the SQL statement is executed.
There is nothing oslo.db can do for you here except raise the correct
exception (DataError).
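
Since the failure happens inside PostgreSQL when the inet value is parsed,
the caller is the right place to validate input. A minimal sketch using
netaddr (treat its use here as an assumption about the caller's
dependencies):

    import netaddr

    def validate_fixed_ip(address):
        """Reject values that PostgreSQL's inet type would refuse anyway."""
        try:
            return str(netaddr.IPAddress(address))
        except (netaddr.AddrFormatError, ValueError):
            raise ValueError('%r is not a valid IP address' % address)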

** Changed in: oslo.db
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1386525

Title:
  (DataError) invalid input syntax for type inet: "my.invalid.ip"

Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo Database library:
  Invalid

Bug description:
  DataError: (DataError) invalid input syntax for type inet:
  "my.invalid.ip"

  
  Traceback (most recent call last):
File 
"/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/compat/handle_error.py",
 line 59, in _handle_dbapi_exception
  e, statement, parameters, cursor, context)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
1024, in _handle_dbapi_exception
  exc_info
File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 
196, in raise_from_cause
  reraise(type(exception), exception, tb=exc_tb)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
867, in _execute_context
  context)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 
324, in do_execute
  cursor.execute(statement, parameters)
  DataError: (DataError) invalid input syntax for type inet: "my.invalid.ip"
  LINE 3: ...ERE fixed_ips.deleted = 0 AND fixed_ips.address = 'my.invali... 
   ^
   'SELECT fixed_ips.created_at AS fixed_ips_created_at, fixed_ips.updated_at 
AS fixed_ips_updated_at, fixed_ips.deleted_at AS fixed_ips_deleted_at, 
fixed_ips.deleted AS fixed_ips_deleted, fixed_ips.id AS fixed_ips_id, 
fixed_ips.address AS fixed_ips_address, fixed_ips.network_id AS 
fixed_ips_network_id, fixed_ips.virtual_interface_id AS 
fixed_ips_virtual_interface_id, fixed_ips.instance_uuid AS 
fixed_ips_instance_uuid, fixed_ips.allocated AS fixed_ips_allocated, 
fixed_ips.leased AS fixed_ips_leased, fixed_ips.reserved AS fixed_ips_reserved, 
fixed_ips.host AS fixed_ips_host \nFROM fixed_ips \nWHERE fixed_ips.deleted = 
%(deleted_1)s AND fixed_ips.address = %(address_1)s \n LIMIT %(param_1)s' 
{'param_1': 1, 'address_1': 'my.invalid.ip', 'deleted_1': 0}
  2014-10-27 17:44:32.094 25193 TRACE oslo.db.sqlalchemy.exc_filters 

  Seen in nova-api

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1386525/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367218] Re: Broken mysql connection causes internal server error

2014-11-24 Thread Roman Podoliaka
*** This bug is a duplicate of bug 1374497 ***
https://bugs.launchpad.net/bugs/1374497

** This bug is no longer a duplicate of bug 1361378
   "MySQL server has gone away" again
** This bug has been marked a duplicate of bug 1374497
   change in oslo.db "ping" handling is causing issues in projects that are not 
using transactions

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1367218

Title:
  Broken mysql connection causes internal server error

Status in OpenStack Identity (Keystone):
  Confirmed

Bug description:
  When mysql connection is broken (mysql server is restarted or virtual
  IP is moved around in typical HA setup), then keystone doesn't notice
  that connection was closed on the other side and first request after
  this outage fails. Because other openstack services autenticate
  incoming requests with keystone, the "first after-outage" request
  fails no matter what service is contacted.

  I think the problem might be solved by catching DBConnectionError in the
  sql backend and reconnecting to the mysql server before an internal server
  error is returned to the user. Alternatively, it could be solved by adding
  heartbeat checks for the mysql connection (which is probably more
  complex).
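
  A minimal sketch of the first option (illustrative, not actual keystone
  code, assuming the oslo.db wrapped exception is available): catch the
  connection error once and retry the query on a fresh session.

    from oslo.db import exception as db_exc

    def query_with_reconnect(session_maker, do_query):
        # The first attempt after an outage may hit a stale pooled
        # connection; the retry gets a freshly opened one.
        for attempt in (1, 2):
            session = session_maker()
            try:
                return do_query(session)
            except db_exc.DBConnectionError:
                session.rollback()
                if attempt == 2:
                    raise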

  Example of failed request and server side error:

  [tripleo@dell-per720xd-01 tripleo]$ keystone service-list
  Authorization Failed: An unexpected error prevented the server from 
fulfilling your request. (HTTP 500)

  Server log:
  Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 ERROR keystone.common.wsgi [-] (OperationalError) 
(2006, 'MySQL server has gone away') 'SELECT user.id AS user_id, user.name AS 
user_name, user.domain_id AS user_domain_id, user.password AS user_password, 
user.enabled AS user_enabled, user.extra AS user_extra, user.default_project_id 
AS user_default_project_id \nFROM user \nWHERE user.name = %s AND 
user.domain_id = %s' ('admin', 'default')
  Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi Traceback (most recent 
call last):
  Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/keystone/common/wsgi.py",
 line 223, in __call__
  Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi result = 
method(context, **params)
  Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/keystone/token/controllers.py",
 line 100, in authenticate
  Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi context, auth)
  Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/keystone/token/controllers.py",
 line 287, in _authenticate_local
  Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi username, 
CONF.identity.default_domain_id)
  Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/keystone/identity/core.py",
 line 182, in wrapper
  Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi return f(self, 
*args, **kwargs)
  Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/keystone/identity/core.py",
 line 193, in wrapper
  Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi return f(self, 
*args, **kwargs)
  Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/keystone/identity/core.py",
 line 580, in get_user_by_name
  Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi ref = 
driver.get_user_by_name(user_name, domain_id)
  Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/keystone/identity/backends/sql.p

[Yahoo-eng-team] [Bug 1361378] Re: "MySQL server has gone away" again

2014-11-24 Thread Roman Podoliaka
*** This bug is a duplicate of bug 1374497 ***
https://bugs.launchpad.net/bugs/1374497

** This bug has been marked a duplicate of bug 1374497
   change in oslo.db "ping" handling is causing issues in projects that are not 
using transactions

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1361378

Title:
  "MySQL server has gone away" again

Status in OpenStack Identity (Keystone):
  Invalid
Status in Oslo Database library:
  Incomplete

Bug description:
  This is a regression of an old issue, which I thought was resolved by
  the "SELECT 1;" hack, but perhaps recently reintroduced with oslo.db?

  [Mon Aug 25 14:30:54.403538 2014] [:error] [pid 25778:tid 139886259214080] 
25778 ERROR keystone.common.wsgi [-] (OperationalError) (2003, "Can't connect 
to MySQL server on '127.0.0.1' (111)") None None
  [Mon Aug 25 14:30:54.403562 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi Traceback (most recent call last):
  [Mon Aug 25 14:30:54.403570 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/opt/stack/new/keystone/keystone/common/wsgi.py", line 214, in __call__
  [Mon Aug 25 14:30:54.403575 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi result = method(context, **params)
  [Mon Aug 25 14:30:54.403581 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/opt/stack/new/keystone/keystone/token/controllers.py", line 99, in 
authenticate
  [Mon Aug 25 14:30:54.403589 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi context, auth)
  [Mon Aug 25 14:30:54.403594 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/opt/stack/new/keystone/keystone/token/controllers.py", line 308, in 
_authenticate_local
  [Mon Aug 25 14:30:54.403600 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi username, CONF.identity.default_domain_id)
  [Mon Aug 25 14:30:54.403606 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/opt/stack/new/keystone/keystone/identity/core.py", line 182, in wrapper
  [Mon Aug 25 14:30:54.403612 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi return f(self, *args, **kwargs)
  [Mon Aug 25 14:30:54.403618 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/opt/stack/new/keystone/keystone/identity/core.py", line 193, in wrapper
  [Mon Aug 25 14:30:54.403624 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi return f(self, *args, **kwargs)
  [Mon Aug 25 14:30:54.403630 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/opt/stack/new/keystone/keystone/identity/core.py", line 579, in 
get_user_by_name
  [Mon Aug 25 14:30:54.403637 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi ref = driver.get_user_by_name(user_name, 
domain_id)
  [Mon Aug 25 14:30:54.403644 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/opt/stack/new/keystone/keystone/identity/backends/sql.py", line 140, in 
get_user_by_name
  [Mon Aug 25 14:30:54.403650 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi user_ref = query.one()
  [Mon Aug 25 14:30:54.403656 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2310, in one
  [Mon Aug 25 14:30:54.403662 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi ret = list(self)
  [Mon Aug 25 14:30:54.403667 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2353, in 
__iter__
  [Mon Aug 25 14:30:54.403673 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi return self._execute_and_instances(context)
  [Mon Aug 25 14:30:54.403680 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2366, in 
_execute_and_instances
  [Mon Aug 25 14:30:54.403731 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi close_with_result=True)
  [Mon Aug 25 14:30:54.403740 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2357, in 
_connection_from_session
  [Mon Aug 25 14:30:54.403746 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi **kw)
  [Mon Aug 25 14:30:54.403752 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.

[Yahoo-eng-team] [Bug 1332660] Re: Update statistics from computes if RBD ephemeral is used

2014-11-19 Thread Roman Podoliaka
** Changed in: mos/4.1.x
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1332660

Title:
  Update statistics from computes if RBD ephemeral is used

Status in Mirantis OpenStack:
  Fix Committed
Status in Mirantis OpenStack 4.1.x series:
  Won't Fix
Status in Mirantis OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack 5.1.x series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  If we use RBD as the backend for ephemeral drives, compute nodes still
  calculate their available disk size by looking at the local disks.
  This is the code path they use:

  * nova/compute/manager.py

  def update_available_resource(self, context):
  """See driver.get_available_resource()

  Periodic process that keeps that the compute host's understanding of
  resource availability and usage in sync with the underlying 
hypervisor.

  :param context: security context
  """
  new_resource_tracker_dict = {}
  nodenames = set(self.driver.get_available_nodes())
  for nodename in nodenames:
  rt = self._get_resource_tracker(nodename)
  rt.update_available_resource(context)
  new_resource_tracker_dict[nodename] = rt
  
  def _get_resource_tracker(self, nodename):
  rt = self._resource_tracker_dict.get(nodename)
  if not rt:
  if not self.driver.node_is_available(nodename):
  raise exception.NovaException(
  _("%s is not a valid node managed by this "
"compute host.") % nodename)

  rt = resource_tracker.ResourceTracker(self.host,
self.driver,
nodename)
  self._resource_tracker_dict[nodename] = rt
  return rt

  * nova/compute/resource_tracker.py

  def update_available_resource(self, context):
  """Override in-memory calculations of compute node resource usage 
based
  on data audited from the hypervisor layer.

  Add in resource claims in progress to account for operations that have
  declared a need for resources, but not necessarily retrieved them from
  the hypervisor layer yet.
  """
  LOG.audit(_("Auditing locally available compute resources"))
  resources = self.driver.get_available_resource(self.nodename)

  * nova/virt/libvirt/driver.py

  def get_local_gb_info():
  """Get local storage info of the compute node in GB.

  :returns: A dict containing:
   :total: How big the overall usable filesystem is (in gigabytes)
   :free: How much space is free (in gigabytes)
   :used: How much space is used (in gigabytes)
  """

  if CONF.libvirt_images_type == 'lvm':
  info = libvirt_utils.get_volume_group_info(
   CONF.libvirt_images_volume_group)
  else:
  info = libvirt_utils.get_fs_info(CONF.instances_path)

  for (k, v) in info.iteritems():
  info[k] = v / (1024 ** 3)

  return info

  
  It would be nice to have something like "libvirt_utils.get_rbd_info" which 
could be used in case CONF.libvirt_images_type == 'rbd'
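
  A minimal sketch of what such a helper could look like, assuming the
  rados Python bindings are available on the compute node (the name and
  config handling are illustrative, not actual nova code); it returns bytes
  so the existing division by 1024 ** 3 in get_local_gb_info still applies:

    import rados

    def get_rbd_info(ceph_conf='/etc/ceph/ceph.conf', rados_id='admin'):
        # Report cluster-wide capacity instead of the local filesystem.
        cluster = rados.Rados(conffile=ceph_conf, rados_id=rados_id)
        cluster.connect()
        try:
            stats = cluster.get_cluster_stats()
        finally:
            cluster.shutdown()
        return {'total': stats['kb'] * 1024,
                'free': stats['kb_avail'] * 1024,
                'used': stats['kb_used'] * 1024}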

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1332660/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332660] Re: Update statistics from computes if RBD ephemeral is used

2014-10-02 Thread Roman Podoliaka
** Also affects: mos/4.1.x
   Importance: Undecided
   Status: New

** Changed in: fuel/4.1.x
   Status: In Progress => Triaged

** Changed in: mos/4.1.x
   Status: New => Triaged

** Changed in: mos/4.1.x
   Importance: Undecided => High

** Changed in: mos/4.1.x
 Assignee: (unassigned) => MOS Nova (mos-nova)

** Changed in: mos/4.1.x
Milestone: None => 4.1.2

** No longer affects: fuel

** No longer affects: fuel/4.1.x

** No longer affects: fuel/5.0.x

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1332660

Title:
  Update statistics from computes if RBD ephemeral is used

Status in Mirantis OpenStack:
  Fix Committed
Status in Mirantis OpenStack 4.1.x series:
  Triaged
Status in Mirantis OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack 5.1.x series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  If we use RBD as the backend for ephemeral drives, compute nodes still
  calculate their available disk size by looking at the local disks.
  This is the code path they use:

  * nova/compute/manager.py

  def update_available_resource(self, context):
  """See driver.get_available_resource()

  Periodic process that keeps that the compute host's understanding of
  resource availability and usage in sync with the underlying 
hypervisor.

  :param context: security context
  """
  new_resource_tracker_dict = {}
  nodenames = set(self.driver.get_available_nodes())
  for nodename in nodenames:
  rt = self._get_resource_tracker(nodename)
  rt.update_available_resource(context)
  new_resource_tracker_dict[nodename] = rt
  
  def _get_resource_tracker(self, nodename):
  rt = self._resource_tracker_dict.get(nodename)
  if not rt:
  if not self.driver.node_is_available(nodename):
  raise exception.NovaException(
  _("%s is not a valid node managed by this "
"compute host.") % nodename)

  rt = resource_tracker.ResourceTracker(self.host,
self.driver,
nodename)
  self._resource_tracker_dict[nodename] = rt
  return rt

  * nova/compute/resource_tracker.py

  def update_available_resource(self, context):
  """Override in-memory calculations of compute node resource usage 
based
  on data audited from the hypervisor layer.

  Add in resource claims in progress to account for operations that have
  declared a need for resources, but not necessarily retrieved them from
  the hypervisor layer yet.
  """
  LOG.audit(_("Auditing locally available compute resources"))
  resources = self.driver.get_available_resource(self.nodename)

  * nova/virt/libvirt/driver.py

  def get_local_gb_info():
  """Get local storage info of the compute node in GB.

  :returns: A dict containing:
   :total: How big the overall usable filesystem is (in gigabytes)
   :free: How much space is free (in gigabytes)
   :used: How much space is used (in gigabytes)
  """

  if CONF.libvirt_images_type == 'lvm':
  info = libvirt_utils.get_volume_group_info(
   CONF.libvirt_images_volume_group)
  else:
  info = libvirt_utils.get_fs_info(CONF.instances_path)

  for (k, v) in info.iteritems():
  info[k] = v / (1024 ** 3)

  return info

  
  It would be nice to have something like "libvirt_utils.get_rbd_info" which 
could be used in case CONF.libvirt_images_type == 'rbd'

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1332660/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370782] [NEW] SecurityGroupExists error when booting multiple instances concurrently

2014-09-17 Thread Roman Podoliaka
Public bug reported:

If the default security group doesn't exist for some particular tenant,
booting a few instances concurrently may lead to a SecurityGroupExists
error, as one thread wins the race and creates the security group while
the others fail.

This is easily reproduced by running Rally jobs in the gate.

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: In Progress


** Tags: db

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Changed in: nova
   Status: New => In Progress

** Description changed:

  If the default security group doesn't exist for some particular tenant,
  booting of a few instances concurrently may lead to SecurityGroupExists
  error as one thread will win the race and create the security group, and
  others will fail.
+ 
+ This is easily reproduced by Rally jobs in the gate.

** Description changed:

  If the default security group doesn't exist for some particular tenant,
  booting of a few instances concurrently may lead to SecurityGroupExists
  error as one thread will win the race and create the security group, and
  others will fail.
  
- This is easily reproduced by Rally jobs in the gate.
+ This is easily reproduced by running Rally jobs in the gate.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370782

Title:
  SecurityGroupExists error when booting multiple instances concurrently

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  If the default security group doesn't exist for some particular
  tenant, booting a few instances concurrently may lead to a
  SecurityGroupExists error, as one thread wins the race and creates
  the security group while the others fail.

  This is easily reproduced by running Rally jobs in the gate.
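
  The usual fix for this kind of race is get-or-create: let the loser catch
  the duplicate error and re-read the row the winner created. A minimal
  sketch (illustrative, not the actual nova code):

    from oslo.db import exception as db_exc

    def security_group_ensure_default(context, session, model):
        try:
            with session.begin(subtransactions=True):
                group = model(name='default', project_id=context.project_id)
                session.add(group)
                return group
        except db_exc.DBDuplicateEntry:
            # Another request created it first; reuse the existing row.
            return (session.query(model)
                    .filter_by(name='default',
                               project_id=context.project_id)
                    .one())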

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370782/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334164] Re: nova error migrating VMs with floating ips: 'FixedIP' object has no attribute '_sa_instance_state'

2014-09-10 Thread Roman Podoliaka
** No longer affects: mos/6.0.x

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334164

Title:
  nova error migrating VMs with floating ips: 'FixedIP' object has no
  attribute '_sa_instance_state'

Status in Fuel: OpenStack installer that works:
  Fix Committed
Status in Fuel for OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack:
  Fix Committed
Status in Mirantis OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack 5.1.x series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Seeing this in conductor logs when migrating a VM with a floating IP
  assigned:

  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 1019, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 527, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_api.migrate_instance_start(context, instance, migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 94, in wrapped
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
func(self, context, *args, **kwargs)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 543, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_rpcapi.migrate_instance_start(context, **args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/rpcapi.py", line 350, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
floating_addresses=floating_addresses)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py", line 150, in 
call
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
wait_for_reply=True, timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/transport.py", line 90, in 
_send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
409, in send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._send(target, ctxt, message, wait_for_reply, timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
402, in _send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher raise 
result
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
AttributeError: 'FixedIP' object has no attribute '_sa_instance_state'

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1334164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367344] [NEW] Libvirt Watchdog support is broken when ComputeCapabilitiesFilter is used

2014-09-09 Thread Roman Podoliaka
Public bug reported:

The doc (http://docs.openstack.org/admin-guide-cloud/content/customize-
flavors.html , section "Watchdog behavior") suggests using the flavor
extra specs property called "hw_watchdog_action" to configure a watchdog
device for libvirt guests. Unfortunately, this is broken because
ComputeCapabilitiesFilter tries to use this property to filter compute
hosts, so scheduling of a new instance always fails with a
NoValidHostFound error.

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New


** Tags: libvirt

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367344

Title:
  Libvirt Watchdog support is broken when ComputeCapabilitiesFilter is
  used

Status in OpenStack Compute (Nova):
  New

Bug description:
  The doc (http://docs.openstack.org/admin-guide-cloud/content
  /customize-flavors.html , section "Watchdog behavior") suggests using
  the flavor extra specs property called "hw_watchdog_action" to
  configure a watchdog device for libvirt guests. Unfortunately, this is
  broken because ComputeCapabilitiesFilter tries to use this property to
  filter compute hosts, so scheduling of a new instance always fails
  with a NoValidHostFound error.
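
  A rough illustration of why scheduling fails (a simplified sketch, not
  the actual filter code): an unscoped extra spec key such as
  "hw_watchdog_action" ends up being treated as a required host capability,
  and since no host reports such a capability, every host is rejected.

    def host_satisfies_extra_specs(host_capabilities, extra_specs):
        for key, required in extra_specs.items():
            scope = key.split(':', 1)
            if len(scope) > 1 and scope[0] != 'capabilities':
                # Scoped specs meant for other filters are ignored.
                continue
            capability = host_capabilities.get(scope[-1])
            if capability is None or str(capability) != str(required):
                # 'hw_watchdog_action' never matches -> no valid host.
                return False
        return True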

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367344/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1283987] Re: Query Deadlock when creating >200 servers at once in sqlalchemy

2014-09-03 Thread Roman Podoliaka
** Changed in: oslo.db
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1283987

Title:
  Query Deadlock when creating >200 servers at once in sqlalchemy

Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo Database library:
  Fix Released

Bug description:
  Query Deadlock when creating >200 servers at once in sqlalchemy.

  

  This bug occurred while I was testing this bug:
  https://bugs.launchpad.net/nova/+bug/1270725

  The original info is logged here:
  http://paste.openstack.org/show/61534/

  --

  After checking the error log, we can see that the function that hit the
  deadlock is 'all()' in the SQLAlchemy framework.

  Previously, we used the '@retry_on_dead_lock' decorator to retry requests
  when a deadlock occurs.

  But it is only applied to session deadlocks (query/flush/execute). It
  doesn't cover some 'Query' actions in SQLAlchemy.

  So, we need to add the same protection for 'all()' in SQLAlchemy.
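
  A minimal sketch of the retry pattern referred to above, extended so it
  can also wrap Query calls such as all() (illustrative, assuming oslo.db's
  DBDeadlock exception):

    import functools
    import time

    from oslo.db import exception as db_exc

    def retry_on_deadlock(func, retries=5, delay=0.5):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return func(*args, **kwargs)
                except db_exc.DBDeadlock:
                    if attempt == retries - 1:
                        raise
                    time.sleep(delay)
        return wrapper

  Wrapping the DB API method, or the query call itself (for example
  retry_on_deadlock(query.all)()), would cover the case described above.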

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1283987/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364986] [NEW] oslo.db now wraps all DB exceptions

2014-09-03 Thread Roman Podoliaka
Public bug reported:

tl;dr

In a few versions of oslo.db (maybe when we release 1.0.0?), every
project using oslo.db should inspect their code and remove usages of
'raw' DB exceptions like IntegrityError/OperationalError/etc from except
clauses and replace them with the corresponding custom exceptions from
oslo.db (at least a base one - DBError).

Full version

A recent commit to oslo.db changed the way the 'raw' DB exceptions are
wrapped (e.g. IntegrityError, OperationalError, etc). Previously, we
used decorators on Session methods and wrapped those exceptions with
oslo.db custom ones. This is mostly useful for handling them later (e.g.
to retry DB API methods on deadlocks).

The problem with Session decorators was that it wasn't possible to catch
and wrap all possible exceptions. E.g. SA Core exceptions and exceptions
raised in Query.all() calls were ignored. Now we are using a low level
SQLAlchemy event to catch all possible DB exceptions. This means that if
consuming projects had workarounds for those cases and expected 'raw'
exceptions instead of oslo.db ones, they would be broken. That's why we
*temporarily* added both 'raw' exceptions and new ones to except clauses
in consuming projects' code when they were ported to oslo.db, to make the
transition smooth and allow them to work with different oslo.db
versions.

On the positive side, we now have a solution for problems like
https://bugs.launchpad.net/nova/+bug/1283987 when exceptions in Query
methods calls weren't handled properly.

In a few releases of oslo.db we can safely remove 'raw' DB exceptions
like IntegrityError/OperationalError/etc from projects' code and catch
only oslo.db-specific ones like
DBDuplicateError/DBReferenceError/DBDeadLockError/etc (at least, we wrap
all the DB exceptions with our base exception DBError, if we haven't
found a better match).

oslo.db exceptions and their description:
https://github.com/openstack/oslo.db/blob/master/oslo/db/exception.py
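
A minimal before/after sketch of the change consuming projects will need
to make (illustrative code, not taken from any particular project):

    from oslo.db import exception as db_exc
    from sqlalchemy import exc as sqla_exc

    def create_or_ignore(insert_row):
        # Transitional form: tolerate both raw SQLAlchemy and wrapped
        # oslo.db exceptions while several oslo.db versions are supported.
        try:
            insert_row()
        except (sqla_exc.IntegrityError, db_exc.DBDuplicateEntry):
            pass  # row already exists

    def create_or_ignore_target(insert_row):
        # Target form, once all DB exceptions are guaranteed to be wrapped:
        # only the oslo.db exception needs to be caught.
        try:
            insert_row()
        except db_exc.DBDuplicateEntry:
            pass  # row already exists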

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: db

** Description changed:

  tl;dr
  
- In a few versions of oslo.db (maybe when we release 1.0.0), every
+ In a few versions of oslo.db (maybe when we release 1.0.0?), every
  project using oslo.db should inspect their code and remove usages of
  'raw' DB exceptions like IntegrityError/OperationalError/etc from except
  clauses and replace them with the corresponding custom exceptions from
  oslo.db (at least a base one - DBError).
  
- 
  Full version
  
- 
- A recent commit to oslo.db changed the way the 'raw' DB exceptions are 
wrapped (e.g. IntegrityError, OperationalError, etc). Previously, we used 
decorators on Session methods and wrapped those exceptions with oslo.db custom 
ones. This is mostly useful for handling them later (e.g. to retry DB API 
methods on deadlocks).
+ A recent commit to oslo.db changed the way the 'raw' DB exceptions are
+ wrapped (e.g. IntegrityError, OperationalError, etc). Previously, we
+ used decorators on Session methods and wrapped those exceptions with
+ oslo.db custom ones. This is mostly useful for handling them later (e.g.
+ to retry DB API methods on deadlocks).
  
  The problem with Session decorators was that it wasn't possible to catch
  and wrap all possible exceptions. E.g. SA Core exceptions and exceptions
  raised in Query.all() calls were ignored. Now we are using a low level
  SQLAlchemy event to catch all possible DB exceptions. This means that if
  consuming projects had workarounds for those cases and expected 'raw'
  exceptions instead of oslo.db ones, they would be broken. That's why we
  *temporarily* added both 'raw' exceptions and new ones to expect clauses
  in consuming projects code when they were ported to using of oslo.db. On
  the positive side, we now have a solution for problems like
  https://bugs.launchpad.net/nova/+bug/1283987 when exceptions in Query
  methods calls weren't handled properly.
  
  In a few releases of oslo.db we can safely remove 'raw' DB exceptions
  like IntegrityError/OperationalError/etc from projects code and except
  only oslo.db specific ones like
  DBDuplicateError/DBReferenceError/DBDeadLockError/etc (at least, we wrap
  all the DB exceptions with our base exception DBError, if we haven't
  found a better match).
  
  oslo.db exceptions and their description:
  https://github.com/openstack/oslo.db/blob/master/oslo/db/exception.py

** Tags added: db

** Description changed:

  tl;dr
  
  In a few versions of oslo.db (maybe when we release 1.0.0?), every
  project using oslo.db should inspect their code and remove usages of
  'raw' DB exceptions like IntegrityError/OperationalError/etc from except
  clauses and replace them with the corresponding custom exceptions from
  oslo.db (at least a base one - DBError).
  
  Full version
  
  A recent commit to oslo.db changed the way the 'raw' DB exceptions are
  wrapped (e.g. IntegrityError, OperationalError, etc). Previously, we
  used decorators on Session

[Yahoo-eng-team] [Bug 1363014] [NEW] NoopQuotasDriver.get_settable_quotas() method always fail with KeyError

2014-08-29 Thread Roman Podoliaka
Public bug reported:

NoopQuotasDriver.get_settable_quotas() tries to call update() on a
non-existent dictionary entry. While NoopQuotasDriver is not really useful,
we still want it to work.

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363014

Title:
  NoopQuotasDriver.get_settable_quotas() method always fail with
  KeyError

Status in OpenStack Compute (Nova):
  New

Bug description:
  NoopQuotasDriver.get_settable_quotas() tries to call update() on a
  non-existent dictionary entry. While NoopQuotasDriver is not really
  useful, we still want it to work.
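
  An illustrative sketch of the failure mode (not the actual nova code):
  calling update() on a dictionary key that was never initialised raises
  KeyError; initialising the entry first (for example via setdefault)
  avoids it.

    def get_settable_quotas(resources):
        quotas = {}
        for resource in resources:
            # Buggy pattern: quotas[resource].update(...) assumes the key
            # already exists and raises KeyError on the first iteration.
            quotas.setdefault(resource, {}).update({
                'minimum': 0,
                'maximum': -1,  # values here are illustrative
            })
        return quotas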

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363014/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362233] [NEW] instance_create() DB API method implicitly creates additional DB transactions

2014-08-27 Thread Roman Podoliaka
Public bug reported:

In DB API code we have a notion of 'public' and 'private' methods. The
former are conceptually executed within a *single* DB transaction and
the latter can either create a new transaction or participate in the
existing one. The whole point is to be able to roll back the results of
DB API methods easily and be able to retry method calls on connection
failures. We had a bp (https://blueprints.launchpad.net/nova/+spec/db-
session-cleanup) in which all DB API methods were refactored to maintain
these properties.

instance_create() is one of the methods that currently violates the
rules of 'public' DB API methods and creates a concurrent transaction
implicitly.

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New


** Tags: db

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362233

Title:
  instance_create() DB API method implicitly creates additional DB
  transactions

Status in OpenStack Compute (Nova):
  New

Bug description:
  In DB API code we have a notion of 'public' and 'private' methods. The
  former are conceptually executed within a *single* DB transaction and
  the latter can either create a new transaction or participate in the
  existing one. The whole point is to be able to roll back the results
  of DB API methods easily and be able to retry method calls on
  connection failures. We had a bp
  (https://blueprints.launchpad.net/nova/+spec/db-session-cleanup) in
  which all DB API have been re-factored to maintain these properties.

  instance_create() is one of the methods that currently violates the
  rules of 'public' DB API methods and creates a concurrent transaction
  implicitly.
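
  A minimal sketch of the convention described above (names are
  illustrative, not the actual nova code; get_session and models refer to
  nova's SQLAlchemy helpers): the public method owns the single
  transaction, and private helpers join it through the session they are
  given instead of opening one of their own.

    def instance_create(context, values, extra_values):
        # Public: executes in exactly one DB transaction.
        session = get_session()
        with session.begin():
            instance = _instance_create(context, values, session=session)
            _instance_extra_create(context, instance, extra_values,
                                   session=session)
        return instance

    def _instance_create(context, values, session=None):
        # Private: joins the caller's transaction when a session is passed.
        session = session or get_session()
        instance = models.Instance(**values)
        instance.save(session=session)
        return instance

    def _instance_extra_create(context, instance, extra_values, session=None):
        session = session or get_session()
        extra = models.InstanceExtra(instance_uuid=instance.uuid,
                                     **extra_values)
        extra.save(session=session)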

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362221] [NEW] VMs fail to start when Ceph is used as a backend for ephemeral drives

2014-08-27 Thread Roman Podoliaka
Public bug reported:

The option to place VMs' drives in Ceph has been chosen
(libvirt.images_type == 'rbd').

When a user creates a flavor and specifies:
   - root drive size >0
   - ephemeral drive size >0 (important)

and tries to boot a VM, they get "no valid host was found" in the
scheduler log:

Error from last host: node-3.int.host.com (node node-3.int.host.com): 
[u'Traceback (most recent call last):\n', u'
 File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1305, in 
_build_instance\n set_access_ip=set_access_ip)\n', u' File "/usr/l
ib/python2.6/site-packages/nova/compute/manager.py", line 393, in 
decorated_function\n return function(self, context, *args, **kwargs)\n', u' File
 "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1717, in 
_spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instanc
e)\n', u' File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__\n six.reraise(self.type_, self.value, se
lf.tb)\n', u' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", 
line 1714, in _spawn\n block_device_info)\n', u' File "/usr/lib/py
thon2.6/site-packages/nova/virt/libvirt/driver.py", line 2259, in spawn\n 
admin_pass=admin_password)\n', u' File "/usr/lib/python2.6/site-packages
/nova/virt/libvirt/driver.py", line 2648, in _create_image\n 
ephemeral_size=ephemeral_gb)\n', u' File 
"/usr/lib/python2.6/site-packages/nova/virt/
libvirt/imagebackend.py", line 186, in cache\n *args, **kwargs)\n', u' File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py",
line 587, in create_image\n prepare_template(target=base, max_size=size, *args, 
**kwargs)\n', u' File "/usr/lib/python2.6/site-packages/nova/opens
tack/common/lockutils.py", line 249, in inner\n return f(*args, **kwargs)\n', 
u' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebac
kend.py", line 176, in fetch_func_sync\n fetch_func(target=target, *args, 
**kwargs)\n', u' File "/usr/lib/python2.6/site-packages/nova/virt/libvir
t/driver.py", line 2458, in _create_ephemeral\n disk.mkfs(os_type, fs_label, 
target, run_as_root=is_block_dev)\n', u' File "/usr/lib/python2.6/sit
e-packages/nova/virt/disk/api.py", line 117, in mkfs\n utils.mkfs(default_fs, 
target, fs_label, run_as_root=run_as_root)\n', u' File "/usr/lib/pyt
hon2.6/site-packages/nova/utils.py", line 856, in mkfs\n execute(*args, 
run_as_root=run_as_root)\n', u' File "/usr/lib/python2.6/site-packages/nov
a/utils.py", line 165, in execute\n return processutils.execute(*cmd, 
**kwargs)\n', u' File "/usr/lib/python2.6/site-packages/nova/openstack/commo
n/processutils.py", line 193, in execute\n cmd=\' \'.join(cmd))\n', 
u"ProcessExecutionError: Unexpected error while running command.\nCommand: sudo
 nova-rootwrap /etc/nova/rootwrap.conf mkfs -t ext3 -F -L ephemeral0 
/var/lib/nova/instances/_base/ephemeral_1_default\nExit code: 1\nStdout: 
''\nStde
rr: 'mke2fs 1.41.12 (17-May-2010)\\nmkfs.ext3: No such file or directory while 
trying to determine filesystem size\\n'\n"]

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New


** Tags: ceph libvirt rbd

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362221

Title:
  VMs fail to start when Ceph is used as a backend for ephemeral drives

Status in OpenStack Compute (Nova):
  New

Bug description:
  The option to place VMs' drives in Ceph has been chosen
  (libvirt.images_type == 'rbd').

  When a user creates a flavor and specifies:
  - root drive size >0
  - ephemeral drive size >0 (important)

  and tries to boot a VM, they get "no valid host was found" in the
  scheduler log:

  Error from last host: node-3.int.host.com (node node-3.int.host.com): 
[u'Traceback (most recent call last):\n', u'
   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1305, 
in _build_instance\n set_access_ip=set_access_ip)\n', u' File "/usr/l
  ib/python2.6/site-packages/nova/compute/manager.py", line 393, in 
decorated_function\n return function(self, context, *args, **kwargs)\n', u' File
   "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1717, in 
_spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instanc
  e)\n

[Yahoo-eng-team] [Bug 1332660] Re: Update statistics from computes if RBD ephemeral is used

2014-08-27 Thread Roman Podoliaka
** Changed in: mos/5.0.x
   Status: Fix Committed => Fix Released

** Changed in: fuel/5.0.x
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1332660

Title:
  Update statistics from computes if RBD ephemeral is used

Status in Fuel: OpenStack installer that works:
  Fix Committed
Status in Fuel for OpenStack 4.1.x series:
  In Progress
Status in Fuel for OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack:
  Fix Committed
Status in Mirantis OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack 5.1.x series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  If we use RBD as the backend for ephemeral drives, compute nodes still
  calculate their available disk size by looking at the local disks.
  This is the code path they use:

  * nova/compute/manager.py

  def update_available_resource(self, context):
  """See driver.get_available_resource()

  Periodic process that keeps that the compute host's understanding of
  resource availability and usage in sync with the underlying 
hypervisor.

  :param context: security context
  """
  new_resource_tracker_dict = {}
  nodenames = set(self.driver.get_available_nodes())
  for nodename in nodenames:
  rt = self._get_resource_tracker(nodename)
  rt.update_available_resource(context)
  new_resource_tracker_dict[nodename] = rt
  
  def _get_resource_tracker(self, nodename):
  rt = self._resource_tracker_dict.get(nodename)
  if not rt:
  if not self.driver.node_is_available(nodename):
  raise exception.NovaException(
  _("%s is not a valid node managed by this "
"compute host.") % nodename)

  rt = resource_tracker.ResourceTracker(self.host,
self.driver,
nodename)
  self._resource_tracker_dict[nodename] = rt
  return rt

  * nova/compute/resource_tracker.py

  def update_available_resource(self, context):
  """Override in-memory calculations of compute node resource usage 
based
  on data audited from the hypervisor layer.

  Add in resource claims in progress to account for operations that have
  declared a need for resources, but not necessarily retrieved them from
  the hypervisor layer yet.
  """
  LOG.audit(_("Auditing locally available compute resources"))
  resources = self.driver.get_available_resource(self.nodename)

  * nova/virt/libvirt/driver.py

  def get_local_gb_info():
  """Get local storage info of the compute node in GB.

  :returns: A dict containing:
   :total: How big the overall usable filesystem is (in gigabytes)
   :free: How much space is free (in gigabytes)
   :used: How much space is used (in gigabytes)
  """

  if CONF.libvirt_images_type == 'lvm':
  info = libvirt_utils.get_volume_group_info(
   CONF.libvirt_images_volume_group)
  else:
  info = libvirt_utils.get_fs_info(CONF.instances_path)

  for (k, v) in info.iteritems():
  info[k] = v / (1024 ** 3)

  return info

  
  It would be nice to have something like "libvirt_utils.get_rbd_info" which 
could be used in case CONF.libvirt_images_type == 'rbd'

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1332660/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254246] Re: somehow getting duplicate openvswitch agents for the same host

2014-08-13 Thread Roman Podoliaka
No, this is fixed in trunk.

** Changed in: tripleo
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1254246

Title:
  somehow getting duplicate openvswitch agents for the same host

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  While investigating spurious failures in our TripleO continous
  deployment, I had this problem:

  
+--++-+---++
  | id   | agent_type | host
| alive | admin_state_up |
  
+--++-+---++
  | 3a9c6aca-e91f-49c9-850a-67db219fdf58 | L3 agent   | 
overcloud-notcompute-wjo2jbvvd2sm   | :-)   | True   |
  | 3fb9f6cf-b545-4a34-a490-dda834973d1e | Open vSwitch agent | 
overcloud-novacompute0-ubrjpv4jz64a | xxx   | True   |
  | 855349b2-b0fc-4270-bb96-385b61aa5a6c | DHCP agent | 
overcloud-notcompute-wjo2jbvvd2sm   | :-)   | True   |
  | 8b8a4128-9716-42ee-b886-f053db166ce3 | Metadata agent | 
overcloud-notcompute-wjo2jbvvd2sm   | :-)   | True   |
  | c8297e0d-8575-47f0-ae65-499c1e0319b3 | Open vSwitch agent | 
overcloud-notcompute-wjo2jbvvd2sm   | :-)   | True   |
  | f746fc1d-9083-46f4-a922-739c5d332d7c | Open vSwitch agent | 
overcloud-novacompute0-ubrjpv4jz64a | xxx   | True   |
  
+--++-+---++

  Note that overcloud-novacompute0-ubrjpv4jz64a has _two_ Open vSwitch
  agents.

  This caused many 'vif_type=binding_failed' errors when booting nova
  instances.

  Deleting f746fc1d-9083-46f4-a922-739c5d332d7c resulted in the problem
  going away.

  It seems like there might be a race if the agent restarts quickly, thus
  not seeing its own agent record and sending a second RPC to create one. I
  am not entirely sure how this works; that is just a hypothesis.
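
  If that hypothesis is right, the usual remedy is a database-level unique
  constraint on (agent_type, host) so the duplicate report fails cleanly and
  can be turned into an update of the existing row. A minimal sketch of such
  a model (illustrative, not the actual neutron schema):

    from sqlalchemy import Column, Integer, String, UniqueConstraint
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Agent(Base):
        __tablename__ = 'agents'
        __table_args__ = (UniqueConstraint('agent_type', 'host'),)

        id = Column(Integer, primary_key=True)
        agent_type = Column(String(255))
        host = Column(String(255))

  With the constraint in place, the "second RPC" from a quickly restarted
  agent hits a duplicate-key error instead of silently creating a second
  agent row.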

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1254246/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334164] Re: nova error migrating VMs with floating ips: 'FixedIP' object has no attribute '_sa_instance_state'

2014-07-10 Thread Roman Podoliaka
** Changed in: fuel/5.1.x
   Status: In Progress => Fix Committed

** Changed in: mos/5.1.x
   Status: In Progress => Fix Committed

** Changed in: mos/5.0.x
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334164

Title:
  nova error migrating VMs with floating ips: 'FixedIP' object has no
  attribute '_sa_instance_state'

Status in Fuel: OpenStack installer that works:
  Fix Committed
Status in Fuel for OpenStack 5.0.x series:
  Fix Released
Status in Fuel for OpenStack 5.1.x series:
  Fix Committed
Status in Mirantis OpenStack:
  Fix Committed
Status in Mirantis OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack 5.1.x series:
  Fix Committed
Status in Mirantis OpenStack 6.0.x series:
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Seeing this in conductor logs when migrating a VM with a floating IP
  assigned:

  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 1019, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 527, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_api.migrate_instance_start(context, instance, migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 94, in wrapped
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
func(self, context, *args, **kwargs)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 543, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_rpcapi.migrate_instance_start(context, **args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/rpcapi.py", line 350, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
floating_addresses=floating_addresses)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py", line 150, in 
call
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
wait_for_reply=True, timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/transport.py", line 90, in 
_send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
409, in send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._send(target, ctxt, message, wait_for_reply, timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
402, in _send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher raise 
result
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
AttributeError: 'FixedIP' object has no attribute '_sa_instance_state'

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1334164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launch

[Yahoo-eng-team] [Bug 1334164] Re: nova error migrating VMs with floating ips: 'FixedIP' object has no attribute '_sa_instance_state'

2014-07-02 Thread Roman Podoliaka
** Also affects: fuel/5.0.x
   Importance: Undecided
   Status: New

** Also affects: fuel/5.1.x
   Importance: Critical
 Assignee: Roman Podoliaka (rpodolyaka)
   Status: Fix Committed

** Changed in: fuel/5.0.x
   Status: New => Fix Committed

** Changed in: fuel/5.1.x
   Status: Fix Committed => In Progress

** Changed in: fuel/5.0.x
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Changed in: fuel/5.0.x
   Importance: Undecided => Critical

** Changed in: fuel/5.1.x
   Importance: Critical => High

** Changed in: fuel/5.1.x
Milestone: 5.0.1 => 5.1

** Changed in: fuel/5.0.x
Milestone: None => 5.0.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334164

Title:
  nova error migrating VMs with floating ips: 'FixedIP' object has no
  attribute '_sa_instance_state'

Status in Fuel: OpenStack installer that works:
  In Progress
Status in Fuel for OpenStack 5.0.x series:
  Fix Committed
Status in Fuel for OpenStack 5.1.x series:
  In Progress
Status in Mirantis OpenStack:
  In Progress
Status in Mirantis OpenStack 5.0.x series:
  Fix Committed
Status in Mirantis OpenStack 5.1.x series:
  In Progress
Status in Mirantis OpenStack 6.0.x series:
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Seeing this in conductor logs when migrating a VM with a floating IP
  assigned:

  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 1019, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 527, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_api.migrate_instance_start(context, instance, migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 94, in wrapped
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
func(self, context, *args, **kwargs)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 543, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_rpcapi.migrate_instance_start(context, **args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/rpcapi.py", line 350, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
floating_addresses=floating_addresses)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py", line 150, in 
call
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
wait_for_reply=True, timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/transport.py", line 90, in 
_send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
409, in send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._send(target, ctxt, message, wait_for_reply, timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/sit

[Yahoo-eng-team] [Bug 1332660] Re: Update statistics from computes if RBD ephemeral is used

2014-07-02 Thread Roman Podoliaka
** Also affects: fuel/5.1.x
   Importance: High
 Assignee: MOS Nova (mos-nova)
   Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1332660

Title:
  Update statistics from computes if RBD ephemeral is used

Status in Fuel: OpenStack installer that works:
  In Progress
Status in Fuel for OpenStack 4.1.x series:
  In Progress
Status in Fuel for OpenStack 5.0.x series:
  Fix Committed
Status in Fuel for OpenStack 5.1.x series:
  In Progress
Status in Mirantis OpenStack:
  In Progress
Status in Mirantis OpenStack 5.0.x series:
  Fix Committed
Status in Mirantis OpenStack 5.1.x series:
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  If we use RBD as the backend for ephemeral drives, compute nodes still
  calculate their available disk size from the local disks.
  This is the code path they take:

  * nova/compute/manager.py

  def update_available_resource(self, context):
  """See driver.get_available_resource()

  Periodic process that keeps that the compute host's understanding of
  resource availability and usage in sync with the underlying 
hypervisor.

  :param context: security context
  """
  new_resource_tracker_dict = {}
  nodenames = set(self.driver.get_available_nodes())
  for nodename in nodenames:
  rt = self._get_resource_tracker(nodename)
  rt.update_available_resource(context)
  new_resource_tracker_dict[nodename] = rt
  
  def _get_resource_tracker(self, nodename):
  rt = self._resource_tracker_dict.get(nodename)
  if not rt:
  if not self.driver.node_is_available(nodename):
  raise exception.NovaException(
  _("%s is not a valid node managed by this "
"compute host.") % nodename)

  rt = resource_tracker.ResourceTracker(self.host,
self.driver,
nodename)
  self._resource_tracker_dict[nodename] = rt
  return rt

  * nova/compute/resource_tracker.py

  def update_available_resource(self, context):
  """Override in-memory calculations of compute node resource usage 
based
  on data audited from the hypervisor layer.

  Add in resource claims in progress to account for operations that have
  declared a need for resources, but not necessarily retrieved them from
  the hypervisor layer yet.
  """
  LOG.audit(_("Auditing locally available compute resources"))
  resources = self.driver.get_available_resource(self.nodename)

  * nova/virt/libvirt/driver.py

  def get_local_gb_info():
  """Get local storage info of the compute node in GB.

  :returns: A dict containing:
   :total: How big the overall usable filesystem is (in gigabytes)
   :free: How much space is free (in gigabytes)
   :used: How much space is used (in gigabytes)
  """

  if CONF.libvirt_images_type == 'lvm':
  info = libvirt_utils.get_volume_group_info(
   CONF.libvirt_images_volume_group)
  else:
  info = libvirt_utils.get_fs_info(CONF.instances_path)

  for (k, v) in info.iteritems():
  info[k] = v / (1024 ** 3)

  return info

  
  It would be nice to have something like "libvirt_utils.get_rbd_info" that
  could be used when CONF.libvirt_images_type == 'rbd'.
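
  A minimal sketch of what such a helper might look like, assuming the
  librados Python bindings are available and that get_cluster_stats()
  reports sizes in kilobytes (both assumptions should be verified against
  the deployed Ceph version):

  import rados

  def get_rbd_info(conffile='/etc/ceph/ceph.conf'):
      """Return total/free/used space of the Ceph cluster in GB (sketch)."""
      cluster = rados.Rados(conffile=conffile)
      try:
          cluster.connect()
          # Assumption: librados reports these values in kilobytes.
          stats = cluster.get_cluster_stats()
      finally:
          cluster.shutdown()
      to_gb = 1024.0 / (1024 ** 3)
      return {'total': stats['kb'] * to_gb,
              'free': stats['kb_avail'] * to_gb,
              'used': stats['kb_used'] * to_gb}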

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1332660/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290468] Re: AttributeError: 'NoneType' object has no attribute '_sa_instance_state'

2014-07-02 Thread Roman Podoliaka
** No longer affects: fuel

** No longer affects: fuel/5.0.x

** No longer affects: fuel/5.1.x

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290468

Title:
  AttributeError: 'NoneType' object has no attribute
  '_sa_instance_state'

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  Dan Smith was seeing this in some nova testing:

  http://paste.openstack.org/show/73043/

  Looking at logstash, this is showing up a lot since 3/7 which is when
  lazy translation was enabled in Cinder:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQXR0cmlidXRlRXJyb3I6IFxcJ05vbmVUeXBlXFwnIG9iamVjdCBoYXMgbm8gYXR0cmlidXRlIFxcJ19zYV9pbnN0YW5jZV9zdGF0ZVxcJ1wiIEFORCBmaWxlbmFtZTpsb2dzKnNjcmVlbi1jLWFwaS50eHQiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTQ0NzI5Nzg4MDV9

  
https://review.openstack.org/#/q/status:merged+project:openstack/cinder+branch:master+topic:bug/1280826,n,z

  Logstash shows a 99% success rate when this shows up. It can't stay like
  this, but right now it looks to be more cosmetic than functional.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1290468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334164] Re: nova error migrating VMs with floating ips: 'FixedIP' object has no attribute '_sa_instance_state'

2014-07-01 Thread Roman Podoliaka
** Also affects: fuel
   Importance: Undecided
   Status: New

** Changed in: fuel
   Status: New => In Progress

** Changed in: fuel
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334164

Title:
  nova error migrating VMs with floating ips: 'FixedIP' object has no
  attribute '_sa_instance_state'

Status in Fuel: OpenStack installer that works:
  In Progress
Status in Mirantis OpenStack:
  In Progress
Status in Mirantis OpenStack 5.0.x series:
  In Progress
Status in Mirantis OpenStack 5.1.x series:
  In Progress
Status in Mirantis OpenStack 6.0.x series:
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Seeing this in conductor logs when migrating a VM with a floating IP
  assigned:

  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 1019, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 527, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_api.migrate_instance_start(context, instance, migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 94, in wrapped
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
func(self, context, *args, **kwargs)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 543, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_rpcapi.migrate_instance_start(context, **args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/rpcapi.py", line 350, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
floating_addresses=floating_addresses)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py", line 150, in 
call
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
wait_for_reply=True, timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/transport.py", line 90, in 
_send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
409, in send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._send(target, ctxt, message, wait_for_reply, timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
402, in _send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher raise 
result
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
AttributeError: 'FixedIP' object has no attribute '_sa_instance_state'

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1334164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290468] Re: AttributeError: 'NoneType' object has no attribute '_sa_instance_state'

2014-06-25 Thread Roman Podoliaka
Fuel team it's a red herring, see the bug I filed:
https://bugs.launchpad.net/mos/+bug/1334164

** No longer affects: mos/5.0.x

** No longer affects: mos/5.1.x

** No longer affects: mos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290468

Title:
  AttributeError: 'NoneType' object has no attribute
  '_sa_instance_state'

Status in Cinder:
  New
Status in Fuel: OpenStack installer that works:
  In Progress
Status in Fuel for OpenStack 5.0.x series:
  Triaged
Status in Fuel for OpenStack 5.1.x series:
  In Progress
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  Dan Smith was seeing this in some nova testing:

  http://paste.openstack.org/show/73043/

  Looking at logstash, this is showing up a lot since 3/7 which is when
  lazy translation was enabled in Cinder:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQXR0cmlidXRlRXJyb3I6IFxcJ05vbmVUeXBlXFwnIG9iamVjdCBoYXMgbm8gYXR0cmlidXRlIFxcJ19zYV9pbnN0YW5jZV9zdGF0ZVxcJ1wiIEFORCBmaWxlbmFtZTpsb2dzKnNjcmVlbi1jLWFwaS50eHQiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTQ0NzI5Nzg4MDV9

  
https://review.openstack.org/#/q/status:merged+project:openstack/cinder+branch:master+topic:bug/1280826,n,z

  Logstash shows a 99% success rate when this shows up. It can't stay like
  this, but right now it looks to be more cosmetic than functional.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1290468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334164] [NEW] nova error migrating with floating ips: 'FixedIP' object has no attribute '_sa_instance_state'

2014-06-25 Thread Roman Podoliaka
Public bug reported:

Seeing this in conductor logs when migrating a VM with a floating IP
assigned:

2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 1019, in 
network_migrate_instance_start
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher migration)
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 527, in 
network_migrate_instance_start
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_api.migrate_instance_start(context, instance, migration)
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 94, in wrapped
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
func(self, context, *args, **kwargs)
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 543, in 
migrate_instance_start
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_rpcapi.migrate_instance_start(context, **args)
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/rpcapi.py", line 350, in 
migrate_instance_start
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
floating_addresses=floating_addresses)
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py", line 150, in 
call
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
wait_for_reply=True, timeout=timeout)
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/transport.py", line 90, in 
_send
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout)
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
409, in send
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._send(target, ctxt, message, wait_for_reply, timeout)
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
402, in _send
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher raise 
result
2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
AttributeError: 'FixedIP' object has no attribute '_sa_instance_state'

** Affects: mos
 Importance: High
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: Triaged

** Affects: mos/5.0.x
 Importance: High
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: Triaged

** Affects: mos/5.1.x
 Importance: High
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: Triaged

** Affects: mos/6.0.x
 Importance: High
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: Triaged

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: Confirmed


** Tags: nova unified-objects

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Also affects: mos
   Importance: Undecided
   Status: New

** Tags added: nova

** Changed in: mos
   Importance: Undecided => High

** Changed in: mos
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Changed in: nova
   Status: New => Incomplete

** Changed in: nova
   Status: Incomplete => Confirmed

** Changed in: mos
   Status: New => Triaged

** Also affects: mos/5.0.x
   Importance: Undecided
   Status: New

** Also 

[Yahoo-eng-team] [Bug 1290468] Re: AttributeError: 'NoneType' object has no attribute '_sa_instance_state'

2014-06-25 Thread Roman Podoliaka
** Also affects: mos/5.0.x
   Importance: Undecided
   Status: New

** Also affects: mos/5.1.x
   Importance: High
 Assignee: Roman Podoliaka (rpodolyaka)
   Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290468

Title:
  AttributeError: 'NoneType' object has no attribute
  '_sa_instance_state'

Status in Cinder:
  New
Status in Fuel: OpenStack installer that works:
  In Progress
Status in Fuel for OpenStack 5.0.x series:
  Triaged
Status in Fuel for OpenStack 5.1.x series:
  In Progress
Status in Mirantis OpenStack:
  In Progress
Status in Mirantis OpenStack 5.0.x series:
  New
Status in Mirantis OpenStack 5.1.x series:
  In Progress
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  Dan Smith was seeing this in some nova testing:

  http://paste.openstack.org/show/73043/

  Looking at logstash, this is showing up a lot since 3/7 which is when
  lazy translation was enabled in Cinder:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQXR0cmlidXRlRXJyb3I6IFxcJ05vbmVUeXBlXFwnIG9iamVjdCBoYXMgbm8gYXR0cmlidXRlIFxcJ19zYV9pbnN0YW5jZV9zdGF0ZVxcJ1wiIEFORCBmaWxlbmFtZTpsb2dzKnNjcmVlbi1jLWFwaS50eHQiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTQ0NzI5Nzg4MDV9

  
https://review.openstack.org/#/q/status:merged+project:openstack/cinder+branch:master+topic:bug/1280826,n,z

  Logstash shows a 99% success rate when this shows up. It can't stay like
  this, but right now it looks to be more cosmetic than functional.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1290468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332660] Re: Update statistics from computes if RBD ephemeral is used

2014-06-25 Thread Roman Podoliaka
** Also affects: mos/5.1.x
   Importance: High
 Assignee: MOS Nova (mos-nova)
   Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1332660

Title:
  Update statistics from computes if RBD ephemeral is used

Status in Fuel: OpenStack installer that works:
  In Progress
Status in Fuel for OpenStack 4.1.x series:
  In Progress
Status in Fuel for OpenStack 5.0.x series:
  In Progress
Status in Mirantis OpenStack:
  In Progress
Status in Mirantis OpenStack 5.0.x series:
  In Progress
Status in Mirantis OpenStack 5.1.x series:
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  If we use RBD as the backend for ephemeral drives, compute nodes still
  calculate their available disk size from the local disks.
  This is the code path they take:

  * nova/compute/manager.py

  def update_available_resource(self, context):
  """See driver.get_available_resource()

  Periodic process that keeps that the compute host's understanding of
  resource availability and usage in sync with the underlying 
hypervisor.

  :param context: security context
  """
  new_resource_tracker_dict = {}
  nodenames = set(self.driver.get_available_nodes())
  for nodename in nodenames:
  rt = self._get_resource_tracker(nodename)
  rt.update_available_resource(context)
  new_resource_tracker_dict[nodename] = rt
  
  def _get_resource_tracker(self, nodename):
  rt = self._resource_tracker_dict.get(nodename)
  if not rt:
  if not self.driver.node_is_available(nodename):
  raise exception.NovaException(
  _("%s is not a valid node managed by this "
"compute host.") % nodename)

  rt = resource_tracker.ResourceTracker(self.host,
self.driver,
nodename)
  self._resource_tracker_dict[nodename] = rt
  return rt

  * nova/compute/resource_tracker.py

  def update_available_resource(self, context):
  """Override in-memory calculations of compute node resource usage 
based
  on data audited from the hypervisor layer.

  Add in resource claims in progress to account for operations that have
  declared a need for resources, but not necessarily retrieved them from
  the hypervisor layer yet.
  """
  LOG.audit(_("Auditing locally available compute resources"))
  resources = self.driver.get_available_resource(self.nodename)

  * nova/virt/libvirt/driver.py

  def get_local_gb_info():
  """Get local storage info of the compute node in GB.

  :returns: A dict containing:
   :total: How big the overall usable filesystem is (in gigabytes)
   :free: How much space is free (in gigabytes)
   :used: How much space is used (in gigabytes)
  """

  if CONF.libvirt_images_type == 'lvm':
  info = libvirt_utils.get_volume_group_info(
   CONF.libvirt_images_volume_group)
  else:
  info = libvirt_utils.get_fs_info(CONF.instances_path)

  for (k, v) in info.iteritems():
  info[k] = v / (1024 ** 3)

  return info

  
  It would be nice to have something like "libvirt_utils.get_rbd_info" that
  could be used when CONF.libvirt_images_type == 'rbd'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1332660/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290468] Re: AttributeError: 'NoneType' object has no attribute '_sa_instance_state'

2014-06-20 Thread Roman Podoliaka
** Also affects: mos
   Importance: Undecided
   Status: New

** Tags added: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290468

Title:
  AttributeError: 'NoneType' object has no attribute
  '_sa_instance_state'

Status in Cinder:
  New
Status in Fuel: OpenStack installer that works:
  In Progress
Status in Fuel for OpenStack 5.0.x series:
  Triaged
Status in Fuel for OpenStack 5.1.x series:
  In Progress
Status in Mirantis OpenStack:
  New
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  Dan Smith was seeing this in some nova testing:

  http://paste.openstack.org/show/73043/

  Looking at logstash, this is showing up a lot since 3/7 which is when
  lazy translation was enabled in Cinder:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQXR0cmlidXRlRXJyb3I6IFxcJ05vbmVUeXBlXFwnIG9iamVjdCBoYXMgbm8gYXR0cmlidXRlIFxcJ19zYV9pbnN0YW5jZV9zdGF0ZVxcJ1wiIEFORCBmaWxlbmFtZTpsb2dzKnNjcmVlbi1jLWFwaS50eHQiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTQ0NzI5Nzg4MDV9

  
https://review.openstack.org/#/q/status:merged+project:openstack/cinder+branch:master+topic:bug/1280826,n,z

  Logstash shows a 99% success rate when this shows up. It can't stay like
  this, but right now it looks to be more cosmetic than functional.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1290468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316475] Re: [SRU] CloudSigma DS for causes hangs when serial console present

2014-06-13 Thread Roman Podoliaka
** Changed in: diskimage-builder
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1316475

Title:
  [SRU] CloudSigma DS for causes hangs when serial console present

Status in Init scripts for use on cloud images:
  Fix Committed
Status in Openstack disk image builder:
  Fix Released
Status in tripleo - openstack on openstack:
  Triaged
Status in “cloud-init” package in Ubuntu:
  Fix Released
Status in “cloud-init” source package in Trusty:
  Triaged

Bug description:
  SRU Justification

  Impact: The CloudSigma Datasource reads and writes to /dev/ttyS1 if
  present; the Datasource does not have a timeout. On non-CloudSigma
  clouds or systems with /dev/ttyS1, Cloud-init will block pending a
  response, which may never come. Further, it is dangerous for a default
  datasource to write blindly on a serial console as other control plane
  software and Clouds use /dev/ttyS1 for communication.

  Fix: The patch disables Cloud Sigma by default.

  Verification:
  1. Purge Cloud-init
  2. Install from -proposed
  3. Look in /etc/cloud/cloud.cfg.d/90_dpkg.cfg and confirm CloudSigma is not in 
the list of datasources.

  Regression: The risk is low, except on CloudSigma targets which try to
  use new images generated with the new Cloud-init version.

  
  [Original Report]
  DHCPDISCOVER on eth2 to 255.255.255.255 port 67 interval 3 (xid=0x7e777c23)
  DHCPREQUEST of 10.22.157.186 on eth2 to 255.255.255.255 port 67 
(xid=0x7e777c23)
  DHCPOFFER of 10.22.157.186 from 10.22.157.149
  DHCPACK of 10.22.157.186 from 10.22.157.149
  bound to 10.22.157.186 -- renewal in 39589 seconds.
   * Starting Mount network filesystems[ OK 
]
   * Starting configure network device [ OK 
]
   * Stopping Mount network filesystems[ OK 
]
   * Stopping DHCP any connected, but unconfigured network interfaces  [ OK 
]
   * Starting configure network device [ OK 
]
   * Stopping DHCP any connected, but unconfigured network interfaces  [ OK 
]
   * Starting configure network device [ OK 
]

  And it stops there.

  I see this on about 10% of deploys.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1316475/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1322599] Re: nova baremetal-node-create fails with HTTP 500

2014-05-23 Thread Roman Podoliaka
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1322599

Title:
  nova baremetal-node-create fails with HTTP 500

Status in OpenStack Compute (Nova):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  baremetal-node-create fails with HTTP 500 because the pm_password we pass is
  longer than the column allows; the DB schema defines it as VARCHAR(255).

  The trace is as follows:

  nova.openstack.common.db.sqlalchemy.session   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/openstack/common/db/sqlalchemy/session.py",
 line 439, in _wrap
  nova.openstack.common.db.sqlalchemy.session return f(self, *args, 
**kwargs)
  nova.openstack.common.db.sqlalchemy.session   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/openstack/common/db/sqlalchemy/session.py",
 line 705, in flush
  nova.openstack.common.db.sqlalchemy.session return super(Session, 
self).flush(*args, **kwargs)
  nova.openstack.common.db.sqlalchemy.session   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1824, in 
flush
  nova.openstack.common.db.sqlalchemy.session self._flush(objects)
  nova.openstack.common.db.sqlalchemy.session   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1942, in 
_flush
  nova.openstack.common.db.sqlalchemy.session 
transaction.rollback(_capture_exception=True)
  nova.openstack.common.db.sqlalchemy.session   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 58, 
in __exit__
  nova.openstack.common.db.sqlalchemy.session compat.reraise(exc_type, 
exc_value, exc_tb)
  nova.openstack.common.db.sqlalchemy.session   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1906, in 
_flush
  nova.openstack.common.db.sqlalchemy.session flush_context.execute()
  nova.openstack.common.db.sqlalchemy.session   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 372, in 
execute
  nova.openstack.common.db.sqlalchemy.session rec.execute(self)
  nova.openstack.common.db.sqlalchemy.session   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 525, in 
execute
  nova.openstack.common.db.sqlalchemy.session uow
  nova.openstack.common.db.sqlalchemy.session   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 64, in 
save_obj
  nova.openstack.common.db.sqlalchemy.session table, insert)
  nova.openstack.common.db.sqlalchemy.session   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 569, 
in _emit_insert_statements
  nova.openstack.common.db.sqlalchemy.session execute(statement, params)
  nova.openstack.common.db.sqlalchemy.session   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 662, in 
execute
  nova.openstack.common.db.sqlalchemy.session params)
  nova.openstack.common.db.sqlalchemy.session   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 761, in 
_execute_clauseelement
  nova.openstack.common.db.sqlalchemy.session compiled_sql, distilled_params
  nova.openstack.common.db.sqlalchemy.session   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 874, in 
_execute_context
  nova.openstack.common.db.sqlalchemy.session context)
  nova.openstack.common.db.sqlalchemy.session   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1024, in 
_handle_dbapi_exception
  nova.openstack.common.db.sqlalchemy.session exc_info
  nova.openstack.common.db.sqlalchemy.session   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 196, in 
raise_from_cause
  nova.openstack.common.db.sqlalchemy.session reraise(type(exception), 
exception, tb=exc_tb)
  nova.openstack.common.db.sqlalchemy.session   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 867, in 
_execute_context
  nova.openstack.common.db.sqlalchemy.session context)
  nova.openstack.common.db.sqlalchemy.session   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 324, in 
do_execute
  nova.openstack.common.db.sqlalchemy.session cursor.execute(statement, 
parameters)
  nova.openstack.common.db.sqlalchemy.session   File 
"/usr/lib64/python2.7/site-packages/MySQLdb/cursors.py", line 174, in execute
  nova.openstack.common.db.sqlalchemy.session self.errorhandler(self, exc, 
value)
  nova.openstack.common.db.sqlalchemy.session   File 
"/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 36, in 
defaulterrorhandler
  nova.openstack.common.db.sqlalchemy.session raise errorclass, errorvalue
  nova.openstack.common.db.sqlalchemy.session DataError: (DataError) (1406, 
"Data too long for column 'pm_password' at row 1") 'INSERT INTO bm_nodes 

[Yahoo-eng-team] [Bug 1308489] [NEW] Wrong issubclass() hook behavior in PluginInterface

2014-04-16 Thread Roman Podoliaka
Public bug reported:

Currently, PluginInterface provides an issubclass() hook that returns True for 
an issubclass(A, B) call if all abstract methods of A (stored in 
A.__abstractmethods__) can be found in the B.__mro__ tuple of classes. But 
there is an edge case: when A doesn't have any abstract methods, the 
issubclass(A, B) call returns True even if A and B are not related at all.

E.g. issubclass(NeutronPluginPLUMgridV2, NsxPlugin) returns True, while these 
two are different core plugins. And it gets even trickier when 
superclasses are involved: e.g. SecurityGroupDbMixin is a superclass of 
NsxPlugin, so depending on whether the Python module with the NsxPlugin 
class is imported or not, issubclass(NeutronPluginPLUMgridV2, 
SecurityGroupDbMixin) will return either False or True.
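
A minimal sketch of a stricter check, assuming the hook is implemented as a
__subclasshook__-style classmethod (the actual PluginInterface code may be
structured differently):

    @classmethod
    def __subclasshook__(cls, klass):
        if not cls.__abstractmethods__:
            # No abstract methods to compare against: refuse to guess
            # instead of claiming an unrelated class implements us.
            return NotImplemented
        for method in cls.__abstractmethods__:
            if any(method in base.__dict__ for base in klass.__mro__):
                continue
            return NotImplemented
        return True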

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1308489

Title:
  Wrong issubclass() hook behavior in PluginInterface

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently, PluginInterface provides an issubclass() hook that returns True 
for an issubclass(A, B) call if all abstract methods of A (stored in 
A.__abstractmethods__) can be found in the B.__mro__ tuple of classes. But 
there is an edge case: when A doesn't have any abstract methods, the 
issubclass(A, B) call returns True even if A and B are not related at all.
  
  E.g. issubclass(NeutronPluginPLUMgridV2, NsxPlugin) returns True, while these 
two are different core plugins. And it gets even trickier when 
superclasses are involved: e.g. SecurityGroupDbMixin is a superclass of 
NsxPlugin, so depending on whether the Python module with the NsxPlugin 
class is imported or not, issubclass(NeutronPluginPLUMgridV2, 
SecurityGroupDbMixin) will return either False or True.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1308489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290234] Re: do not use __builtin__ in Python3

2014-03-21 Thread Roman Podoliaka
** Changed in: tuskar
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1290234

Title:
  do not use __builtin__ in Python3

Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  In Progress
Status in Ironic (Bare Metal Provisioning):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in Trove - Database as a Service:
  New
Status in Tuskar:
  Fix Released

Bug description:
  __builtin__ does not exist in Python 3; use six.moves.builtins
  instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1290234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1285478] Re: Enforce alphabetical ordering in requirements file

2014-03-21 Thread Roman Podoliaka
** Changed in: tuskar
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1285478

Title:
  Enforce alphabetical ordering in requirements file

Status in Cinder:
  Invalid
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic (Bare Metal Provisioning):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Message Queuing Service (Marconi):
  In Progress
Status in OpenStack Neutron (virtual network service):
  Triaged
Status in Python client library for Cinder:
  In Progress
Status in Python client library for Glance:
  New
Status in Python client library for heat:
  New
Status in Python client library for Ironic:
  Fix Committed
Status in Python client library for Neutron:
  New
Status in Trove client binding:
  In Progress
Status in OpenStack contribution dashboard:
  New
Status in Storyboard database creator:
  In Progress
Status in Tempest:
  In Progress
Status in Trove - Database as a Service:
  In Progress
Status in Tuskar:
  Fix Released

Bug description:
  
  Sorting requirements files in alphabetical order makes the code more readable 
and makes it easy to check whether a specific library is in the requirements 
file. Hacking doesn't check *.txt files.
  We enforced this check in oslo-incubator: 
https://review.openstack.org/#/c/66090/.

  This bug is used to track syncing that gating check to other projects.

  How to sync this to other projects:

  1.  Copy  tools/requirements_style_check.sh  to project/tools.

  2. run tools/requirements_style_check.sh  requirements.txt test-
  requirements.txt

  3. fix the violations

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1285478/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284677] Re: Python 3: do not use 'unicode()'

2014-03-21 Thread Roman Podoliaka
** Changed in: tuskar
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1284677

Title:
  Python 3: do not use 'unicode()'

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Python client library for Glance:
  Fix Committed
Status in Tuskar:
  Fix Released

Bug description:
  The unicode() function is Python2-specific, we should use
  six.text_type() instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1284677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277168] Re: having oslo.sphinx in namespace package causes issues with devstack

2014-03-21 Thread Roman Podoliaka
** Changed in: tripleo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1277168

Title:
  having oslo.sphinx in namespace package causes issues with devstack

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  In Progress
Status in Django OpenStack Auth:
  Fix Committed
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Core Infrastructure:
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released
Status in Tempest:
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  http://lists.openstack.org/pipermail/openstack-
  dev/2014-January/023759.html

  We've decided to rename oslo.sphinx to oslosphinx. This will require
  small changes in the doc builds for a lot of the other projects.

  The problem seems to be when we pip install -e oslo.config on the
  system, then pip install oslo.sphinx in a venv. oslo.config is
  unavailable in the venv, apparently because the namespace package for
  o.s causes the egg-link for o.c to be ignored.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1277168/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1292284] Re: A few tests from test_sqlalchemy_utils fail on SQLAlchemy 0.9.x

2014-03-13 Thread Roman Podoliaka
** Also affects: oslo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1292284

Title:
  A few tests from test_sqlalchemy_utils fail on SQLAlchemy 0.9.x

Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  When SQLAlchemy 0.9.x releases are used, some tests from the
  test_sqlalchemy_utils module fail on assertRaises(). This is caused by
  the fact that we expect SQLAlchemy to raise exceptions when reflecting
  SQLite custom data types from an existing DB schema. This was true for the
  SA 0.7.x and 0.8.x branches, but is fixed in 0.9.x releases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1292284/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1292285] [NEW] equal_any() DB API helper produces incorrect SQL query

2014-03-13 Thread Roman Podoliaka
Public bug reported:

Given an attribute name and a list of values, equal_any() is meant to
produce a WHERE clause that returns rows for which the column (denoted
by an attribute of an SQLAlchemy model) is equal to ANY of the passed
values, which requires the SQL OR operator. In fact, the AND operator is
used to combine the equality expressions.

E.g. for a model:

class Instance(BaseModel):
__tablename__ = 'instances'

   id = sa.Column('id', sa.Integer, primary_key=True)
   ...
   task_state = sa.Column('task_state', sa.String(30))

using of equal_any():

  q = model_query(context, Instance).
  constraint = Constraint({'task_state': equal_any('error', 'deleting')})
  q = constraint.apply(Instance, q)

will produce:

SELECT * from instances
WHERE task_state = 'error' AND task_state = 'deleting'

instead of expected:

SELECT * from instances
WHERE task_state = 'error' OR task_state = 'deleting'
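
A minimal sketch of the intended behaviour using SQLAlchemy's or_() (the real
helper lives in the Nova DB API and may differ in shape):

    from sqlalchemy import or_

    def equal_any(*values):
        """Return a callable that builds an OR of equality checks."""
        def make_clause(column):
            return or_(*[column == value for value in values])
        return make_clause

    # query.filter(equal_any('error', 'deleting')(Instance.task_state))
    # renders as: WHERE task_state = 'error' OR task_state = 'deleting'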

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: In Progress


** Tags: db

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1292285

Title:
  equal_any() DB API helper produces incorrect SQL query

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Given an attribute name and a list of values, equal_any() is meant to
  produce a WHERE clause that returns rows for which the column
  (denoted by an attribute of an SQLAlchemy model) is equal to ANY of
  the passed values, which requires the SQL OR operator. In fact, the AND
  operator is used to combine the equality expressions.

  E.g. for a model:

  class Instance(BaseModel):
  __tablename__ = 'instances'

 id = sa.Column('id', sa.Integer, primary_key=True)
 ...
 task_state = sa.Column('task_state', sa.String(30))

  using of equal_any():

q = model_query(context, Instance).
constraint = Constraint({'task_state': equal_any('error', 'deleting')})
q = constraint.apply(Instance, q)

  will produce:

  SELECT * from instances
  WHERE task_state = 'error' AND task_state = 'deleting'

  instead of expected:

  SELECT * from instances
  WHERE task_state = 'error' OR task_state = 'deleting'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1292285/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1292284] [NEW] A few tests from test_sqlalchemy_utils fail on SQLAlchemy 0.9.x

2014-03-13 Thread Roman Podoliaka
Public bug reported:

When SQLAlchemy 0.9.x releases are used, some tests from the
test_sqlalchemy_utils module fail on assertRaises(). This is caused by
the fact that we expect SQLAlchemy to raise exceptions when reflecting
SQLite custom data types from an existing DB schema. This was true for the
SA 0.7.x and 0.8.x branches, but is fixed in 0.9.x releases.

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New


** Tags: db

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Tags added: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1292284

Title:
  A few tests from test_sqlalchemy_utils fail on SQLAlchemy 0.9.x

Status in OpenStack Compute (Nova):
  New

Bug description:
  When SQLAlchemy 0.9.x releases are used, some tests from the
  test_sqlalchemy_utils module fail on assertRaises(). This is caused by
  the fact that we expect SQLAlchemy to raise exceptions when reflecting
  SQLite custom data types from an existing DB schema. This was true for the
  SA 0.7.x and 0.8.x branches, but is fixed in 0.9.x releases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1292284/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1253497] Re: Replace uuidutils.generate_uuid() with str(uuid.uuid4())

2014-02-28 Thread Roman Podoliaka
** Changed in: tuskar
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1253497

Title:
  Replace uuidutils.generate_uuid() with str(uuid.uuid4())

Status in Project Barbican:
  Fix Committed
Status in BillingStack:
  In Progress
Status in Cinder:
  In Progress
Status in Climate:
  In Progress
Status in Designate:
  Fix Committed
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Ironic (Bare Metal Provisioning):
  Fix Released
Status in Manila:
  Fix Released
Status in Murano Project:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in OpenStack Data Processing (Savanna):
  Fix Released
Status in Staccato VM Image And Data Transfer Service:
  In Progress
Status in Taskflow for task-oriented systems.:
  In Progress
Status in Trove - Database as a Service:
  Fix Released
Status in Tuskar:
  Fix Released

Bug description:
  http://lists.openstack.org/pipermail/openstack-
  dev/2013-November/018980.html

  
  > Hi all,
  >
  > We had a discussion of the modules that are incubated in Oslo.
  >
  > https://etherpad.openstack.org/p/icehouse-oslo-status
  >
  > One of the conclusions we came to was to deprecate/remove uuidutils in
  > this cycle.
  >
  > The first step into this change should be to remove generate_uuid() from
  > uuidutils.
  >
  > The reason is that 1) generating the UUID string seems trivial enough to
  > not need a function and 2) string representation of uuid4 is not what we
  > want in all projects.
  >
  > To address this, a patch is now on gerrit.
  > https://review.openstack.org/#/c/56152/
  >
  > Each project should directly use the standard uuid module or implement its
  > own helper function to generate uuids if this patch gets in.
  >
  > Any thoughts on this change? Thanks.
  >

  Unfortunately it looks like that change went through before I caught up on
  email. Shouldn't we have removed its use in the downstream projects (at
  least integrated projects) before removing it from Oslo?

  Doug
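
  The replacement itself is a one-liner; a minimal sketch:

  import uuid

  # instead of uuidutils.generate_uuid():
  new_id = str(uuid.uuid4())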

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1253497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1282514] Re: python 3 only has "__self__", the "im_self" should be replace by "__self_"

2014-02-28 Thread Roman Podoliaka
** Changed in: tuskar
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1282514

Title:
  python 3 only has  "__self__", the "im_self" should be replace by
  "__self_"

Status in Cinder:
  Fix Committed
Status in Ironic (Bare Metal Provisioning):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  In Progress
Status in Tuskar:
  Fix Released

Bug description:
  For code compatible with Python 3, we should use "__self__" instead of 
"im_self". For example:
  cinder/volume/flows/common.py

  def make_pretty_name(method):
  """Makes a pretty name for a function/method."""
  meth_pieces = [method.__name__]
  # If its an instance method attempt to tack on the class name
  if hasattr(method, 'im_self') and method.im_self is not None:
  try:
  meth_pieces.insert(0, method.im_self.__class__.__name__)
  except AttributeError:
  pass
  return ".".join(meth_pieces)

  For reference (thanks Alex for adding this):
  "Changed in version 2.6: For Python 3 forward-compatibility, im_func is also 
available as __func__, and im_self as __self__."
  http://docs.python.org/2/reference/datamodel.html
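
  A minimal sketch of a version that works on both Python 2.6+ and Python 3
  (names are illustrative, not the final Cinder patch):

  def make_pretty_name(method):
      """Makes a pretty name for a function/method."""
      meth_pieces = [method.__name__]
      # __self__ is available on bound methods in Python 2.6+ and Python 3.
      im_self = getattr(method, '__self__', None)
      if im_self is not None:
          try:
              meth_pieces.insert(0, im_self.__class__.__name__)
          except AttributeError:
              pass
      return ".".join(meth_pieces)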

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1282514/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280100] Re: StringIO.StringIO is incompatible for python 3

2014-02-24 Thread Roman Podoliaka
** Changed in: python-tuskarclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1280100

Title:
  StringIO.StringIO is incompatible for python 3

Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Ironic (Bare Metal Provisioning):
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in OpenStack Compute (Nova):
  Invalid
Status in Oslo - a Library of Common OpenStack Code:
  Fix Committed
Status in Python client library for Neutron:
  Invalid
Status in OpenStack Command Line Client:
  Invalid
Status in Python client library for Tuskar:
  Fix Released
Status in Tempest:
  In Progress
Status in Trove - Database as a Service:
  In Progress
Status in Tuskar:
  Invalid
Status in Tuskar UI:
  Invalid

Bug description:
  import StringIO
  StringIO.StringIO()

  should be:
  import six
  six.StringIO() or six.BytesIO()

  StringIO works for unicode
  BytesIO works for bytes

  for Python 3 compatibility.
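
  A minimal usage sketch with six:

  import six

  text_buf = six.StringIO()        # unicode text buffer on py2 and py3
  text_buf.write(u'example')

  byte_buf = six.BytesIO(b'data')  # byte buffer on py2 and py3
  payload = byte_buf.read()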

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1280100/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276510] Re: MySQL 2013 lost connection is being raised

2014-02-24 Thread Roman Podoliaka
** Changed in: tuskar
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1276510

Title:
  MySQL 2013 lost connection is being raised

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  In Progress
Status in Ironic (Bare Metal Provisioning):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Committed
Status in oslo havana series:
  Fix Committed
Status in OpenStack Data Processing (Savanna):
  Fix Committed
Status in Tuskar:
  Fix Released

Bug description:
  MySQL's error code 2013 is not in the list of lost-connection errors.
  This causes the reconnect loop to raise the error and stop retrying.

  [database]
  max_retries = -1
  retry_interval = 1

  mysql down:

  ==> scheduler.log <==
  2014-02-03 16:51:50.956 16184 CRITICAL cinder [-] (OperationalError) (2013, 
"Lost connection to MySQL server at 'reading initial communication packet', 
system error: 0") None None
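
  A minimal sketch of the kind of change needed, assuming the oslo session
  module matches MySQL error codes by substring (the actual helper may differ):

  def _is_db_connection_error(args):
      """Return True if the error looks like a lost DB connection (sketch)."""
      # 2013 ("Lost connection to MySQL server") added alongside the
      # connection errors that were already handled.
      conn_err_codes = ('2002', '2003', '2006', '2013')
      return any(code in args for code in conn_err_codes)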

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1276510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280522] Re: Replace assertEqual(None, *) with assertIsNone in tests

2014-02-24 Thread Roman Podoliaka
** Changed in: tuskar
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280522

Title:
  Replace assertEqual(None, *) with assertIsNone in tests

Status in Cinder:
  Fix Committed
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Orchestration API (Heat):
  In Progress
Status in Ironic (Bare Metal Provisioning):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Python client library for Neutron:
  In Progress
Status in Trove client binding:
  In Progress
Status in OpenStack Data Processing (Savanna):
  Fix Committed
Status in Trove - Database as a Service:
  In Progress
Status in Tuskar:
  Fix Released

Bug description:
  Replace assertEqual(None, *) with assertIsNone in tests to get
  clearer messages in case of failure.
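
  For example:

  # before
  self.assertEqual(None, result)

  # after -- assertIsNone gives a clearer failure message
  self.assertIsNone(result)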

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1280522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280692] Re: old keystone paste configuration embedded in keystone.conf template

2014-02-24 Thread Roman Podoliaka
** Changed in: tripleo
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1280692

Title:
  old keystone paste configuration embedded in keystone.conf template

Status in OpenStack Identity (Keystone):
  Invalid
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  Keystone has started crashing during devtest with the error
  CRITICAL keystone [-] ConfigFileNotFound: The Keystone configuration file 
keystone-paste.ini could not be found.

  The timing and the code touched by commit
  https://review.openstack.org/#/c/73621 suggest it is relevant.

  paste_deploy.config_file now defaults to keystone-paste.ini, but in
  keystone we don't have a keystone-paste.ini because we keep the paste
  configuration in keystone.conf.

  A commit to verify reversing this would fix the problem
  
https://review.openstack.org/#/c/73838/1/elements/keystone/os-apply-config/etc/keystone/keystone.conf

  shows ci passing again
  https://jenkins02.openstack.org/job/check-tripleo-seed-precise/359/
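
  For reference, one hedged way to express the workaround described above
  (assuming the paste pipelines really are embedded in keystone.conf; the
  path shown is illustrative):

    [paste_deploy]
    # Point paste deploy back at the file that actually contains the
    # pipeline definitions instead of the new keystone-paste.ini default.
    config_file = /etc/keystone/keystone.conf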

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1280692/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1266962] Re: Remove set_time_override in timeutils

2014-01-21 Thread Roman Podoliaka
** Changed in: tuskar
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266962

Title:
  Remove set_time_override in timeutils

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  Invalid
Status in Gantt:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  Triaged
Status in Ironic (Bare Metal Provisioning):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in Manila:
  New
Status in OpenStack Message Queuing Service (Marconi):
  Triaged
Status in OpenStack Compute (Nova):
  Triaged
Status in Oslo - a Library of Common OpenStack Code:
  Triaged
Status in Messaging API for OpenStack:
  Fix Committed
Status in Python client library for Keystone:
  Fix Committed
Status in Python client library for Nova:
  Fix Committed
Status in Tuskar:
  Fix Released

Bug description:
  set_time_override was written as a helper function to mock utcnow in
  unittests.

  However, we now use mock or fixtures to mock our objects, so
  set_time_override has become obsolete.

  We should first remove all usage of set_time_override from downstream
  projects before deleting it from oslo.
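
  A minimal sketch of the replacement pattern (the import path is an
  assumption; each project carries its own copy of timeutils under
  openstack.common):

    import datetime

    import mock

    from nova.openstack.common import timeutils

    FAKE_NOW = datetime.datetime(2014, 1, 6, 12, 0, 0)

    # Old style, to be removed:
    #   timeutils.set_time_override(FAKE_NOW)
    #   ...
    #   timeutils.clear_time_override()

    # New style: patch utcnow() only for the code under test.
    with mock.patch.object(timeutils, 'utcnow', return_value=FAKE_NOW):
        assert timeutils.utcnow() == FAKE_NOW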

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1266962/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269008] Re: Icehouse failing with libsqlite >= 3.8.2

2014-01-15 Thread Roman Podoliaka
libsqlite version 3.8.2 changed the format of error messages
(http://repo.or.cz/w/sqlite.git/commit/6b889b7f5759b998436b8c05848b8706cc4e62ac).
We should update the common DB code in oslo to parse unique constraint
errors correctly.
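
A hedged sketch of what the parsing update could look like (the patterns are
illustrative; the eventual oslo fix may differ in detail):

    import re

    _SQLITE_DUP_KEY_RE = (
        # libsqlite < 3.8.2: "column flavorid is not unique" /
        #                    "columns flavorid, deleted are not unique"
        re.compile(r"^.*columns?\s+(?P<columns>.+)\s+(is|are) not unique$"),
        # libsqlite >= 3.8.2: "UNIQUE constraint failed: t.col1, t.col2"
        re.compile(r"^.*UNIQUE constraint failed:\s+(?P<columns>.+)$"),
    )

    def match_unique_error(message):
        """Return a match if the message is a SQLite unique-constraint error."""
        for pattern in _SQLITE_DUP_KEY_RE:
            match = pattern.match(message)
            if match:
                return match
        return None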

** Summary changed:

- Icehouse failing with sqlalchemy 0.8.3
+ Icehouse failing with libsqlite >= 3.8.2

** Project changed: nova => oslo

** Changed in: oslo
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Changed in: oslo
   Status: New => Confirmed

** Changed in: nova
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1269008

Title:
  Icehouse failing with libsqlite >= 3.8.2

Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo - a Library of Common OpenStack Code:
  Confirmed

Bug description:
  ==
  FAIL: 
nova.tests.virt.docker.test_driver.DockerDriverTestCase.test_create_container_wrong_image
  tags: worker-1
  --
  Empty attachments:
pythonlogging:''
stdout

  stderr: {{{
  
/tmp/buildd/nova-2014.1~b1+master/nova/openstack/common/db/sqlalchemy/session.py:480:
 DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
m = _DUP_KEY_RE_DB[engine_name].match(integrity_error.message)
  }}}

  Traceback (most recent call last):
File 
"/tmp/buildd/nova-2014.1~b1+master/nova/tests/virt/docker/test_driver.py", line 
184, in test_create_container_wrong_image
  image_info)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 394, in 
assertRaises
  self.assertThat(our_callable, matcher)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 406, in 
assertThat
  mismatch = matcher.match(matchee)
File "/usr/lib/python2.7/dist-packages/testtools/matchers/_exception.py", 
line 99, in match
  mismatch = self.exception_matcher.match(exc_info)
File "/usr/lib/python2.7/dist-packages/testtools/matchers/_higherorder.py", 
line 61, in match
  mismatch = matcher.match(matchee)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 386, in 
match
  reraise(*matchee)
File "/usr/lib/python2.7/dist-packages/testtools/matchers/_exception.py", 
line 92, in match
  result = matchee()
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 883, in 
__call__
  return self._callable_object(*self._args, **self._kwargs)
File 
"/tmp/buildd/nova-2014.1~b1+master/nova/tests/virt/docker/test_driver.py", line 
147, in test_create_container
  instance_href = utils.get_test_instance()
File "/tmp/buildd/nova-2014.1~b1+master/nova/tests/utils.py", line 78, in 
get_test_instance
  flavor = get_test_flavor(context)
File "/tmp/buildd/nova-2014.1~b1+master/nova/tests/utils.py", line 67, in 
get_test_flavor
  flavor_ref = nova.db.flavor_create(context, test_flavor)
File "/tmp/buildd/nova-2014.1~b1+master/nova/db/api.py", line 1419, in 
flavor_create
  return IMPL.flavor_create(context, values, projects=projects)
File "/tmp/buildd/nova-2014.1~b1+master/nova/db/sqlalchemy/api.py", line 
112, in wrapper
  return f(*args, **kwargs)
File "/tmp/buildd/nova-2014.1~b1+master/nova/db/sqlalchemy/api.py", line 
4219, in flavor_create
  raise db_exc.DBError(e)
  DBError: (IntegrityError) UNIQUE constraint failed: instance_types.flavorid, 
instance_types.deleted u'INSERT INTO instance_types (created_at, updated_at, 
deleted_at, deleted, name, memory_mb, vcpus, root_gb, ephemeral_gb, flavorid, 
swap, rxtx_factor, vcpu_weight, disabled, is_public) VALUES (?, ?, ?, ?, ?, ?, 
?, ?, ?, ?, ?, ?, ?, ?, ?)' ('2014-01-14 14:46:19.926082', None, None, 0, 
'kinda.big', 2048, 4, 40, 80, 'someid', 1024, 1.0, None, 0, 1)
  ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1269008/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261728] [NEW] Interprocess file locks aren't usable in unit tests

2013-12-17 Thread Roman Podoliaka
Public bug reported:

The base test case class has a fixture that overrides the CONF.lock_path
value, which means every test case will have CONF.lock_path set to its own
temporary directory. This makes interprocess locks unusable in unit tests,
which is likely to break tests that are run concurrently using testr
(e.g. tests using MySQL/PostgreSQL might need to be run exclusively).

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261728

Title:
  Interprocess file locks aren't usable in unit tests

Status in OpenStack Compute (Nova):
  New

Bug description:
  The base test case class has a fixture that overrides the CONF.lock_path
  value, which means every test case will have CONF.lock_path set to its
  own temporary directory. This makes interprocess locks unusable in unit
  tests, which is likely to break tests that are run concurrently using
  testr (e.g. tests using MySQL/PostgreSQL might need to be run
  exclusively).
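
  A sketch of the pattern this breaks (the lockutils import path and
  signature are assumptions based on the oslo-incubator code of that era):

    from nova.openstack.common import lockutils

    # Intended use: make DB-heavy tests mutually exclusive across test
    # processes by taking a file-based (external) lock under CONF.lock_path.
    @lockutils.synchronized('mysql-migrations', 'nova-', external=True)
    def run_db_dependent_test():
        pass

    # With the per-test lock_path override, each test process drops its lock
    # file into a different temporary directory, so the external lock no
    # longer excludes anything.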

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1261728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1227892] Re: db migration timeout in unit tests

2013-12-06 Thread Roman Podoliaka
*** This bug is a duplicate of bug 1216851 ***
https://bugs.launchpad.net/bugs/1216851

** This bug has been marked a duplicate of bug 1216851
   nova unit tests occasionally fail migration tests for mysql and postgres

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1227892

Title:
  db migration timeout in unit tests

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  Seeing odd timeouts in the db test_migrations

  ==
  2013-09-19 20:19:52.147 | FAIL: 
nova.tests.db.test_migrations.TestNovaMigrations.test_mysql_opportunistically
  2013-09-19 20:19:52.147 | tags: worker-2
  2013-09-19 20:19:52.147 | 
--
  2013-09-19 20:19:52.147 | Empty attachments:
  2013-09-19 20:19:52.148 |   stderr
  2013-09-19 20:19:52.148 |   stdout
  2013-09-19 20:19:52.148 | 
  2013-09-19 20:19:52.148 | pythonlogging:'': {{{
  2013-09-19 20:19:52.148 | 132 -> 133... 
  2013-09-19 20:19:52.148 | done
  2013-09-19 20:19:52.148 | 133 -> 134... 
  2013-09-19 20:19:52.149 | done
  2013-09-19 20:19:52.149 | 134 -> 135... 
  2013-09-19 20:19:52.149 | done
  2013-09-19 20:19:52.149 | 135 -> 136... 
  2013-09-19 20:19:52.149 | done
  2013-09-19 20:19:52.149 | 136 -> 137... 
  2013-09-19 20:19:52.149 | done
  2013-09-19 20:19:52.150 | 137 -> 138... 
  2013-09-19 20:19:52.150 | done
  2013-09-19 20:19:52.150 | 138 -> 139... 
  2013-09-19 20:19:52.150 | done
  2013-09-19 20:19:52.150 | 139 -> 140... 
  2013-09-19 20:19:52.150 | done
  2013-09-19 20:19:52.150 | 140 -> 141... 
  2013-09-19 20:19:52.151 | done
  2013-09-19 20:19:52.151 | 141 -> 142... 
  2013-09-19 20:19:52.151 | done
  2013-09-19 20:19:52.151 | 142 -> 143... 
  2013-09-19 20:19:52.151 | done
  2013-09-19 20:19:52.151 | 143 -> 144... 
  2013-09-19 20:19:52.151 | done
  2013-09-19 20:19:52.151 | 144 -> 145... 
  2013-09-19 20:19:52.152 | done
  2013-09-19 20:19:52.152 | 145 -> 146... 
  2013-09-19 20:19:52.152 | done
  2013-09-19 20:19:52.152 | 146 -> 147... 
  2013-09-19 20:19:52.152 | done
  2013-09-19 20:19:52.152 | 147 -> 148... 
  2013-09-19 20:19:52.152 | Failed to migrate to version 148 on engine 
Engine(mysql+mysqldb://openstack_citest:openstack_citest@localhost/openstack_citest)
  2013-09-19 20:19:52.153 | }}}
  2013-09-19 20:19:52.153 | 
  2013-09-19 20:19:52.153 | Traceback (most recent call last):
  2013-09-19 20:19:52.153 |   File "nova/tests/db/test_migrations.py", line 
162, in test_mysql_opportunistically
  2013-09-19 20:19:52.153 | self._test_mysql_opportunistically()
  2013-09-19 20:19:52.153 |   File "nova/tests/db/test_migrations.py", line 
321, in _test_mysql_opportunistically
  2013-09-19 20:19:52.153 | self._walk_versions(engine, False, False)
  2013-09-19 20:19:52.154 |   File "nova/tests/db/test_migrations.py", line 
378, in _walk_versions
  2013-09-19 20:19:52.154 | self._migrate_up(engine, version, 
with_data=True)
  2013-09-19 20:19:52.154 |   File "nova/tests/db/test_migrations.py", line 
436, in _migrate_up
  2013-09-19 20:19:52.154 | self.migration_api.upgrade(engine, 
self.REPOSITORY, version)
  2013-09-19 20:19:52.154 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/migrate/versioning/api.py",
 line 186, in upgrade
  2013-09-19 20:19:52.154 | return _migrate(url, repository, version, 
upgrade=True, err=err, **opts)
  2013-09-19 20:19:52.154 |   File "<string>", line 2, in _migrate
  2013-09-19 20:19:52.155 |   File "nova/db/sqlalchemy/migration.py", line 40, 
in patched_with_engine
  2013-09-19 20:19:52.155 | return f(*a, **kw)
  2013-09-19 20:19:52.155 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/migrate/versioning/api.py",
 line 366, in _migrate
  2013-09-19 20:19:52.155 | schema.runchange(ver, change, changeset.step)
  2013-09-19 20:19:52.155 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/migrate/versioning/schema.py",
 line 91, in runchange
  2013-09-19 20:19:52.155 | change.run(self.engine, step)
  2013-09-19 20:19:52.155 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/migrate/versioning/script/py.py",
 line 145, in run
  2013-09-19 20:19:52.156 | script_func(engine)
  2013-09-19 20:19:52.156 |   File 
"/home/jenkins/workspace/gate-nova-python27/nova/db/sqlalchemy/migrate_repo/versions/148_add_instance_actions.py",
 line 67, in upgrade
  2013-09-19 20:19:52.156 | instance_actions.create()
  2013-09-19 20:19:52.156 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/schema.py",
 line 593, in create
  2013-09-19 20:19:52.156 | checkfirst=checkfirst)
  2013-09-19 20:19:52.156 |

[Yahoo-eng-team] [Bug 1178103] Re: can't disable file injection for bare metal

2013-11-22 Thread Roman Podoliaka
** Changed in: tripleo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1178103

Title:
  can't disable file injection for bare metal

Status in Ironic (Bare Metal Provisioning):
  Triaged
Status in OpenStack Compute (Nova):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  For two reasons: a) until we have quantum-pxe done, it won't work,
  and b) file injection always happens.

  One of the reasons to want to disable file injection is to work with
  hardware that gets an ethernet interface other than 'eth0' - e.g. if
  only eth1 is plugged in on the hardware, file injection with its
  hardcoded parameters interferes with network bringup.

  A workaround for homogeneous environments is to change the template to
  hardcode the interface name (s/iface.name/eth2/)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1178103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251700] Re: migration error: invalid version number '0.7.3.dev'

2013-11-21 Thread Roman Podoliaka
** Changed in: tuskar
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251700

Title:
  migration error: invalid version number '0.7.3.dev'

Status in Ironic (Bare Metal Provisioning):
  Fix Committed
Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  In Progress
Status in Tuskar:
  Fix Released

Bug description:
  Using a tripleO seed VM I hit this issue today when trying to run the
  nova db migrations:

  (nova)[root@localhost migrate]#  /opt/stack/venvs/nova/bin/nova-manage 
--debug --verbose db sync
  Command failed, please check log for more info
  2013-11-15 16:53:18,579.579 9082 CRITICAL nova [-] invalid version number 
'0.7.3.dev'
  2013-11-15 16:53:18,579.579 9082 TRACE nova Traceback (most recent call last):
  2013-11-15 16:53:18,579.579 9082 TRACE nova   File 
"/opt/stack/venvs/nova/bin/nova-manage", line 10, in <module>
  2013-11-15 16:53:18,579.579 9082 TRACE nova sys.exit(main())
  2013-11-15 16:53:18,579.579 9082 TRACE nova   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/cmd/manage.py", line 
1378, in main
  2013-11-15 16:53:18,579.579 9082 TRACE nova ret = fn(*fn_args, 
**fn_kwargs)
  2013-11-15 16:53:18,579.579 9082 TRACE nova   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/cmd/manage.py", line 
886, in sync
  2013-11-15 16:53:18,579.579 9082 TRACE nova return 
migration.db_sync(version)
  2013-11-15 16:53:18,579.579 9082 TRACE nova   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/db/migration.py", line 
31, in db_sync
  2013-11-15 16:53:18,579.579 9082 TRACE nova return 
IMPL.db_sync(version=version)
  2013-11-15 16:53:18,579.579 9082 TRACE nova   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/utils.py", line 438, in 
__getattr__
  2013-11-15 16:53:18,579.579 9082 TRACE nova backend = self.__get_backend()
  2013-11-15 16:53:18,579.579 9082 TRACE nova   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/utils.py", line 434, in 
__get_backend
  2013-11-15 16:53:18,579.579 9082 TRACE nova self.__backend = 
__import__(name, None, None, fromlist)
  2013-11-15 16:53:18,579.579 9082 TRACE nova   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/db/sqlalchemy/migration.py",
 line 52, in <module>
  2013-11-15 16:53:18,579.579 9082 TRACE nova 
dist_version.StrictVersion(migrate.__version__) < MIN_PKG_VERSION):
  2013-11-15 16:53:18,579.579 9082 TRACE nova   File 
"/usr/lib64/python2.7/distutils/version.py", line 40, in __init__
  2013-11-15 16:53:18,579.579 9082 TRACE nova self.parse(vstring)
  2013-11-15 16:53:18,579.579 9082 TRACE nova   File 
"/usr/lib64/python2.7/distutils/version.py", line 107, in parse
  2013-11-15 16:53:18,579.579 9082 TRACE nova raise ValueError, "invalid 
version number '%s'" % vstring
  2013-11-15 16:53:18,579.579 9082 TRACE nova ValueError: invalid version 
number '0.7.3.dev'
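
  A quick interpreter illustration (not from the bug report) of why the
  check blows up: StrictVersion rejects a '.dev' suffix outright, while
  LooseVersion tolerates it (though pre-release ordering still needs care):

    >>> from distutils.version import LooseVersion, StrictVersion
    >>> StrictVersion('0.7.3.dev')
    Traceback (most recent call last):
      ...
    ValueError: invalid version number '0.7.3.dev'
    >>> LooseVersion('0.7.3.dev') >= LooseVersion('0.7.3')
    True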

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1251700/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp