[Yahoo-eng-team] [Bug 1460652] [NEW] nova-conductor infinitely reconnects to rabbit

2015-06-01 Thread Michael Kazakov
Public bug reported:

1. Exact version of Nova:

ii  nova-api           1:2014.1.100+git201410062002~trusty-0ubuntu1          all  OpenStack Compute - API frontend
ii  nova-cert          1:2014.1.100+git201410062002~trusty-0ubuntu1          all  OpenStack Compute - certificate management
ii  nova-common        1:2014.1.100+git201410062002~trusty-0ubuntu1          all  OpenStack Compute - common files
ii  nova-conductor     1:2014.1.100+git201410062002~trusty-0ubuntu1          all  OpenStack Compute - conductor service
ii  nova-console       1:2014.1.100+git201410062002~trusty-0ubuntu1          all  OpenStack Compute - Console
ii  nova-consoleauth   1:2014.1.100+git201410062002~trusty-0ubuntu1          all  OpenStack Compute - Console Authenticator
ii  nova-novncproxy    1:2014.1.100+git201410062002~trusty-0ubuntu1          all  OpenStack Compute - NoVNC proxy
ii  nova-scheduler     1:2014.1.100+git201410062002~trusty-0ubuntu1          all  OpenStack Compute - virtual machine scheduler
ii  python-nova        1:2014.1.100+git201410062002~trusty-0ubuntu1          all  OpenStack Compute Python libraries
ii  python-novaclient  1:2.17.0.74.g2598714+git201404220131~trusty-0ubuntu1  all  client library for OpenStack Compute API

rabbit configuration in nova.conf:

  rabbit_hosts = m610-2:5672, m610-1:5672
  rabbit_ha_queues =  true
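
The rabbit_hosts value above is a comma-separated list of host:port pairs, with a space after the comma. As a minimal illustration (the helper name is hypothetical, not oslo.messaging's actual parser), this is the kind of parsing the option requires, and why stray whitespace around the comma can matter to a parser that does not strip it:

```python
def parse_rabbit_hosts(value, default_port=5672):
    """Hypothetical helper: split a rabbit_hosts-style string such as
    "m610-2:5672, m610-1:5672" into (host, port) pairs. Note the space
    after the comma in the value above: without strip(), a naive parser
    would yield the bogus hostname " m610-1"."""
    pairs = []
    for entry in value.split(","):
        entry = entry.strip()  # tolerate "host1, host2" spacing
        host, _, port = entry.partition(":")
        pairs.append((host, int(port) if port else default_port))
    return pairs

print(parse_rabbit_hosts("m610-2:5672, m610-1:5672"))
```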


2. Relevant log files:
/var/log/nova/nova-conductor.log

 exchange 'reply_bea18a6133c548f099b85b168fddf83c' in vhost '/'
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit Traceback (most recent call last):
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 624, in ensure
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     return method(*args, **kwargs)
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 729, in _publish
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     publisher = cls(self.conf, self.channel, topic, **kwargs)
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 361, in __init__
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     type='direct', **options)
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 326, in __init__
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     self.reconnect(channel)
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 334, in reconnect
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     routing_key=self.routing_key)
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/messaging.py", line 82, in __init__
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     self.revive(self._channel)
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/messaging.py", line 216, in revive
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     self.declare()
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/messaging.py", line 102, in declare
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     self.exchange.declare()
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/entity.py", line 166, in declare
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     nowait=nowait, passive=passive,
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/amqp/channel.py", line 612, in exchange_declare
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     (40, 11),  # Channel.exchange_declare_ok
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 75, in wait
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     return self.dispatch_method(method_sig, args, content)
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/amqp/abstr

[Yahoo-eng-team] [Bug 1337264] [NEW] live migrations fail when the libvirt_cpu_mode=host-passthrough option is used

2014-07-03 Thread Michael Kazakov
Public bug reported:

Live migrations fail with a libvirt error in nova-compute.log:

ERROR nova.virt.libvirt.driver [-] [instance:
d8234ed4-1c7b-4683-afc6-0f481f91c6e4] Live Migration failure: internal
error: cannot load AppArmor profile 'libvirt-
d8234ed4-1c7b-4683-afc6-0f481f91c6e4'

libvirtd.log:

warning : qemuDomainObjTaint:1628 : Domain id=6 name='instance-0154' 
uuid=d8234ed4-1c7b-4683-afc6-0f481f91c6e4 is tainted: host-cpu
error : virNetClientProgramDispatchError:175 : internal error: cannot load 
AppArmor profile 'libvirt-d8234ed4-1c7b-4683-afc6-0f481f91c6e4'

libvirt-bin   1.2.2-0ubuntu13.1
nova-compute  1:2014.1+git201406232336~trusty-0ubuntu1
Host CPU model: Intel(R) Xeon(R) CPU E5-2695 v2
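
The failing profile name in the error, libvirt-<instance uuid>, follows libvirt's per-domain AppArmor naming on Ubuntu. A quick check is whether the profile files exist for the domain; the paths below are the assumed stock Ubuntu locations (generated by virt-aa-helper), and the helper name is illustrative only:

```python
import os

def apparmor_profile_paths(instance_uuid):
    """Illustrative helper: paths where Ubuntu's libvirt AppArmor
    integration is assumed to keep the per-domain profile and its
    generated file list (adjust for your distribution)."""
    name = "libvirt-%s" % instance_uuid
    return [
        os.path.join("/etc/apparmor.d/libvirt", name),
        os.path.join("/etc/apparmor.d/libvirt", name + ".files"),
    ]

# Check the domain from the error message above:
for path in apparmor_profile_paths("d8234ed4-1c7b-4683-afc6-0f481f91c6e4"):
    print(path, os.path.exists(path))
```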

** Affects: nova
 Importance: Undecided
 Status: New

** Summary changed:

- live migaraions fails if used libvirt_cpu_mode=host-passthrough optin
+ live migaraions fails if used libvirt_cpu_mode=host-passthrough option

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337264


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337264/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1314130] [NEW] network freezes for several seconds after neutron-plugin-openvswitch-agent service restart

2014-04-29 Thread Michael Kazakov
Public bug reported:

The network freezes for several seconds after a neutron-plugin-openvswitch-agent service restart.
Ubuntu 14.04
Latest neutron code from http://ppa.launchpad.net/openstack-ubuntu-testing/icehouse/ubuntu
ovs-vsctl (Open vSwitch) 2.0.1

neutron-openvswitch-agent log:
2014-04-29 10:44:12.836 36987 ERROR neutron.agent.linux.ovsdb_monitor [req-c5aeb93c-2254-4dc4-af5f-6df3a69995f7 None] Error received from ovsdb monitor: ovsdb-client: unix:/var/run/openvswitch/db.sock: receive failed (End of file)
2014-04-29 10:44:12.968 36987 ERROR neutron.agent.linux.ovs_lib [req-c5aeb93c-2254-4dc4-af5f-6df3a69995f7 None] Unable to execute ['ovs-vsctl', '--timeout=10', 'list-ports', 'br-int']. Exception:
Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', 'list-ports', 'br-int']
Exit code: 1
Stdout: ''
Stderr: '2014-04-29T09:44:12Z|1|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection attempt failed (No such file or directory)\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n'
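
The errors are transient: ovsdb-server's unix socket disappears while the service restarts, so ovs-vsctl fails until it comes back. A generic retry-with-backoff wrapper, sketched below, is one way callers can ride out such a window; this is an illustration, not neutron's actual recovery code:

```python
import time

def call_with_retries(fn, attempts=5, delay=0.5, exceptions=(OSError,)):
    """Generic retry loop with linear backoff for transient failures,
    e.g. a command failing while ovsdb-server's socket is gone during
    a restart. Illustrative only, not neutron's implementation."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except exceptions:
            if attempt == attempts:
                raise
            time.sleep(delay * attempt)

# Demonstration with a stand-in call that fails twice, then succeeds:
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise OSError("No such file or directory")
    return "ok"

print(call_with_retries(flaky, delay=0.01))
```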

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1314130


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1314130/+subscriptions



[Yahoo-eng-team] [Bug 1313752] [NEW] Nova does not update the hostname in the metadata service after an instance name change

2014-04-28 Thread Michael Kazakov
Public bug reported:

Create an instance, then change its name. Running curl
http://169.254.169.254/latest/meta-data/hostname still returns the old
instance name.
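
The metadata hostname is derived from the instance's display name once, at creation time, which is why a later rename is not reflected. As a rough illustration of that derivation step (this is a sketch, not nova's actual sanitize code), the display name is lowercased and disallowed characters are collapsed:

```python
import re

def sanitize_hostname(display_name):
    """Illustrative re-implementation of the kind of sanitisation
    applied when a metadata hostname is derived from a display name:
    lowercase, replace disallowed characters with '-', trim leading
    and trailing separators. Not nova's exact algorithm."""
    hostname = display_name.lower()
    hostname = re.sub(r"[^\w.-]+", "-", hostname)
    return hostname.strip(".-")

print(sanitize_hostname("My Instance #1"))
```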

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1313752


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1313752/+subscriptions



[Yahoo-eng-team] [Bug 1310340] Re: live migration fails when the long hostname of a nova-compute target host is used

2014-04-27 Thread Michael Kazakov
** Also affects: horizon (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1310340

Title:
  live migration fails when use long hostname of a nova compute target
  host

Status in OpenStack Compute (Nova):
  Confirmed
Status in “horizon” package in Ubuntu:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1310340/+subscriptions



[Yahoo-eng-team] [Bug 1310340] [NEW] live migration fails when the long hostname of a nova-compute target host is used

2014-04-20 Thread Michael Kazakov
Public bug reported:

Nova does not perform a live migration when the long hostname of the target host is used.

nova show ubuntu14.04
+--------------------------------------+---------------------+
| Property                             | Value               |
+--------------------------------------+---------------------+
..
| OS-EXT-SRV-ATTR:host                 | compute2            |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute2.site.local |
..

nova live-migration ubuntu14.04 compute2.site.local
ERROR (BadRequest): Compute service of compute2.site.local is unavailable at this time. (HTTP 400) (Request-ID: req-f344c0bf-aaa3-47e6-a24c-8f37e89858e4)

but

nova live-migration ubuntu14.04 compute2

runs without error.


Also, a migration triggered through Horizon always fails, because Horizon uses the long hostname of the target host.
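
The mismatch is between the short service host name ("compute2", which live-migration accepts) and the FQDN reported as hypervisor_hostname ("compute2.site.local"). One workaround a caller could apply, sketched here with a hypothetical helper, is normalizing the target to its short name before issuing the migration:

```python
def short_hostname(target):
    """Illustrative helper: reduce an FQDN such as 'compute2.site.local'
    to the short host name ('compute2') that the live-migration call
    above accepts."""
    return target.split(".", 1)[0]

print(short_hostname("compute2.site.local"))
print(short_hostname("compute2"))
```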

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1310340


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1310340/+subscriptions



[Yahoo-eng-team] [Bug 1309055] [NEW] Post-migration operation fails with "Connection to neutron failed" error

2014-04-17 Thread Michael Kazakov
Public bug reported:

The post-migration operation fails after a successful migration, and the
fields "node", "host" and "task_state" in the "instances" table of the
nova database are not updated. Nova is configured to work with neutron:

grep neutron /etc/nova/nova.conf:

network_api_class=nova.network.neutronv2.api.API
neutron_url=http://controller:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=pass
neutron_admin_auth_url=http://controller:35357/v2.0
neutron_metadata_proxy_shared_secret = pass
service_neutron_metadata_proxy = true

Latest nova/neutron code in Trusty:
nova-compute          1:2014.1-0ubuntu1
python-novaclient     1:2.17.0-0ubuntu1
python-neutronclient  1:2.3.4-0ubuntu1
neutron-common        1:2014.1~rc2-0ubuntu4

/var/log/nova/nova-compute.log has the stacktrace:
TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
TRACE oslo.messaging.rpc.dispatcher     incoming.message))
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch
TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 399, in decorated_function
TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped
TRACE oslo.messaging.rpc.dispatcher     payload)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, in __exit__
TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in wrapped
TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 309, in decorated_function
TRACE oslo.messaging.rpc.dispatcher     e, sys.exc_info())
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, in __exit__
TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 296, in decorated_function
TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4669, in post_live_migration_at_destination
TRACE oslo.messaging.rpc.dispatcher     migration)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/conductor/api.py", line 259, in network_migrate_instance_finish
TRACE oslo.messaging.rpc.dispatcher     migration)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/conductor/rpcapi.py", line 391, in network_migrate_instance_finish
TRACE oslo.messaging.rpc.dispatcher     instance=instance_p, migration=migration_p)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 150, in call
TRACE oslo.messaging.rpc.dispatcher     wait_for_reply=True, timeout=timeout)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, in _send
TRACE oslo.messaging.rpc.dispatcher     timeout=timeout)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 412, in send
TRACE oslo.messaging.rpc.dispatcher     return self._send(target, ctxt, message, wait_for_reply, timeout)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 405, in _send
TRACE oslo.messaging.rpc.dispatcher     raise result
TRACE oslo.messaging.rpc.dispatcher RemoteError: Remote error: ConnectionFailed Connection to neutron failed: Maximum attempts reached

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1309055

Title:
   Post operation of