[Yahoo-eng-team] [Bug 1379775] Re: Messaging enabled by default in Juno

2014-12-03 Thread Mehdi Abaakouk
When 'notification_driver = noop' is set in glance, even if the messaging
transport is configured, I don't see any connection to the messaging
broker.


** Changed in: oslo.messaging
   Status: In Progress => Incomplete

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1379775

Title:
  Messaging enabled by default in Juno

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid
Status in Messaging API for OpenStack:
  Incomplete

Bug description:
  After configuring Glance using the latest Juno RC packages I got the
  following issue when trying to upload an image:

  2014-10-10 12:26:13.015 12703 ERROR
  oslo.messaging._drivers.impl_rabbit
  [bed8aa08-d1f3-4089-9286-beadc9faeba4 7bd2899605ab4037892e1d48b144f3a6
  c637c767fa044299be5a7d5df65a4db7 - - -] AMQP server localhost:5672
  closed the connection. Check login credentials: Socket closed

  According to the documentation the notification driver should be
  'noop' by default (https://github.com/openstack/glance/blob/master/etc
  /glance-api.conf#L237-L239). But it looks like this parameter no
  longer has any effect in Juno: only the rpc_backend parameter is
  relevant, and messaging is enabled by default and cannot be
  disabled.
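
  For reference, a minimal sketch of the glance-api.conf settings under
  discussion (option names as documented for Juno; the second option is
  shown only to illustrate the reported behaviour):

    [DEFAULT]
    # Documented default; expected to disable notifications entirely:
    notification_driver = noop
    # Selects the oslo.messaging driver; per this report, its presence
    # alone appears to bring up a broker connection in Juno:
    rpc_backend = rabbit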

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1379775/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 855030] Re: Encountering sporadic AMQPChannelException

2014-12-03 Thread Rolf Leggewie
oneiric has seen the end of its life and is no longer receiving any
updates. Marking the oneiric task for this ticket as Won't Fix.

** Changed in: nova (Ubuntu Oneiric)
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/855030

Title:
  Encountering sporadic AMQPChannelException

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) diablo series:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Oneiric:
  Won't Fix
Status in nova source package in Precise:
  Fix Released

Bug description:
  Running one controller and one compute, using nova-network version
  2011.3~rc~20110909.r1155-0ub on Oneiric. Repeating the following:

  Create four instances, terminate them.

  This mostly works, but one in every three or four times one of the vms
  does not come up and I see this error in the nova-network log. This
  can occur on either the controller node (also running compute) or the
  compute node. (contents of nova.conf follows)

  2011-09-20 13:22:59,295 DEBUG nova.utils [-] Attempting to grab semaphore 
iptables for method apply... from (pid=1082) inner 
/usr/lib/pymodules/python2.7/nova/utils.py:672
  2011-09-20 13:22:59,295 DEBUG nova.utils [-] Attempting to grab file lock 
iptables for method apply... from (pid=1082) inner 
/usr/lib/pymodules/python2.7/nova/utils.py:677
  2011-09-20 13:22:59,296 DEBUG nova.utils [-] Running cmd (subprocess): sudo 
iptables-save -t filter from (pid=1082) execute 
/usr/lib/pymodules/python2.7/nova/utils.py:165
  2011-09-20 13:22:59,311 DEBUG nova.utils [-] Running cmd (subprocess): sudo 
iptables-restore from (pid=1082) execute 
/usr/lib/pymodules/python2.7/nova/utils.py:165
  2011-09-20 13:22:59,350 DEBUG nova.utils [-] Running cmd (subprocess): sudo 
iptables-save -t nat from (pid=1082) execute 
/usr/lib/pymodules/python2.7/nova/utils.py:165
  2011-09-20 13:22:59,366 DEBUG nova.utils [-] Running cmd (subprocess): sudo 
iptables-restore from (pid=1082) execute 
/usr/lib/pymodules/python2.7/nova/utils.py:165
  2011-09-20 13:22:59,424 ERROR nova.rpc [-] Exception during message handling
  (nova.rpc): TRACE: Traceback (most recent call last):
  (nova.rpc): TRACE:   File 
/usr/lib/pymodules/python2.7/nova/rpc/impl_kombu.py, line 628, in 
_process_data
  (nova.rpc): TRACE: ctxt.reply(None, None)
  (nova.rpc): TRACE:   File 
/usr/lib/pymodules/python2.7/nova/rpc/impl_kombu.py, line 673, in reply
  (nova.rpc): TRACE: msg_reply(self.msg_id, *args, **kwargs)
  (nova.rpc): TRACE:   File 
/usr/lib/pymodules/python2.7/nova/rpc/impl_kombu.py, line 781, in msg_reply
  (nova.rpc): TRACE: conn.direct_send(msg_id, msg)
  (nova.rpc): TRACE:   File 
/usr/lib/pymodules/python2.7/nova/rpc/impl_kombu.py, line 562, in __exit__
  (nova.rpc): TRACE: self._done()
  (nova.rpc): TRACE:   File 
/usr/lib/pymodules/python2.7/nova/rpc/impl_kombu.py, line 547, in _done
  (nova.rpc): TRACE: self.connection.reset()
  (nova.rpc): TRACE:   File 
/usr/lib/pymodules/python2.7/nova/rpc/impl_kombu.py, line 382, in reset
  (nova.rpc): TRACE: self.channel.close()
  (nova.rpc): TRACE:   File 
/usr/lib/python2.7/dist-packages/amqplib/client_0_8/channel.py, line 194, in 
close
  (nova.rpc): TRACE: (20, 41),# Channel.close_ok
  (nova.rpc): TRACE:   File 
/usr/lib/python2.7/dist-packages/amqplib/client_0_8/abstract_channel.py, line 
97, in wait
  (nova.rpc): TRACE: return self.dispatch_method(method_sig, args, content)
  (nova.rpc): TRACE:   File 
/usr/lib/python2.7/dist-packages/amqplib/client_0_8/abstract_channel.py, line 
115, in dispatch_method
  (nova.rpc): TRACE: return amqp_method(self, args)
  (nova.rpc): TRACE:   File 
/usr/lib/python2.7/dist-packages/amqplib/client_0_8/channel.py, line 273, in 
_close
  (nova.rpc): TRACE: (class_id, method_id))
  (nova.rpc): TRACE: AMQPChannelException: (404, u"NOT_FOUND - no exchange 
'3ff1ba7e274a4ec2a2b0a217d7532c70' in vhost '/'", (60, 40), 
'Channel.basic_publish')
  (nova.rpc): TRACE: 
  2011-09-20 13:22:59,451 ERROR nova.rpc [-] Returning exception (404, 
u"NOT_FOUND - no exchange '3ff1ba7e274a4ec2a2b0a217d7532c70' in vhost '/'", 
(60, 40), 'Channel.basic_publish') to caller
  2011-09-20 13:22:59,452 ERROR nova.rpc [-] ['Traceback (most recent call 
last):\n', '  File /usr/lib/pymodules/python2.7/nova/rpc/impl_kombu.py, line 
628, in _process_data\nctxt.reply(None, None)\n', '  File 
/usr/lib/pymodules/python2.7/nova/rpc/impl_kombu.py, line 673, in reply\n
msg_reply(self.msg_id, *args, **kwargs)\n', '  File 
/usr/lib/pymodules/python2.7/nova/rpc/impl_kombu.py, line 781, in msg_reply\n 
   conn.direct_send(msg_id, msg)\n', '  File 
/usr/lib/pymodules/python2.7/nova/rpc/impl_kombu.py, line 562, in __exit__\n  
  self._done()\n', '  File 
/usr/lib/pymodules/python2.7/nova/rpc/impl_kombu.py, 

[Yahoo-eng-team] [Bug 838581] Re: Failures in db_pool code: 'NoneType' object has no attribute '_keymap' or not returning rows.

2014-12-03 Thread Rolf Leggewie
oneiric has seen the end of its life and is no longer receiving any
updates. Marking the oneiric task for this ticket as Won't Fix.

** Changed in: nova (Ubuntu Oneiric)
   Status: Fix Committed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/838581

Title:
  Failures in db_pool code: 'NoneType' object has no attribute '_keymap'
  or not returning rows.

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) diablo series:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Oneiric:
  Won't Fix
Status in nova source package in Precise:
  Fix Released

Bug description:
  Running rev 1514, getting this on instance build in XenServer.

  2011-09-01 02:38:04,374 ERROR nova.rpc [-] Exception during message handling
  (nova.rpc): TRACE: Traceback (most recent call last):
  (nova.rpc): TRACE:   File 
/usr/lib/pymodules/python2.6/nova/rpc/impl_kombu.py, line 620, in 
_process_data
  (nova.rpc): TRACE: rval = node_func(context=ctxt, **node_args)
  (nova.rpc): TRACE:   File /usr/lib/pymodules/python2.6/nova/exception.py, 
line 98, in wrapped
  (nova.rpc): TRACE: return f(*args, **kw)
  (nova.rpc): TRACE:   File 
/usr/lib/pymodules/python2.6/nova/compute/manager.py, line 453, in 
run_instance
  (nova.rpc): TRACE: self._run_instance(context, instance_id, **kwargs)
  (nova.rpc): TRACE:   File 
/usr/lib/pymodules/python2.6/nova/compute/manager.py, line 392, in 
_run_instance
  (nova.rpc): TRACE: requested_networks=requested_networks)
  (nova.rpc): TRACE:   File /usr/lib/pymodules/python2.6/nova/network/api.py, 
line 162, in allocate_for_instance
  (nova.rpc): TRACE: 'args': args})
  (nova.rpc): TRACE:   File 
/usr/lib/pymodules/python2.6/nova/rpc/__init__.py, line 45, in call
  (nova.rpc): TRACE: return get_impl().call(context, topic, msg)
  (nova.rpc): TRACE:   File 
/usr/lib/pymodules/python2.6/nova/rpc/impl_kombu.py, line 739, in call
  (nova.rpc): TRACE: rv = list(rv)
  (nova.rpc): TRACE:   File 
/usr/lib/pymodules/python2.6/nova/rpc/impl_kombu.py, line 703, in __iter__
  (nova.rpc): TRACE: raise result
  (nova.rpc): TRACE: RemoteError: AttributeError 'NoneType' object has no 
attribute '_keymap'
  (nova.rpc): TRACE: [u'Traceback (most recent call last):\n', u'  File 
/usr/lib/pymodules/python2.6/nova/rpc/impl_kombu.py, line 620, in 
_process_data\nrval = node_func(context=ctxt, **node_args)\n', u'  File 
/usr/lib/pymodules/python2.6/nova/network/manager.py, line 462, in 
allocate_for_instance\nnetworks, vpn=vpn)\n', u'  File 
/usr/lib/pymodules/python2.6/nova/network/manager.py, line 971, in 
_allocate_fixed_ips_hack\nself.allocate_fixed_ip(context, instance_id, 
nw)\n', u'  File /usr/lib/pymodules/python2.6/nova/network/manager.py, line 
653, in allocate_fixed_ip\ninstance_id)\n', u'  File 
/usr/lib/pymodules/python2.6/nova/db/api.py, line 343, in 
fixed_ip_associate_pool\ninstance_id, host)\n', u'  File 
/usr/lib/pymodules/python2.6/nova/db/sqlalchemy/api.py, line 100, in 
wrapper\nreturn f(*args, **kwargs)\n', u'  File 
/usr/lib/pymodules/python2.6/nova/db/sqlalchemy/api.py, line 714, in 
fixed_ip_associate_pool\nwith_lockmode(\'update\').\\\n'
 , u'  File /usr/lib/python2.6/dist-packages/sqlalchemy/orm/query.py, line 
1496, in first\nret = list(self[0:1])\n', u'  File 
/usr/lib/python2.6/dist-packages/sqlalchemy/orm/query.py, line 1405, in 
__getitem__\nreturn list(res)\n', u'  File 
/usr/lib/python2.6/dist-packages/sqlalchemy/orm/query.py, line 1669, in 
instances\nfetch = cursor.fetchall()\n', u'  File 
/usr/lib/python2.6/dist-packages/sqlalchemy/engine/base.py, line 2383, in 
fetchall\nl = self.process_rows(self._fetchall_impl())\n', u'  File 
/usr/lib/python2.6/dist-packages/sqlalchemy/engine/base.py, line 2366, in 
process_rows\nkeymap = metadata._keymap\n', uAttributeError: 'NoneType' 
object has no attribute '_keymap'\n]
  (nova.rpc): TRACE:

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/838581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 871278] Re: Cannot attach volumes to instances if tgt is used

2014-12-03 Thread Rolf Leggewie
oneiric has seen the end of its life and is no longer receiving any
updates. Marking the oneiric task for this ticket as Won't Fix.

** Changed in: nova (Ubuntu Oneiric)
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/871278

Title:
  Cannot attach volumes to instances if tgt is used

Status in OpenStack Compute (Nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Oneiric:
  Won't Fix
Status in nova source package in Precise:
  Fix Released

Bug description:
  I cannot attach a volume to an instance if both are on different
  hosts.

  This is a problem specific to tgt, which is used on Oneiric. I am
  filing this bug against Nova (Ubuntu) and not Nova (OpenStack), even
  though it is not an Ubuntu packaging problem.

  Example:

  SW: OpenStack 2011.3 (Diablo) -- Package: 2011.3-0ubuntu6
  OS: Ubuntu 11.10 (Oneiric -- Beta2)

  node1 (192.168.1.201): vol-0004
  node2 (192.168.1.202): i-0004

  root@node1:~# euca-attach-volume -i i-0004 -d /dev/vdc
  vol-0004

  Error seen in nova-compute.log:

  Error: iSCSI device not found at /dev/disk/by-
  path/ip-192.168.1.201:3260-iscsi-
  iqn.2010-10.org.openstack:volume-0004-lun-0

  Explanation:

  nova-compute is waiting for a device to be configured as:

  /dev/disk/by-path/ip-192.168.1.201:3260-iscsi-
  iqn.2010-10.org.openstack:volume-0004-lun-0

  but the device is configured as:

  /dev/disk/by-path/ip-192.168.1.201:3260-iscsi-
  iqn.2010-10.org.openstack:volume-0004-lun-1

  nova-compute waits for xxx-lun-0, but xxx-lun-1 is configured:

  root@node2:/dev/disk/by-path# ls -l
  lrwxrwxrwx 1 root root  9 2011-10-08 22:02 
ip-192.168.1.201:3260-iscsi-iqn.2010-10.org.openstack:volume-0004-lun-1 -> 
../../sdd

  I guess this is a difference between the iet and tgt software used to
  configure iSCSI targets.

  On Oneiric, tgt is used by default, and iet cannot be used because the
  kernel module is no longer available.

  In nova.conf:

  --iscsi_helper=tgtadm

  If I adapt the code from lun-0 to lun-1, everything works well:

  diff /usr/share/pyshared/nova/volume/driver.py-patched 
/usr/share/pyshared/nova/volume/driver.py-unpatched
  536c536
  <     mount_device = ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-1" %
  ---
  >     mount_device = ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-0" %
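
  A sketch of the same line with the LUN made explicit (the flag check
  and variable names are illustrative, not nova's actual code):

    # iet exposes the volume at LUN 0, tgt at LUN 1, so the LUN has to
    # follow the configured iscsi_helper instead of being hard-coded.
    lun = 1 if FLAGS.iscsi_helper == 'tgtadm' else 0
    mount_device = ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-%d"
                    % (iscsi_ip_address, iqn, lun))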

  Complete nova-compute.log:

  root@node2:~# tail -50 /var/log/nova/nova-compute.log
  2011-10-08 22:02:31,179 DEBUG nova.rpc [-] received {u'_context_roles': 
[u'projectmanager'], u'_context_request_id': 
u'8911d1d6-66eb-48eb-ab9a-f7a387d858fa', u'_context_read_deleted': False, 
u'args': {u'instance_id': 4, u'mountpoint': u'/dev/vdc', u'volume_id': 4}, 
u'_context_auth_token': None, u'_context_strategy': u'noauth', 
u'_context_is_admin': True, u'_context_project_id': u'project-one', 
u'_context_timestamp': u'2011-10-08T20:02:31.145628', u'_context_user_id': 
u'admin', u'method': u'attach_volume', u'_context_remote_address': 
u'192.168.1.201'} from (pid=1585) __call__ 
/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:600
  2011-10-08 22:02:31,180 DEBUG nova.rpc [-] unpacked context: {'user_id': 
u'admin', 'roles': [u'projectmanager'], 'timestamp': 
u'2011-10-08T20:02:31.145628', 'auth_token': None, 'msg_id': None, 
'remote_address': u'192.168.1.201', 'strategy': u'noauth', 'is_admin': True, 
'request_id': u'8911d1d6-66eb-48eb-ab9a-f7a387d858fa', 'project_id': 
u'project-one', 'read_deleted': False} from (pid=1585) _unpack_context 
/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:646
  2011-10-08 22:02:31,181 INFO nova.compute.manager 
[8911d1d6-66eb-48eb-ab9a-f7a387d858fa admin project-one] check_instance_lock: 
decorating: |<function attach_volume at 0x2946230>|
  2011-10-08 22:02:31,181 INFO nova.compute.manager 
[8911d1d6-66eb-48eb-ab9a-f7a387d858fa admin project-one] check_instance_lock: 
arguments: |<nova.compute.manager.ComputeManager object at 0x21e5790>| 
|<nova.rpc.impl_kombu.RpcContext object at 0x43fdf10>| |4|
  2011-10-08 22:02:31,181 DEBUG nova.compute.manager 
[8911d1d6-66eb-48eb-ab9a-f7a387d858fa admin project-one] instance 4: getting 
locked state from (pid=1585) get_lock 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:1165
  2011-10-08 22:02:31,333 INFO nova.compute.manager 
[8911d1d6-66eb-48eb-ab9a-f7a387d858fa admin project-one] check_instance_lock: 
locked: |False|
  2011-10-08 22:02:31,334 INFO nova.compute.manager 
[8911d1d6-66eb-48eb-ab9a-f7a387d858fa admin project-one] check_instance_lock: 
admin: |True|
  2011-10-08 22:02:31,334 INFO nova.compute.manager 
[8911d1d6-66eb-48eb-ab9a-f7a387d858fa admin project-one] check_instance_lock: 
executing: |<function attach_volume at 0x2946230>|
  2011-10-08 22:02:31,410 AUDIT nova.compute.manager 
[8911d1d6-66eb-48eb-ab9a-f7a387d858fa admin project-one] 

[Yahoo-eng-team] [Bug 879666] Re: chown error for console.fifo when launching vm

2014-12-03 Thread Rolf Leggewie
oneiric has seen the end of its life and is no longer receiving any
updates. Marking the oneiric task for this ticket as Won't Fix.

** Changed in: nova (Ubuntu Oneiric)
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/879666

Title:
  chown error for console.fifo when launching vm

Status in OpenStack Compute (Nova):
  Invalid
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Oneiric:
  Won't Fix
Status in nova source package in Precise:
  Fix Released

Bug description:
  [Impact]
  fill me in with explanation of severity and frequency of bug on users and 
justification for backporting the fix to the stable release

  [Development Fix]
  fill me in with an explanation of how the bug has been addressed in the 
development branch, including the relevant version numbers of packages modified 
in order to implement the fix. 

  [Stable Fix]
  fill me in by pointing out a minimal patch applicable to the stable version 
of the package.

  [Test Case]
  fill me in with detailed *instructions* on how to reproduce the bug.  This 
will be used by people later on to verify the updated package fixes the 
problem.
  1.
  2.
  3.
  Broken Behavior: 
  Fixed Behavior: 

  [Regression Potential]
  fill me in with a discussion of likelihood and potential severity of 
regressions and how users could get inadvertently affected. 

  [Original Report]
  I saw this once before, when I was less confident I knew how to run nova. I 
cannot reliably reproduce this problem. Here are the contents of the directory 
supposedly containing the missing file:

  root@xg11eth0:~# ls -l /var/lib/nova/instances/instance-0001/
  total 82492
  prw-rw---- 1 root root        0 2011-10-21 16:21 console.fifo.in
  prw-rw---- 1 root root        0 2011-10-21 16:22 console.fifo.out
  -rw-rw-r-- 1 nova nova    65545 2011-10-21 16:22 console.ring
  -rw-r--r-- 1 root root 75497472 2011-10-21 16:26 disk
  -rw-r--r-- 1 root root 12582912 2011-10-21 16:22 disk.local
  -rw-rw-r-- 1 root root  4732048 2011-10-21 16:21 kernel
  -rw-rw-r-- 1 nova nova     1750 2011-10-21 16:31 libvirt.xml

  This was running from packages using oneiric-proposed

  2011-10-21 16:31:54,147 DEBUG nova.utils [-] Running cmd (subprocess): mkdir 
-p /var/lib/nova/instances/instance-0001/ from (pid=1185) execute 
/usr/lib/python2.7/dist-package\
  s/nova/utils.py:168
  2011-10-21 16:31:54,153 INFO nova.virt.libvirt_conn [-] instance 
instance-0001: Creating image
  2011-10-21 16:31:54,153 DEBUG nova.utils [-] Running cmd (subprocess): sudo 
chown 107 /var/lib/nova/instances/instance-0001/console.fifo from 
(pid=1185) execute /usr/lib/pyth\
  on2.7/dist-packages/nova/utils.py:168
  2011-10-21 16:31:54,173 DEBUG nova.utils [-] Result was 1 from (pid=1185) 
execute /usr/lib/python2.7/dist-packages/nova/utils.py:183
  2011-10-21 16:31:54,174 ERROR nova.exception [-] Uncaught exception
  (nova.exception): TRACE: Traceback (most recent call last):
  (nova.exception): TRACE:   File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 98, in wrapped
  (nova.exception): TRACE: return f(*args, **kw)
  (nova.exception): TRACE:   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py, line 676, 
in spawn
  (nova.exception): TRACE: block_device_info=block_device_info)
  (nova.exception): TRACE:   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py, line 928, 
in _create_image
  (nova.exception): TRACE: run_as_root=True)
  (nova.exception): TRACE:   File 
/usr/lib/python2.7/dist-packages/nova/utils.py, line 191, in execute
  (nova.exception): TRACE: cmd=' '.join(cmd))
  (nova.exception): TRACE: ProcessExecutionError: Unexpected error while 
running command.
  (nova.exception): TRACE: Command: sudo chown 107 
/var/lib/nova/instances/instance-0001/console.fifo
  (nova.exception): TRACE: Exit code: 1
  (nova.exception): TRACE: Stdout: ''
  (nova.exception): TRACE: Stderr: chown: cannot access 
`/var/lib/nova/instances/instance-0001/console.fifo': No such file or 
directory\n
  (nova.exception): TRACE:
  2011-10-21 16:31:54,192 ERROR nova.compute.manager [-] Instance '1' failed to 
spawn. Is virtualization enabled in the BIOS? Details: Unexpected error while 
running command.
  Command: sudo chown 107 /var/lib/nova/instances/instance-0001/console.fifo
  Exit code: 1
  Stdout: ''
  Stderr: chown: cannot access 
`/var/lib/nova/instances/instance-0001/console.fifo': No such file or 
directory\n
  (nova.compute.manager): TRACE: Traceback (most recent call last):
  (nova.compute.manager): TRACE:   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 424, in 
_run_instance
  (nova.compute.manager): TRACE: network_info, block_device_info)
  (nova.compute.manager): TRACE:   File 

[Yahoo-eng-team] [Bug 1398748] [NEW] MySQL engine isn't specified for tables with foreign key constraints.

2014-12-03 Thread Ilya Pekelny
Public bug reported:

By default the CI uses the MyISAM MySQL engine, which doesn't support
foreign key constraints. Thus we must explicitly set the MySQL engine to
InnoDB on every table that declares foreign keys or is a foreign-key
target. Otherwise foreign keys can't be declared.
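
A minimal SQLAlchemy sketch of the fix (table and column names are
illustrative): pin mysql_engine on both sides of the relationship.

  from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table

  meta = MetaData()

  # Both the referencing table and the FK target must be InnoDB;
  # under MyISAM the constraint would be silently ignored.
  parent = Table('parent', meta,
                 Column('id', Integer, primary_key=True),
                 mysql_engine='InnoDB')
  child = Table('child', meta,
                Column('id', Integer, primary_key=True),
                Column('parent_id', Integer, ForeignKey('parent.id')),
                mysql_engine='InnoDB')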

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1398748

Title:
  MySQL engine isn't specified for tables with foreign key
  constraints.

Status in OpenStack Identity (Keystone):
  New

Bug description:
  By default the CI uses the MyISAM MySQL engine, which doesn't support
  foreign key constraints. Thus we must explicitly set the MySQL engine
  to InnoDB on every table that declares foreign keys or is a
  foreign-key target. Otherwise foreign keys can't be declared.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1398748/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1066845] Re: nova-novncproxy is not running; Suggest: novnc should be Depends

2014-12-03 Thread Rolf Leggewie
quantal has seen the end of its life and is no longer receiving any
updates. Marking the quantal task for this ticket as Won't Fix.

** Changed in: nova (Ubuntu Quantal)
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1066845

Title:
  nova-novncproxy is not running; Suggest: novnc should be Depends

Status in OpenStack Compute (Nova):
  Invalid
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Quantal:
  Won't Fix
Status in nova source package in Raring:
  Fix Released

Bug description:
  When I try to start novnc on the controller node I get the following message:
  nova-novncproxy is not running

  I'm using OpenStack Folsom with Quantum and OpenVswitch plugin. (I'm using 
the packages in Folsom Trunk Testing)
  I've a network node that's running:
  - quantum-plugin-openvswitch-agent
  - quantum-l3-agent
  - quantum-dhcp-agent

  Cloud controller runs:
  - rabbitmq
  - nova-api
  - nova-scheduler
  - nova-keystone
  - nova-consoleauth
  - nova-console
  - quantum-server

  
  Compute node runs
  - nova-compute-qemu
  - quantum-plugin-openvswitch-agent


  
  nova.conf controller

  [DEFAULT]
  # LOGS/STATE
  #root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
  rootwrap_config=/etc/nova/rootwrap.conf
  verbose=True
  logdir=/var/log/nova
  state_path=/var/lib/nova
  lock_path=/var/lock/nova
  # AUTHENTICATION
  auth_strategy=keystone
  # SCHEDULER
  compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

  # VOLUMES
  volume_group=nova-volumes
  volume_name_template=volume-%08x
  iscsi_helper=tgtadm

  # DATABASE
  sql_connection=mysql://nova:pass@controllerIP/nova

  #libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybirdOVSBridgeDriver

  # COMPUTE
  libvirt_type=qemu
  compute_driver=libvirt.LibvirtDriver
  #instance_name_template=instance-%08x
  api_paste_config=/etc/nova/api-paste.ini
  allow_resize_to_same_host=True

  # APIS
  
#osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
  #ec2_dmz_host= controllerIP
  #s3_host= controllerIP

  # NOVNC CONSOLE
  novnc_enabled=true 
  novncproxy_base_url=http://controllerIP:6080/vnc_auto.html
  # Change vncserver_proxyclient_address and vncserver_listen to match each
  #compute host
  #vncserver_proxyclient_address=
  #vncserver_listen=

  # RABBITMQ
  rabbit_host= controllerIP
  cc_host= controllerIP
  # GLANCE
  image_service=nova.image.glance.GlanceImageService
  glance_api_servers=controllerIP:9292

  #Quantum
  network_api_class=nova.network.quantumv2.api.API
  quantum_url=http://controllerIP:9696
  quantum_auth_strategy=keystone
  quantum_admin_tenant_name=service
  quantum_admin_username=quantum
  quantum_admin_password=Pass
  quantum_admin_auth_url=http://controllerIP:35357/v2.0

  # needed only for nova-compute
  libvirt_ovs_bridge=br-int
  libvirt_vif_type=ethernet
  libvirt_use_virtio_for_bridges=True
  libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver

  #[keystone_authtoken]
  auth_host = controllerIP
  auth_port = 35357
  auth_protocol = http
  auth_uri = controllerIP:5000/
  admin_tenant_name = service
  admin_user = nova
  admin_password = PAss





  nova.conf file compute-node

  [DEFAULT]
  # LOGS/STATE
  #root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
  rootwrap_config=/etc/nova/rootwrap.conf
  verbose=True
  logdir=/var/log/nova
  state_path=/var/lib/nova
  lock_path=/var/lock/nova
  # AUTHENTICATION
  auth_strategy=keystone
  # SCHEDULER
  compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

  # VOLUMES
  volume_group=nova-volumes
  volume_name_template=volume-%08x
  iscsi_helper=tgtadm

  # DATABASE
  sql_connection=mysql://nova:pass@controllerIP/nova

  #libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybirdOVSBridgeDriver

  # COMPUTE
  libvirt_type=qemu
  compute_driver=libvirt.LibvirtDriver
  #instance_name_template=instance-%08x
  api_paste_config=/etc/nova/api-paste.ini
  allow_resize_to_same_host=True

  # APIS
  
#osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
  #ec2_dmz_host= controllerIP
  #s3_host= controllerIP

  # NOVNC CONSOLE
  novnc_enabled=true 
  novncproxy_base_url=http://controllerIP:6080/vnc_auto.html
  # Change vncserver_proxyclient_address and vncserver_listen to match each
  #compute host
  #vncserver_proxyclient_address=Compute-nodeIP
  #vncserver_listen=Compute-nodeIP

  # RABBITMQ
  rabbit_host= controllerIP
  cc_host= controllerIP
  # GLANCE
  image_service=nova.image.glance.GlanceImageService
  glance_api_servers=controllerIP:9292

  #Quantum
  network_api_class=nova.network.quantumv2.api.API
  quantum_url=http://controllerIP:9696
  quantum_auth_strategy=keystone
  quantum_admin_tenant_name=service
  quantum_admin_username=quantum
  quantum_admin_password=Pass
  

[Yahoo-eng-team] [Bug 1398754] [NEW] LBaas v1 Associate Monitor to Pool Fails

2014-12-03 Thread Robin Wang
Public bug reported:

Trying to associate a health monitor to a pool in horizon, there's no
monitor listed on the Associate Monitor page.

Reproduce Procedure: 
1.  Create Pool 
2.  Add two members
3.  Create Health Monitor
4.  Click Associate Monitor button of pool resource
5.  There's no monitor listed.

***
At this point, use CLI to:
1.  show pool, there's no monitor associated yet.
+------------------------+-------+
| Field                  | Value |
+------------------------+-------+
| health_monitors        |       |
| health_monitors_status |       |
+------------------------+-------+

2. list monitors; there's an available monitor.
$ neutron lb-healthmonitor-list
+--------------------------------------+------+----------------+
| id                                   | type | admin_state_up |
+--------------------------------------+------+----------------+
| f5e764f0-eceb-4516-9919-7806f409c1ae | HTTP | True           |
+--------------------------------------+------+----------------+

3. Associate monitor to pool. Succeeded.
$ neutron lb-healthmonitor-associate  f5e764f0-eceb-4516-9919-7806f409c1ae  
mypool
Associated health monitor f5e764f0-eceb-4516-9919-7806f409c1ae

*

Based on the above info, it should be a horizon bug. Thanks.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1398754

Title:
  LBaas v1 Associate Monitor to Pool Fails

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Trying to associate a health monitor to a pool in horizon, there's no
  monitor listed on the Associate Monitor page.

  Reproduce Procedure: 
  1.  Create Pool 
  2.  Add two members
  3.  Create Health Monitor
  4.  Click Associate Monitor button of pool resource
  5.  There's no monitor listed.

  ***
  At this point, use CLI to:
  1.  show pool, there's no monitor associated yet.
  +------------------------+-------+
  | Field                  | Value |
  +------------------------+-------+
  | health_monitors        |       |
  | health_monitors_status |       |
  +------------------------+-------+

  2. list monitors; there's an available monitor.
  $ neutron lb-healthmonitor-list
  +--------------------------------------+------+----------------+
  | id                                   | type | admin_state_up |
  +--------------------------------------+------+----------------+
  | f5e764f0-eceb-4516-9919-7806f409c1ae | HTTP | True           |
  +--------------------------------------+------+----------------+

  3. Associate monitor to pool. Succeeded.
  $ neutron lb-healthmonitor-associate  f5e764f0-eceb-4516-9919-7806f409c1ae  
mypool
  Associated health monitor f5e764f0-eceb-4516-9919-7806f409c1ae

  *

  Based on the above info, it should be a horizon bug. Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1398754/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398756] [NEW] Foreign key exists in the oauth1 extension models but doesn't exist in the tables.

2014-12-03 Thread Ilya Pekelny
Public bug reported:

The oauth1 extension models contain foreign keys which were deleted a
few migrations earlier.

** Affects: keystone
 Importance: Undecided
 Status: New

** Summary changed:

- Foreign key exists in the models but doesn't exist in the tables.
+ Foreign key exists in the oauth1 extension models but doesn't exist in the 
tables.

** Description changed:

- The models contains a foreign keys which was deleted a few migrations
- above.
+ The oauth1 extension models contains a foreign keys which was deleted a
+ few migrations above.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1398756

Title:
  Foreign key exists in the oauth1 extension models but doesn't exist in
  the tables.

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The oauth1 extension models contain foreign keys which were deleted
  a few migrations earlier.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1398756/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1100920] Re: In Ubuntu 12.10, the legacy 'user' cloud-config option is not handled properly

2014-12-03 Thread Rolf Leggewie
quantal has seen the end of its life and is no longer receiving any
updates. Marking the quantal task for this ticket as Won't Fix.

** Changed in: cloud-init (Ubuntu Quantal)
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1100920

Title:
  In Ubuntu 12.10, the legacy 'user' cloud-config option is not handled
  properly

Status in Init scripts for use on cloud images:
  Fix Released
Status in Orchestration API (Heat):
  Invalid
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Quantal:
  Won't Fix
Status in cloud-init source package in Raring:
  Fix Released

Bug description:
  When trying to use HEAT with Ubuntu 12.10, the following cloud-config
  is used:

  _BEGIN_
  runcmd:
   - setenforce 0 > /dev/null 2>&1 || true

  user: ec2-user

  cloud_config_modules:
   - locale
   - set_hostname
   - ssh
   - timezone
   - update_etc_hosts
   - update_hostname
   - runcmd

  # Capture all subprocess output into a logfile
  # Useful for troubleshooting cloud-init issues
  output: {all: '| tee -a /var/log/cloud-init-output.log'}
  _END_
  This results in ec2-user being created, but no SSH keys for it.
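
  For comparison, a sketch of the newer users: syntax (assuming a
  cloud-init release with the users/groups module; the key material is
  a placeholder) that both creates the user and installs SSH keys:

  #cloud-config
  users:
   - default
   - name: ec2-user
     sudo: ALL=(ALL) NOPASSWD:ALL
     ssh_authorized_keys:
      - ssh-rsa AAAAB3Nza... placeholder-key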

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1100920/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398756] Re: Foreign key exists in the oauth1 extension models but doesn't exist in the tables.

2014-12-03 Thread Ilya Pekelny
** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1398756

Title:
  Foreign key exists in the oauth1 extension models but doesn't exist in
  the tables.

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  The oauth1 extension models contain foreign keys which were deleted
  a few migrations earlier.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1398756/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398766] [NEW] /tmp is not cleaned after image upload from horizon

2014-12-03 Thread Vlad Okhrimenko
Public bug reported:

When creating an image in Horizon (Image Source == Image File), the
uploaded file is not cleaned up from /tmp after a successful upload. I
uploaded two images (229M and 564M):

vokhrimenko@ubuntu:~$ ls -larth /tmp
total 793M
drwxr-xr-x 22 root        root        4.0K Nov 26 10:36 ..
drwxrwxrwt  2 root        root        4.0K Nov 26 10:50 .X11-unix
drwxrwxrwt  2 root        root        4.0K Nov 26 10:50 .ICE-unix
drwxr-xr-x  2 vokhrimenko vokhrimenko 4.0K Nov 26 10:51 pip_build_vokhrimenko
-rw-------  1 vokhrimenko vokhrimenko 229M Nov 28 14:15 tmpZkebNW.upload
-rw-------  1 vokhrimenko vokhrimenko 564M Nov 28 14:16 tmply4I4r.upload
drwxrwxrwt  5 root        root        4.0K Nov 28 14:17 .
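
A minimal sketch of the expected pattern (the image-service call is
hypothetical, not Horizon's actual code): spool the upload to a temp
file and always unlink it, on success or failure.

  import os
  import tempfile

  def upload_image(api_client, chunks):
      fd, path = tempfile.mkstemp(suffix='.upload')
      try:
          with os.fdopen(fd, 'wb') as spool:
              for chunk in chunks:
                  spool.write(chunk)
          with open(path, 'rb') as spool:
              api_client.image_create(data=spool)  # hypothetical call
      finally:
          # The cleanup this report says is missing today.
          os.unlink(path)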

** Affects: horizon
 Importance: Undecided
 Assignee: Vlad Okhrimenko (vokhrimenko)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) = Vlad Okhrimenko (vokhrimenko)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1398766

Title:
   /tmp is not cleaned after image upload from horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When creating an image in Horizon (Image Source == Image File), the
  uploaded file is not cleaned up from /tmp after a successful upload.
  I uploaded two images (229M and 564M):

  vokhrimenko@ubuntu:~$ ls -larth /tmp
  total 793M
  drwxr-xr-x 22 root        root        4.0K Nov 26 10:36 ..
  drwxrwxrwt  2 root        root        4.0K Nov 26 10:50 .X11-unix
  drwxrwxrwt  2 root        root        4.0K Nov 26 10:50 .ICE-unix
  drwxr-xr-x  2 vokhrimenko vokhrimenko 4.0K Nov 26 10:51 pip_build_vokhrimenko
  -rw-------  1 vokhrimenko vokhrimenko 229M Nov 28 14:15 tmpZkebNW.upload
  -rw-------  1 vokhrimenko vokhrimenko 564M Nov 28 14:16 tmply4I4r.upload
  drwxrwxrwt  5 root        root        4.0K Nov 28 14:17 .

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1398766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398768] [NEW] If we set a gateway outside subnet neutron-l3-agent fails when trying to set the route in the virtual router.

2014-12-03 Thread Miguel Angel Ajo
Public bug reported:

2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent Traceback (most recent 
call last):
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.7/site-packages/neutron/common/utils.py, line 341, in call
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent return 
func(*args, **kwargs)
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/l3_agent.py, line 938, in 
process_router
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent 
self.external_gateway_added(ri, ex_gw_port, interface_name)
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/l3_agent.py, line 1318, in 
external_gateway_added
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent ri.ns_name, 
preserve_ips)
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/l3_agent.py, line 1362, in 
_external_gateway_added
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent 
preserve_ips=preserve_ips)
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/interface.py, line 120, 
in init_l3
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent 
device.route.add_gateway(gateway)
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py, line 395, in 
add_gateway
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent 
self._as_root(*args)
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py, line 242, in 
_as_root
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent 
kwargs.get('use_root_namespace', False))
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py, line 74, in 
_as_root
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent 
log_fail_as_error=self.log_fail_as_error)
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py, line 86, in 
_execute
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent 
log_fail_as_error=log_fail_as_error)
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py, line 84, in 
execute
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent raise 
RuntimeError(m)
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent RuntimeError:
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent Command: ['sudo', 
'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-bcfe22ff-049c-4eb9-9b57-235d903ee52f', 'ip', 'route', 'replace', 
'default', 'via', '37.187.128.254', 'dev', 'qg-eaa2de59-95']
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent Exit code: 2
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent Stdout: ''
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent Stderr: 'RTNETLINK 
answers: Network is unreachable\n'
2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent


This happens because we should first set an on-link route to the
gateway IP on the external network interface; only then is the default
route via that gateway valid.

Once this works, we can remove the force_gateway_on_subnet enforcement,
allowing gateways outside the subnet, as it's a valid use case in many
data centers.
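
A sketch of the routing setup the agent needs inside the router
namespace (namespace, device and gateway names taken from the traceback
above):

  # Make the off-subnet gateway reachable on-link via the external
  # port; only then does the default route via it stop failing with
  # "Network is unreachable".
  ip netns exec qrouter-bcfe22ff-049c-4eb9-9b57-235d903ee52f \
      ip route replace 37.187.128.254 dev qg-eaa2de59-95
  ip netns exec qrouter-bcfe22ff-049c-4eb9-9b57-235d903ee52f \
      ip route replace default via 37.187.128.254 dev qg-eaa2de59-95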

** Affects: neutron
 Importance: Undecided
 Assignee: Miguel Angel Ajo (mangelajo)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = Miguel Angel Ajo (mangelajo)

** Description changed:

+ 014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent Traceback (most 
recent call last):
+ 2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.7/site-packages/neutron/common/utils.py, line 341, in call
+ 2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent return 
func(*args, **kwargs)
+ 2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/l3_agent.py, line 938, in 
process_router
+ 2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent 
self.external_gateway_added(ri, ex_gw_port, interface_name)
+ 2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/l3_agent.py, line 1318, in 
external_gateway_added
+ 2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent ri.ns_name, 
preserve_ips)
+ 2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/l3_agent.py, line 1362, in 
_external_gateway_added
+ 2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent 
preserve_ips=preserve_ips)
+ 2014-11-30 16:25:10.113 8086 TRACE neutron.agent.l3_agent   File 

[Yahoo-eng-team] [Bug 1398779] [NEW] radvd >= 2.0 blocks router update processing

2014-12-03 Thread Ihar Hrachyshka
Public bug reported:

In radvd 2.0+, the daemonization code was rewritten, switching from
libdaemon's daemon_fork() to the Linux daemon() call.

If no logging method (-m option) is passed to radvd, and the default
logging method is used (which is L_STDERR_SYSLOG), then daemon() is
called with (1, 1) arguments, meaning no chdir (fine) and not closing
stderr (left there for logging) (not fine). So the execute() call that
spawns radvd and expects it to daemonize and return never actually
completes; it stays blocked on stderr.

The fix is to pass e.g. -m syslog to radvd to make it close stderr and
return.
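
A sketch of the spawn with the fix applied (config and pid file paths
are illustrative):

  radvd -C /etc/neutron/ra/<router-id>.radvd.conf \
        -p /var/run/neutron/ra/<router-id>.pid \
        -m syslog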

** Affects: neutron
 Importance: Undecided
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1398779

Title:
  radvd >= 2.0 blocks router update processing

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In radvd 2.0+, the daemonization code was rewritten, switching from
  libdaemon's daemon_fork() to the Linux daemon() call.

  If no logging method (-m option) is passed to radvd, and the default
  logging method is used (which is L_STDERR_SYSLOG), then daemon() is
  called with (1, 1) arguments, meaning no chdir (fine) and not closing
  stderr (left there for logging) (not fine). So the execute() call
  that spawns radvd and expects it to daemonize and return never
  actually completes; it stays blocked on stderr.

  The fix is to pass e.g. -m syslog to radvd to make it close stderr and
  return.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1398779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396689] Re: Nova ignores configs, uses wrong driver/protocol for AMQP from Havana -> Icehouse upgrade

2014-12-03 Thread Alan Pevec
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1396689

Title:
  Nova ignores configs, uses wrong driver/protocol for AMQP from Havana
  -> Icehouse upgrade

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  This is related to an upgrade from Havana -> Icehouse.

  It seems that nova-compute is passing the wrong protocol header to rabbitmq, 
which results in nova-compute services never starting up. This is despite the 
fact that the nova.conf driver for messaging is:

 rpc_backend=nova.openstack.common.rpc.impl_kombu

  nova-compute logs the following and never starts up:

 2014-11-26 14:15:12.817 8585 ERROR oslo.messaging._drivers.impl_qpid
 [-] Unable to connect to AMQP server: client: 0-10, server: 9-1.
 Sleeping 5 seconds

  This implies that the wrong driver is being used (hence the protocol
  mismatch causing the connection termination?)

  We see that the client (compute) wants to send protocol id major 1, protocol
  id minor 1, version major 0, version minor 10, which rabbitmq doesn't
  like (controller).

  =ERROR REPORT==== 26-Nov-2014::01:32:33 ===
  closing AMQP connection <0.2090.0> (192.168.0.7:50588 ->
  192.168.0.222:5672)
  {bad_version,{1,1,0,10}}

  The configs are clean and are not telling it to use qpid, so we
  believe there may be some protocol field buried in the DB somewhere,
  still in place from Havana/QPID, that it's referencing.

  We believe that somewhere in the DB it's referencing some kind of protocol 
field
  and it's being used regardless of what is in the config files or the driver 
it's using.
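
  For reference, a sketch of the Icehouse-era setting we expected to
  take effect (oslo.messaging accepts the short driver name, and the
  old impl_kombu class path is supposed to be aliased to it):

    [DEFAULT]
    rpc_backend = rabbit
    rabbit_host = 192.168.0.222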

  Of note, it also references the old controller IP address which is not
  in the configs so we've brought up a virtual interface with that old IP
  (192.168.0.222) to get past it for now.

  Attached is a TCPDUMP of the activity.
  We have been able to run successful db_sync for nova, neutron,
  keystone, glance, cinder during the upgrade; however, this is blocking
  nova from starting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1396689/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397218] Re: NoVNC can't work with error:No child processes

2014-12-03 Thread Joe Gordon
This doesn't sound like a nova bug per se; this may be a packaging
issue?

https://github.com/kanaka/websockify/issues/101

** Changed in: nova
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1397218

Title:
  NoVNC can't work with error:No child processes

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Using Juno nova, the NoVNC service couldn't work. Here is the error
  log:

  2014-11-28T11:32:01+00:00 localhost nova-novncproxy DEBUG [pid:28971]
  [MainThread] [websocket.py:824 vmsg] exception Traceback (most recent
  call last):   File /usr/lib64/python2.6/site-
  packages/websockify/websocket.py, line 939, in start_server
  child_count = len(multiprocessing.active_children())   File
  /usr/lib64/python2.6/multiprocessing/process.py, line 43, in
  active_children _cleanup()   File
  /usr/lib64/python2.6/multiprocessing/process.py, line 53, in
  _cleanup if p._popen.poll() is not None:   File
  /usr/lib64/python2.6/multiprocessing/forking.py, line 106, in poll
  pid, sts = os.waitpid(self.pid, flag) OSError: [Errno 10] No child
  processes

  The version of websockify is python-websockify-0.5.1-2.2 in my SUSE
  environment.

  Is there any idea about this issue?
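
  For what it's worth, a sketch of the kind of guard the upstream
  websockify issue discusses (illustrative, not the actual patch):
  treat a waitpid race as "no children" instead of crashing.

    import errno
    import multiprocessing

    def count_children():
        try:
            return len(multiprocessing.active_children())
        except OSError as e:
            # waitpid can race with SIGCHLD reaping and raise ECHILD
            # (errno 10, as in the log above).
            if e.errno != errno.ECHILD:
                raise
            return 0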

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1397218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397794] Re: Delete instance failed, the task_state stays on deleting and cannot revert to none, because the InvalidBDMVolume exception is not handled.

2014-12-03 Thread Joe Gordon
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Status: New = Confirmed

** Changed in: nova
   Status: New = Incomplete

** Changed in: nova
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1397794

Title:
  Delete instance failed, the task_state stays on deleting and cannot
  revert to none, because the InvalidBDMVolume exception is not
  handled.

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) icehouse series:
  Confirmed

Bug description:
  1. Delete instance failed; the task_state stays on deleting and
  cannot revert to none.

  [root@opencos_114_222 ~(keystone_admin)]# nova list
  +--------------------------------------+------+--------+------------+-------------+------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks         |
  +--------------------------------------+------+--------+------------+-------------+------------------+
  | bc177a3d-edf8-4d24-b991-f945a2a44389 | 1    | ERROR  | deleting   | Running     | net1=192.168.0.3 |
  +--------------------------------------+------+--------+------------+-------------+------------------+

  2. The log is as follows:
  2014-12-01 18:43:26.319 3817 INFO nova.compute.manager [-] [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389] Service started deleting the instance 
during the previous run, but did not finish. Restarting the deletion now.
  2014-12-01 18:43:26.596 3817 AUDIT nova.compute.manager 
[req-dc183820-078a-4098-a7a2-50bbd598e90d None None] [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389] Terminating instance
  2014-12-01 18:43:26.619 3817 INFO nova.virt.libvirt.driver [-] [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389] Instance destroyed successfully.
  2014-12-01 18:43:27.773 3817 ERROR nova.compute.manager [-] [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389] Failed to complete a deletion
  2014-12-01 18:43:27.773 3817 TRACE nova.compute.manager [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389] Traceback (most recent call last):
  2014-12-01 18:43:27.773 3817 TRACE nova.compute.manager [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 886, in 
_init_instance
  2014-12-01 18:43:27.773 3817 TRACE nova.compute.manager [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389] self._delete_instance(context, 
instance, bdms, quotas)
  2014-12-01 18:43:27.773 3817 TRACE nova.compute.manager [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389]   File 
/usr/lib/python2.7/site-packages/nova/hooks.py, line 103, in inner
  2014-12-01 18:43:27.773 3817 TRACE nova.compute.manager [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389] rv = f(*args, **kwargs)
  2014-12-01 18:43:27.773 3817 TRACE nova.compute.manager [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 2312, in 
_delete_instance
  2014-12-01 18:43:27.773 3817 TRACE nova.compute.manager [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389] quotas.rollback()
  2014-12-01 18:43:27.773 3817 TRACE nova.compute.manager [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389]   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__
  2014-12-01 18:43:27.773 3817 TRACE nova.compute.manager [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389] six.reraise(self.type_, self.value, 
self.tb)
  2014-12-01 18:43:27.773 3817 TRACE nova.compute.manager [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 2284, in 
_delete_instance
  2014-12-01 18:43:27.773 3817 TRACE nova.compute.manager [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389] self._shutdown_instance(context, 
db_inst, bdms)
  2014-12-01 18:43:27.773 3817 TRACE nova.compute.manager [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line , in 
_shutdown_instance
  2014-12-01 18:43:27.773 3817 TRACE nova.compute.manager [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389] requested_networks)
  2014-12-01 18:43:27.773 3817 TRACE nova.compute.manager [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389]   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__
  2014-12-01 18:43:27.773 3817 TRACE nova.compute.manager [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389] six.reraise(self.type_, self.value, 
self.tb)
  2014-12-01 18:43:27.773 3817 TRACE nova.compute.manager [instance: 
bc177a3d-edf8-4d24-b991-f945a2a44389]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 2212, in 
_shutdown_instance
  2014-12-01 18:43:27.773 3817 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1318721] Re: RPC timeout in all neutron agents

2014-12-03 Thread Mehdi Abaakouk
Fixed by: https://review.openstack.org/#/c/103157/

** Changed in: neutron
   Status: In Progress => Invalid

** Changed in: oslo.messaging
 Assignee: Dr. Jens Rosenboom (j-rosenboom-j) => mouad.benchchaoui 
(mouad-benchchaoui)

** Changed in: neutron
 Assignee: Dr. Jens Rosenboom (j-rosenboom-j) => (unassigned)

** Changed in: oslo.messaging
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1318721

Title:
  RPC timeout in all neutron agents

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in Messaging API for OpenStack:
  In Progress

Bug description:
  In the logs the first traceback that happens is this:

  [-] Unexpected exception occurred 1 time(s)... retrying.
  Traceback (most recent call last):
File 
/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/excutils.py,
 line 62, in inner_func
  return infunc(*args, **kwargs)
File 
/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py,
 line 741, in _consumer_thread
   
File 
/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py,
 line 732, in consume
  @excutils.forever_retry_uncaught_exceptions
File 
/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py,
 line 660, in iterconsume
  try:
File 
/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py,
 line 590, in ensure
  def close(self):
File 
/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py,
 line 531, in reconnect
  # to return an error not covered by its transport
File 
/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py,
 line 513, in _connect
  Will retry up to self.max_retries number of times.
File 
/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py,
 line 150, in reconnect
  use the callback passed during __init__()
File 
/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/kombu/entity.py, 
line 508, in declare
  self.queue_bind(nowait)
File 
/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/kombu/entity.py, 
line 541, in queue_bind
  self.binding_arguments, nowait=nowait)
File 
/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/kombu/entity.py, 
line 551, in bind_to
  nowait=nowait)
File 
/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/amqp/channel.py, 
line 1003, in queue_bind
  (50, 21),  # Channel.queue_bind_ok
File 
/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/amqp/abstract_channel.py,
 line 68, in wait
  return self.dispatch_method(method_sig, args, content)
File 
/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/amqp/abstract_channel.py,
 line 86, in dispatch_method
  return amqp_method(self, args)
File 
/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/amqp/channel.py, 
line 241, in _close
  reply_code, reply_text, (class_id, method_id), ChannelError,
  NotFound: Queue.bind: (404) NOT_FOUND - no exchange 
'reply_8f19344531b448c89d412ee97ff11e79' in vhost '/'

  
  Then an RPC Timeout is raised each second in all the agents:

  ERROR neutron.agent.l3_agent [-] Failed synchronizing routers 
  TRACE neutron.agent.l3_agent Traceback (most recent call last):
  TRACE neutron.agent.l3_agent   File 
/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/agent/l3_agent.py,
 line 702, in _rpc_loop
  TRACE neutron.agent.l3_agent self.context, router_ids)
  TRACE neutron.agent.l3_agent   File 
/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/agent/l3_agent.py,
 line 79, in get_routers
  TRACE neutron.agent.l3_agent topic=self.topic)
  TRACE neutron.agent.l3_agent   File 
/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/proxy.py,
 line 130, in call
  TRACE neutron.agent.l3_agent exc.info, real_topic, msg.get('method'))
  TRACE neutron.agent.l3_agent Timeout: Timeout while waiting on RPC response - 
topic: q-l3-plugin, RPC method: sync_routers info: unknown

  This actually makes the agents useless until they are all restarted.

  An analysis of what's going on is coming soon :)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1318721/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396456] Re: instance boot failed when restart host

2014-12-03 Thread Joe Gordon
Do you have more detail? How can we reproduce this bug?

** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New = Incomplete

** Changed in: nova/icehouse
   Status: New = Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1396456

Title:
  instance boot failed when restart host

Status in OpenStack Compute (Nova):
  Incomplete
Status in OpenStack Compute (nova) icehouse series:
  Incomplete

Bug description:
  When I reboot the host, the state of one instance becomes ERROR in the
  CLI (nova list).


  The log of the nova-compute service:
  2014-11-25 12:13:48.095 3848 DEBUG nova.compute.manager [-] [instance: 
7d7ec3f2-3709-4bbc-b278-849fd672d284] Current state is 4, state in DB is 1. 
_init_instance /usr/lib/python2.7/site-packages/nova/compute/manager.py:961

  But:
  2014-11-25 12:14:48.249 3848 ERROR nova.virt.libvirt.driver [-] An error 
occurred while trying to launch a defined domain with xml: <domain type='kvm'>
<name>instance-0006</name>
<uuid>7d7ec3f2-3709-4bbc-b278-849fd672d284</uuid>
...

  Check the log of libvirt:
  2014-11-25 04:14:18.997+: 2545: error : qemuMonitorOpenUnix:313 : monitor 
socket did not show up: No such file or directory

  Using stable-icehouse.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1396456/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398826] [NEW] xenapi: upload failures causing image to go active

2014-12-03 Thread John Garbutt
Public bug reported:

There was an attempt to stop some glance errors when we have upload failures 
here:
https://github.com/openstack/nova/commit/e039b036b5e9dbaff8b37f7ab22c209b71bdc182

However, sending the chunk terminator makes glance think that the failed
upload has completed.

We need to make sure when the upload fails, glance puts the image into
the failed state, not the active state.
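
As a rough sketch of the fix's shape (assuming a raw HTTP chunked-transfer
upload; the helper below is hypothetical, not the actual xenapi plugin code),
the zero-length terminating chunk should only go out when every data chunk
was sent successfully, so an aborted upload never looks complete to glance:

    import httplib

    def upload_image_chunks(host, path, headers, chunks):
        # Hypothetical helper: stream an image to glance using chunked
        # transfer encoding.
        conn = httplib.HTTPConnection(host)
        conn.putrequest('PUT', path)
        for name, value in headers.items():
            conn.putheader(name, value)
        conn.putheader('Transfer-Encoding', 'chunked')
        conn.endheaders()
        try:
            for chunk in chunks:
                # Each chunk is framed as <hex length>\r\n<data>\r\n.
                conn.send('%x\r\n%s\r\n' % (len(chunk), chunk))
        except Exception:
            # Close without the terminator: an incomplete chunked body
            # lets glance treat the upload as failed instead of active.
            conn.close()
            raise
        # Terminator only after a fully successful upload.
        conn.send('0\r\n\r\n')
        return conn.getresponse()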

** Affects: nova
 Importance: Medium
 Assignee: John Garbutt (johngarbutt)
 Status: In Progress


** Tags: xenserver

** Changed in: nova
   Importance: Undecided = Medium

** Changed in: nova
 Assignee: (unassigned) = John Garbutt (johngarbutt)

** Changed in: nova
   Status: New = In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398826

Title:
  xenapi: upload failures causing image to go active

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  There was an attempt to stop some glance errors when we have upload failures 
here:
  
https://github.com/openstack/nova/commit/e039b036b5e9dbaff8b37f7ab22c209b71bdc182

  However, sending the chunk terminator makes glance think that the
  failed upload has completed.

  We need to make sure when the upload fails, glance puts the image into
  the failed state, not the active state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1398826/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391694] Re: Warning message about missing policy.d folder during Sahara start

2014-12-03 Thread Davanum Srinivas (DIMS)
** Also affects: oslo-incubator
   Importance: Undecided
   Status: New

** Changed in: oslo-incubator
   Status: New = Fix Committed

** Changed in: oslo-incubator
   Importance: Undecided = Low

** Changed in: oslo-incubator
 Assignee: (unassigned) = Davanum Srinivas (DIMS) (dims-v)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1391694

Title:
  Warning message about missing policy.d folder during Sahara start

Status in OpenStack Compute (Nova):
  Confirmed
Status in The Oslo library incubator:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Confirmed

Bug description:
  2014-11-11 16:14:05.786 403 WARNING sahara.openstack.common.policy [-]
  Can not find policy directories policy.d

  Example: https://sahara.mirantis.com/logs/31/133131/2/check/gate-
  sahara-integration-vanilla-1/9ca6d41/console.html

  The policy library from oslo searches for policy files in the directories
  specified by the 'policy_dirs' parameter and warns if a directory doesn't
  exist. The default value is ['policy.d'].
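
  As an illustration of where the warning comes from (the function below is
  a sketch of the behaviour described above, not the library's code), each
  entry in policy_dirs is probed and a warning is logged when it is missing:

      import logging
      import os

      LOG = logging.getLogger(__name__)

      def find_policy_dirs(config_dir, policy_dirs=('policy.d',)):
          # Resolve each configured directory relative to the config
          # directory; warn when it does not exist.
          existing = []
          for d in policy_dirs:
              path = os.path.join(config_dir, d)
              if os.path.isdir(path):
                  existing.append(path)
              else:
                  LOG.warning('Can not find policy directories %s', d)
          return existing

  So creating an empty policy.d directory next to the config files, or
  setting policy_dirs to an empty list, should make the warning go away.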

  Need to check what other projects do about this. I have never seen
  such warnings in other openstack projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1391694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397489] Re: VM boot failure since nova to neutron port notification fails

2014-12-03 Thread Joe Gordon
It looks like this is a neutron or keystone failure of some sort, as
neutron is saying:

 Max retries exceeded with url: /v2.0/tokens (Caused by <class
'socket.gaierror'>: [Errno -9] Address family for hostname not
supported)

and /v2.0/tokens is served by keystone. So I don't think this is a nova
bug. What is your localrc file for devstack, so that we can reproduce
the bug?

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1397489

Title:
  VM boot failure since nova to neutron port notification fails

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When I run the latest devstack and use nova boot to create a VM, it
  fails.

  'nova show' on the VM shows: message: "Build of instance cb509a04-ca8a-
  491f-baf1-be01b15f4946 aborted: Failed to allocate the network(s), not
  rescheduling.", code: 500, details: File
  "/opt/stack/nova/nova/compute/manager.py", line 2030, in
  _do_build_and_run_instance

  and the following error in the nova-compute.log:

  Traceback (most recent call last):
File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1714, 
in _spawn
  block_device_info)
File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
2266, in spawn
  block_device_info)
File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
3681, in _create_domain_and_network
  raise exception.VirtualInterfaceCreateException()
  VirtualInterfaceCreateException: Virtual Interface creation failed

  Adding vif_plugging_is_fatal = False and vif_plugging_timeout = 5
  to the compute nodes stops the missing message from being fatal and
  guests can then be spawned normally and accessed over the network.

  The bug https://bugs.launchpad.net/nova/+bug/1348103 says it happened
  in a cells environment, but it doesn't happen only in cells
  environments. This problem deserves more attention.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1397489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398839] [NEW] Avoid unnecessary explicit str() conversion around exceptions

2014-12-03 Thread Ann Kamyshnikova
Public bug reported:

There are a number of places like

except Exception as exc:
    LOG.error("Failed to get network: %s", str(exc))

where str() is not needed since %s substitution already does the same
conversion.
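
A minimal demonstration of the redundancy: the %s conversion already applies
str() to its argument, so both forms below produce identical output.

    class NetworkError(Exception):
        pass

    exc = NetworkError('boom')

    # Identical results; the second form simply drops the redundant call.
    assert ('Failed to get network: %s' % str(exc) ==
            'Failed to get network: %s' % exc)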

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) = Ann Kamyshnikova (akamyshnikova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1398839

Title:
  Avoid unnecessary explicit str() conversion around exceptions

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  There are a number of places like

  except Exception as exc:
      LOG.error("Failed to get network: %s", str(exc))

  where str() is not needed since %s substitution already does the same
  conversion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1398839/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295443] Re: Handle unicode exception messages

2014-12-03 Thread Ann Kamyshnikova
I've analyzed the codebase and came to the conclusion that this bug is
invalid for Neutron. Places where we have str() conversion around exceptions
divide into two categories:
- where stringified exceptions are checked for equality with, or inclusion
of, literal strings which are of str type in both Python 2.x and 3.x;
converting them to unicode would only add one more implicit unicode-str
conversion step;
- where exceptions are immediately passed as arguments to the string
formatting operator; in these cases explicit str() conversion is not needed
since %s substitution already does the same conversion.
Both of these cases need no special unicode handling. For the second case
I've filed a bug here: https://bugs.launchpad.net/neutron/+bug/1398839
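
A short illustration of the two categories (representative call sites only,
not actual Neutron code):

    import logging

    LOG = logging.getLogger(__name__)

    def do_something():
        raise ValueError('resource already exists')

    # Category 1: the stringified exception is checked against a str
    # literal; both sides are already str on Python 2, so forcing
    # unicode would only add an implicit conversion step.
    try:
        do_something()
    except Exception as exc:
        if 'already exists' in str(exc):
            pass  # tolerated duplicate

    # Category 2: the exception goes straight into %s formatting, which
    # performs the str() conversion itself, so str(exc) is redundant.
    try:
        do_something()
    except Exception as exc:
        LOG.error('Operation failed: %s', exc)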

** Changed in: neutron
   Status: Triaged = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1295443

Title:
  Handle unicode exception messages

Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Invalid
Status in Python client library for Neutron:
  In Progress

Bug description:
  There are many places in the code that explicitly ask for byte
  strings while using exception messages. They should be able to work
  with unicode strings and use them in a way that is PY3 compatible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1295443/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398845] [NEW] RBAC not preventing a creation of subnet via creation of new network

2014-12-03 Thread Roey Dekel
Public bug reported:

Changing create_subnet to role:admin in neutron_policy.json does not
prevent a non-admin user from creating a new subnet while creating a
new network (the new network button).

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: rbac

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1398845

Title:
  RBAC not preventing a creation of subnet via creation of new network

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Changing create_subnet to role:admin in neutron_policy.json does not
  prevent a non-admin user from creating a new subnet while creating a
  new network (the new network button).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1398845/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393633] Re: test_postgresql_opportunistically fails with database openstack_citest is being accessed by other users

2014-12-03 Thread Victor Sergeyev
** Changed in: oslo.db
   Status: Fix Committed = Fix Released

** Changed in: oslo.db
Milestone: next-juno = 1.1.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1393633

Title:
  test_postgresql_opportunistically fails with database
  openstack_citest is being accessed by other users

Status in OpenStack Compute (Nova):
  Fix Committed
Status in Oslo Database library:
  Fix Released

Bug description:
  Looks like this was previously fixed under bug 1328997 but this is
  back:

  http://logs.openstack.org/72/135072/1/check/gate-nova-
  python27/ba44ca9/console.html#_2014-11-17_22_51_24_244

  2014-11-17 22:51:24.244 | Captured traceback:
  2014-11-17 22:51:24.244 | ~~~
  2014-11-17 22:51:24.244 | Traceback (most recent call last):
  2014-11-17 22:51:24.244 |   File nova/tests/unit/db/test_migrations.py, 
line 138, in test_postgresql_opportunistically
  2014-11-17 22:51:24.245 | self._test_postgresql_opportunistically()
  2014-11-17 22:51:24.245 |   File nova/tests/unit/db/test_migrations.py, 
line 429, in _test_postgresql_opportunistically
  2014-11-17 22:51:24.245 | self._reset_database(database)
  2014-11-17 22:51:24.245 |   File nova/tests/unit/db/test_migrations.py, 
line 336, in _reset_database
  2014-11-17 22:51:24.245 | self._reset_pg(conn_pieces)
  2014-11-17 22:51:24.245 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo/concurrency/lockutils.py,
 line 311, in inner
  2014-11-17 22:51:24.245 | return f(*args, **kwargs)
  2014-11-17 22:51:24.245 |   File nova/tests/unit/db/test_migrations.py, 
line 245, in _reset_pg
  2014-11-17 22:51:24.245 | self.execute_cmd(droptable)
  2014-11-17 22:51:24.245 |   File nova/tests/unit/db/test_migrations.py, 
line 228, in execute_cmd
  2014-11-17 22:51:24.245 | Failed to run: %s\n%s % (cmd, output))
  2014-11-17 22:51:24.246 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 348, in assertEqual
  2014-11-17 22:51:24.246 | self.assertThat(observed, matcher, message)
  2014-11-17 22:51:24.246 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 433, in assertThat
  2014-11-17 22:51:24.246 | raise mismatch_error
  2014-11-17 22:51:24.246 | MismatchError: !=:
  2014-11-17 22:51:24.246 | reference = ''
  2014-11-17 22:51:24.246 | actual= u'''\
  2014-11-17 22:51:24.246 | Unexpected error while running command.
  2014-11-17 22:51:24.246 | Command: psql -w -U openstack_citest -h 
localhost -c 'drop database if exists openstack_citest;' -d postgres
  2014-11-17 22:51:24.246 | Exit code: 1
  2014-11-17 22:51:24.246 | Stdout: u''
  2014-11-17 22:51:24.247 | Stderr: u'ERROR:  database openstack_citest 
is being accessed by other users\\nDETAIL:  There is 1 other session using the 
database.\\n\
  2014-11-17 22:51:24.247 | : Failed to run: psql -w -U openstack_citest -h 
localhost -c 'drop database if exists openstack_citest;' -d postgres
  2014-11-17 22:51:24.247 | Unexpected error while running command.
  2014-11-17 22:51:24.247 | Command: psql -w -U openstack_citest -h 
localhost -c 'drop database if exists openstack_citest;' -d postgres
  2014-11-17 22:51:24.247 | Exit code: 1
  2014-11-17 22:51:24.247 | Stdout: u''
  2014-11-17 22:51:24.247 | Stderr: u'ERROR:  database openstack_citest 
is being accessed by other users\nDETAIL:  There is 1 other session using the 
database.\n'
  2014-11-17 22:51:24.247 | Traceback (most recent call last):
  2014-11-17 22:51:24.247 | _StringException: Empty attachments:
  2014-11-17 22:51:24.247 |   pythonlogging:''
  2014-11-17 22:51:24.247 |   stderr
  2014-11-17 22:51:24.248 |   stdout

  
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQ29tbWFuZDogcHNxbCAtdyAtVSBvcGVuc3RhY2tfY2l0ZXN0IC1oIGxvY2FsaG9zdCAtYyAnZHJvcCBkYXRhYmFzZSBpZiBleGlzdHMgb3BlbnN0YWNrX2NpdGVzdDsnIC1kIHBvc3RncmVzXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIgQU5EIGJ1aWxkX25hbWU6XCJnYXRlLW5vdmEtcHl0aG9uMjdcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQxNjI3NTg1MDI4MSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  516 hits in 7 days, check and gate, all failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1393633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340702] Re: Missing of instance action record for start of live migration

2014-12-03 Thread Joe Gordon
https://review.openstack.org/#/c/95440/ merged

** Changed in: nova
   Status: In Progress = Fix Committed

** Changed in: nova
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340702

Title:
  Missing of instance action record for start of live migration

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Nova has the capability of recording instance actions like start/stop,
  migrate, etc. Using the command 'nova instance-action-list', we can
  check an instance's operation history.

  $nova help instance-action-list
  usage: nova instance-action-list server

  List actions on a server.

  Positional arguments:
    server  Name or UUID of the server to list actions for.

  [root@node-1 nova]# nova stop gcb
  [root@node-1 nova]# nova start gcb
  [root@node-1 nova]# nova instance-action-list gcb
  
++--+-++
  | Action | Request_ID   | Message | Start_Time
 |
  
++--+-++
  | create | req-97994c65-de56-4272-a985-bb061beb389d | -   | 
2014-07-15T11:36:54.00 |
  | stop   | req-10e7519c-f82c-4d30-a193-c771e3046a59 | -   | 
2014-07-16T01:45:34.00 |
  | start  | req-c2083909-0a79-4ee7-b064-e6bc05eb52b7 | -   | 
2014-07-16T02:29:48.00 |
  
++--+-++

  
   We also need to record the live migration action.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340702/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1388077] Re: Parallel periodic instance power state reporting from compute nodes has high impact on conductors and message broker

2014-12-03 Thread James Page
** Also affects: nova (Ubuntu Vivid)
   Importance: Undecided
   Status: In Progress

** Also affects: nova (Ubuntu Utopic)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1388077

Title:
  Parallel periodic instance power state reporting from compute nodes
  has high impact on conductors and message broker

Status in OpenStack Compute (Nova):
  In Progress
Status in nova package in Ubuntu:
  In Progress
Status in nova source package in Utopic:
  New
Status in nova source package in Vivid:
  In Progress

Bug description:
  Environment: OpenStack Juno release/Ubuntu 14.04/480 compute nodes/8
  cloud controllers/40,000 instances +

  The change made in:

  
https://github.com/openstack/nova/commit/baabab45e0ae0e9e35872cae77eb04bdb5ee0545

  switches power state reporting from being a serial process for each
  instance on a hypervisor to being a parallel thread for every
  instance; for clouds running high instance counts, this has quite an
  impact on the conductor processes as they try to deal with N instance
  refresh calls in parallel where N is the number of instances running
  on the cloud.

  It might be better to throttle this to a configurable parallel level
  so that periodic RPC load can be managed effectively in a larger cloud,
  or to continue to do this process in series but outside of the main
  thread.
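
  As a minimal sketch of the throttling idea (assuming an eventlet-based
  worker; the pool-size knob is made up for illustration and would need to
  be registered as a proper config option):

      import eventlet

      # Hypothetical option: maximum concurrent per-instance refreshes.
      sync_power_state_pool_size = 10

      def sync_power_states(instances, query_driver_power_state):
          # Bound the concurrency so a cloud with tens of thousands of
          # instances does not hit the conductors with one parallel RPC
          # call per instance at the same time.
          pool = eventlet.GreenPool(size=sync_power_state_pool_size)
          for instance in instances:
              pool.spawn_n(query_driver_power_state, instance)
          pool.waitall()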

  The net result of this activity is that it places increase demands on
  the message broker, which has to deal with more parallel connections,
  and the conductors as they try to consume all of the RPC requests; if
  the message broker hits its memory high water mark it will stop
  publishers publishing any more messages until the memory usage drops
  below the high water mark again - this might not be achievable if all
  conductor processes are tied up with existing RPC calls trying to send
  replies, resulting in a message broker lockup and collapse of all RPC
  in the cloud.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1388077/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398865] [NEW] DVR code can add duplicate routing rules

2014-12-03 Thread Brian Haley
Public bug reported:

The IPRule code in ip_lib.py doesn't check if a rule already exists, so
it could add a duplicate on agent restart.  For example:

$ sudo ip netns exec qrouter-46460e86-ef11-46fb-8d27-da94435dfcc9 ip rule show
0:  from all lookup local 
32766:  from all lookup main 
32767:  from all lookup default 
167772161:  from 10.0.0.1/24 lookup 167772161 
167772161:  from 10.0.0.1/24 lookup 167772161

It should check first and not add anything if one is already there as
there is no 'replace' option like the routing table has (which will
either update or add).

DVR is currently the only consumer of this code.
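
A sketch of the check-before-add behaviour this asks for (helper names are
hypothetical; the real fix would live in the IPRule code in ip_lib.py):

    import subprocess

    def _rule_exists(namespace, rule):
        # 'ip rule' has no 'replace' option, so list the existing rules
        # and compare before adding (naive textual match, for
        # illustration only).
        out = subprocess.check_output(
            ['ip', 'netns', 'exec', namespace, 'ip', 'rule', 'show'])
        return rule in out

    def add_rule_idempotently(namespace, ip, table):
        rule = 'from %s lookup %s' % (ip, table)
        if _rule_exists(namespace, rule):
            return  # avoid the duplicate seen on agent restart
        subprocess.check_call(
            ['ip', 'netns', 'exec', namespace,
             'ip', 'rule', 'add', 'from', ip, 'table', str(table)])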

** Affects: neutron
 Importance: Undecided
 Assignee: Brian Haley (brian-haley)
 Status: New


** Tags: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) = Brian Haley (brian-haley)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1398865

Title:
  DVR code can add duplicate routing rules

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The IPRule code in ip_lib.py doesn't check if a rule already exists,
  so it could add a duplicate on agent restart.  For example:

  $ sudo ip netns exec qrouter-46460e86-ef11-46fb-8d27-da94435dfcc9 ip rule show
  0:  from all lookup local 
  32766:  from all lookup main 
  32767:  from all lookup default 
  167772161:  from 10.0.0.1/24 lookup 167772161 
  167772161:  from 10.0.0.1/24 lookup 167772161

  It should check first and not add anything if one is already there as
  there is no 'replace' option like the routing table has (which will
  either update or add).

  DVR is currently the only consumer of this code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1398865/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1395236] Re: nova compute

2014-12-03 Thread Joe Gordon
this sounds like a user error and not a nova bug

** Changed in: nova
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1395236

Title:
  nova compute

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  My /etc/nova/nova.conf file
  [root@hd-slave nova]# cat /etc/nova/nova.conf |grep ^[a-z]
  rabbit_host=192.168.88.100
  rabbit_port=5672
  rabbit_hosts=$rabbit_host:$rabbit_port
  rabbit_use_ssl=false
  rabbit_userid=guest
  rabbit_password=guest
  rabbit_virtual_host=/
  rpc_backend=rabbit
  my_ip=192.168.88.100
  state_path=/var/lib/nova
  auth_strategy=keystone
  instances_path=$state_path/instances
  network_api_class=nova.network.neutronv2.api.API
  linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
  neutron_url=http://192.168.88.100:9696
  neutron_admin_username=admin
  neutron_admin_password=admin
  neutron_admin_tenant_id=ae36404bda69434ebb8edda06e2596d3
  neutron_admin_tenant_name=admin
  neutron_admin_auth_url=http://192.168.88.100:5000/v2.0
  neutron_auth_strategy=keystone
  security_group_api=neutron
  lock_path=/var/lib/nova/tmp
  debug=true
  verbose=true
  resize_fs_using_block_device=false
  compute_driver=libvirt.LibvirtDriver
  use_cow_images=false
  vif_plugging_is_fatal=false
  vif_plugging_timeout=10
  firewall_driver=nova.virt.libvirt.firewall.NoopFirewallDriver
  novncproxy_base_url=http://192.168.88.100:6080/vnc_auto.html
  vncserver_listen=0.0.0.0
  vncserver_proxyclient_address=192.168.88.101
  vnc_enabled=true
  vnc_keymap=en-us
  volume_api_class=nova.volume.cinder.API
  connection=mysql://nova:nova@192.168.88.100/nova
  auth_host=192.168.88.100
  auth_port=35357
  auth_protocol=http
  auth_uri=http://192.168.88.100:5000
  auth_version=v2.0
  admin_user=admin
  admin_password=admin
  admin_tenant_name=admin
  virt_type=qemu
  vif_driver=nova.virt.libvirt.vif.NeutronLinuxBridgeVIFDriver

  
  Error Information:

  [root@hd-slave nova]# nova-compute --config-file /etc/nova/nova.conf 
  2014-11-22 09:58:41.976 3701 DEBUG nova.servicegroup.api [-] ServiceGroup 
driver defined as an instance of db __new__ 
/usr/lib/python2.6/site-packages/nova/servicegroup/api.py:65
  2014-11-22 09:58:42.204 3701 INFO nova.openstack.common.periodic_task [-] 
Skipping periodic task _periodic_update_dns because its interval is negative
  2014-11-22 09:58:42.250 3701 DEBUG stevedore.extension [-] found extension 
EntryPoint.parse('file = nova.image.download.file') _load_plugins 
/usr/lib/python2.6/site-packages/stevedore/extension.py:156
  2014-11-22 09:58:42.266 3701 DEBUG stevedore.extension [-] found extension 
EntryPoint.parse('file = nova.image.download.file') _load_plugins 
/usr/lib/python2.6/site-packages/stevedore/extension.py:156
  2014-11-22 09:58:42.269 3701 INFO nova.virt.driver [-] Loading compute driver 
'libvirt.LibvirtDriver'
  2014-11-22 09:58:42.283 3701 ERROR nova.virt.driver [-] Unable to load the 
virtualization driver
  2014-11-22 09:58:42.283 3701 TRACE nova.virt.driver Traceback (most recent 
call last):
  2014-11-22 09:58:42.283 3701 TRACE nova.virt.driver   File 
/usr/lib/python2.6/site-packages/nova/virt/driver.py, line 1299, in 
load_compute_driver
  2014-11-22 09:58:42.283 3701 TRACE nova.virt.driver virtapi)
  2014-11-22 09:58:42.283 3701 TRACE nova.virt.driver   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py, line 
52, in import_object_ns
  2014-11-22 09:58:42.283 3701 TRACE nova.virt.driver return 
import_class(import_str)(*args, **kwargs)
  2014-11-22 09:58:42.283 3701 TRACE nova.virt.driver   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py, line 
33, in import_class
  2014-11-22 09:58:42.283 3701 TRACE nova.virt.driver 
traceback.format_exception(*sys.exc_info(
  2014-11-22 09:58:42.283 3701 TRACE nova.virt.driver ImportError: Class 
LibvirtDriver cannot be found (['Traceback (most recent call last):\n', '  File 
/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py, line 
29, in import_class\nreturn getattr(sys.modules[mod_str], class_str)\n', 
AttributeError: 'module' object has no attribute 'LibvirtDriver'\n])
  2014-11-22 09:58:42.283 3701 TRACE nova.virt.driver

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1395236/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 985489] Re: nova-compute stops processing compute.$HOSTNAME occasionally on libvirt

2014-12-03 Thread Joe Gordon
** Changed in: nova
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/985489

Title:
  nova-compute stops processing compute.$HOSTNAME occasionally on
  libvirt

Status in OpenStack Compute (Nova):
  Invalid
Status in nova package in Ubuntu:
  Confirmed

Bug description:
    root@novamanager:~# /usr/sbin/rabbitmqctl list_queues | awk '$1 ~
/^compute/ && $2 != 0 { print }'
compute.nodexyzzy   12

  Occasionally on canonistack, we find that a compute node simply stops
  processing its rabbit queues.  A check of the logs will show no nova-
  compute.log activity for hours, but a restart of nova-compute will
  cause it to check all the instances and then process all the requests
  in rabbit (usually lots of duplicates from frustrated users trying to
  re-send delete requests and get their quota back for another
  deployment).

  In fact, while I was typing this (having restarted nova-compute on
  nodexyzzy before starting), I re-ran the above command to find it now
  silent.

  This happens often enough (once every couple of days at least) but
  we're not sure of how to debug this.  Is there any information we can
  get you about a nova-compute process that is in this unhappy state?

  For the record, here is the last entry in the example node's nova-
  compute.log when I bounced things around 09:00Z:

  2012-04-19 06:35:35 DEBUG nova.virt.libvirt.connection [-]
  Updating host stats from (pid=3428) update_status /usr/lib/python2.7
  /dist-packages/nova/virt/libvirt/connection.py:2467

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/985489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1006878] Re: euca-authorize adds wrong rules for group-to-group rule

2014-12-03 Thread Joe Gordon
** Changed in: nova
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1006878

Title:
  euca-authorize adds wrong rules for group-to-group rule

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When I add a group-to-group rule, only TCP is allowed to pass.
  All traffic should be passed.

  
  # euca-add-group test1 -d test1
  GROUP   test1   test1

  # euca-add-group test2 -d test2
  GROUP   test2   test2

  # euca-authorize -o test1 test2
  GROUP   test2
  PERMISSION  test2   ALLOWS  tcp GRPNAME test1   FROM CIDR
0.0.0.0/0

  
  # euca-describe-groups
  GROUP   2fa3fa776ca346ba86e130720ddc94c9default default
  GROUP   2fa3fa776ca346ba86e130720ddc94c9test1   test1
  GROUP   2fa3fa776ca346ba86e130720ddc94c9test2   test2
  PERMISSION  2fa3fa776ca346ba86e130720ddc94c9test2   ALLOWS  tcp   
  1   65535   GRPNAME test1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1006878/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1037750] Re: --flat_network_dhcp_start argument is being ignored

2014-12-03 Thread Joe Gordon
no response, closing the bug.

** Changed in: nova
   Status: Incomplete = Invalid

** Changed in: nova
   Status: Invalid = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1037750

Title:
  --flat_network_dhcp_start argument is being ignored

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  My first bug report, so please be kind in case this isn't actually a
  bug.  (I tried searching and also talking to the #openstack channel
  before submitting this.)

  I'm running Ubuntu Server 12.04 along with OpenStack (see version of
  packages at the bottom).

  I have Nova configured to use Flat DHCP networking in Nova.conf:

  # network specific settings
  --network_manager=nova.network.manager.FlatDHCPManager
  --public_interface=eth0
  --flat_interface=eth1
  --flat_network_bridge=br100
  --fixed_range=10.0.118.128/27
  --network_size=32
  --flat_network_dhcp_start=10.0.118.132
  --flat_injected=False
  --force_dhcp_release

  br0 is coming up as 10.0.118.129
  10.0.118.130 is not supposed to be used (I wanted to reserve it)
  eth1 on the controller is getting interface IP 10.0.118.131.  
  So, I want DHCP to start at 10.0.118.132.

  First problem: OpenStack is ignoring that --flat_network_dhcp_start
  argument.  The first IP it handed out was .130.  The second IP it
  handed out was .131.

  Second problem:  When OpenStack handed out .131, it crashed the
  routing on the cloud controller to where the VMs would not route any
  longer through the public interface (eth0) via the bridge (br100,
  .118.129).  Even after terminating the VM with the .131 IP address, it
  required a reboot of the cloud controller to fix the routing issue.

  It would be nice if the --flat_network_dhcp_start would work properly.
  In absence of that, could it be possible to manage the private IP
  addresses in the same way the floating public IPs are managed (i.e. do
  something like a nova-manage privateip delete xx.xx.xx.xx)

  According to a discussion in #openstack, a workaround for this is to
  update the fixed_ips table in the nova database to manually set
  reserved=1 for the IP Address.  However, that involves directly
  modifying the database.

  Thanks!!

  OpenStack versions installed below:

  ii  nova-api  
2012.1+stable~20120612-3ee026e-0ubuntu1.2  OpenStack Compute - API frontend
  ii  nova-cert 
2012.1+stable~20120612-3ee026e-0ubuntu1.2  OpenStack Compute - certificate 
management
  ii  nova-common   
2012.1+stable~20120612-3ee026e-0ubuntu1.2  OpenStack Compute - common files
  ii  nova-compute  
2012.1+stable~20120612-3ee026e-0ubuntu1.2  OpenStack Compute - compute node
  ii  nova-compute-kvm  
2012.1+stable~20120612-3ee026e-0ubuntu1.2  OpenStack Compute - compute node 
(KVM)
  ii  nova-consoleauth  
2012.1+stable~20120612-3ee026e-0ubuntu1.2  OpenStack Compute - Console 
Authenticator
  ii  nova-doc  
2012.1+stable~20120612-3ee026e-0ubuntu1.2  OpenStack Compute - documentation
  ii  nova-network  
2012.1+stable~20120612-3ee026e-0ubuntu1.2  OpenStack Compute - Network manager
  ii  nova-objectstore  
2012.1+stable~20120612-3ee026e-0ubuntu1.2  OpenStack Compute - object store
  ii  nova-scheduler
2012.1+stable~20120612-3ee026e-0ubuntu1.2  OpenStack Compute - virtual machine 
scheduler
  ii  nova-volume   
2012.1+stable~20120612-3ee026e-0ubuntu1.2  OpenStack Compute - storage
  ii  python-nova   
2012.1+stable~20120612-3ee026e-0ubuntu1.2  OpenStack Compute Python libraries
  ii  python-novaclient 2012.1-0ubuntu1 
   client library for OpenStack Compute API

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1037750/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1152248] Re: Can't attach a Volume to a VM with NetApp NFS Direct Driver

2014-12-03 Thread Joe Gordon
no response in a while; assuming this isn't an issue anymore, closing the
bug.

** Changed in: nova
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1152248

Title:
  Can't attach a Volume to a VM with NetApp NFS Direct Driver

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  OS : Debian Wheezy
  Nova version : 2.11.1.30.g8ac304f
  OpenStack installed with DevStack.

  I'm using a NetApp system for storing Volumes provided by NFS and
  expose volumes to VMs with NFS too.

  === Scenario

  cinder.conf :
  (...)

  nfs_shares_config = /etc/cinder/shares.conf
  nfs_mount_point_base = /etc/cinder/volumes

  volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver
  netapp_server_hostname = 10.X.X.X
  netapp_server_port = 80
  netapp_login = admin
  netapp_password = secrete
  netapp_transport_type = http
  netapp_vserver = vs0

  1) I can start cinder-volume and see that it mounts my NFS share (hosted on 
NetApp) :
  mount :
  10.X.X.X:/voltest on /etc/cinder/volumes/44a3ed2e30a8aaf14a2ba926457f5666 
type nfs4 
(rw,relatime,vers=4,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.X.X.X,minorversion=0,local_lock=none,addr=10.X.X.X)

  2) I can create a volume :
  
+--+---+--+--+-+--+-+
  |  ID  |   Status  | Display Name | Size | 
Volume Type | Bootable | Attached to |
  
+--+---+--+--+-+--+-+
  | f2aae196-e6e4-4906-9f3f-153909c52f95 | available | None |  1   |
 None|  false   | |
  
+--+---+--+--+-+--+-+

  3) Now I want to attach the volume to my VM and it fails.

  Here is my nova-compute.log TRACE :

  http://paste.openstack.org/show/5Xuo2ECF4O9cZt5unDOR/

  It fails.

  4) I want to attach the volume manually with virsh tool :
  attach-volume instance-001 --source  --target vdb

  And it works !  I can manage the volume from the VM and put some
  files.

  === Investigation

  After investigation, I can see that the bug appears in Nova, here :
  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L888-912

  My question: is it normal that we don't have a connector for NFS? I only
  see iSCSI and FC.
  How is the mount performed?

  Other interesting point, it works with standard NFS driver. I can
  create and attach volumes to VMs.

  Let me know if you need more information; I would like to know if
  it's a bug or if I missed something in my configuration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1152248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1282232] Re: The error message when adding an invalid Router is not formatted

2014-12-03 Thread Julie Pichon
This should be fixed by the Neutron client update, closing on the
Horizon side.

** Changed in: horizon
   Status: In Progress = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1282232

Title:
  The error message when adding an invalid Router is not formatted

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in Python client library for Neutron:
  Fix Released

Bug description:
  I added an invalid Router (external network without a subnet). The problem
  is in the displayed message:
  Error: Failed to set gateway 400-{u'NeutronError': {u'message': u'Bad
  Router request: No subnets defined on network 47..', u'type':
  u'BadRequest', u'detail': u''}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1282232/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1165275] Re: Periodic tasks should dynamically adjust the interval with which they run

2014-12-03 Thread Joe Gordon
** Changed in: nova
   Status: Incomplete = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1165275

Title:
  Periodic tasks should dynamically adjust the interval with which they
  run

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Right now, when a periodic task runs and hits no data / no instances /
  not-implemented errors..., the same periodic task will be rescheduled
  to run in the future, but it will still typically fail in the same
  manner next time as well (no instances, not-implemented errors), so for
  these cases we should dynamically adjust the periodic task interval in
  a smarter manner.
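
  One possible smarter policy is an exponential backoff while the task
  keeps finding nothing to do (a sketch only; the names are illustrative,
  not the actual periodic_task plumbing):

      def next_interval(base, current, did_work, maximum=600):
          # Back off while the task keeps coming up empty (no data, no
          # instances, NotImplementedError...), and snap back to the
          # base interval as soon as it does real work again.
          if did_work:
              return base
          return min(current * 2, maximum)

      # e.g. a 10s task that keeps idling: 10 -> 20 -> 40 -> ... -> 600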

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1165275/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1286104] Re: Remove messages that show exceptions.

2014-12-03 Thread Julie Pichon
There were disagreements about the direction of the patch... Since this
should be covered and hopefully improved by the blueprint mentioned in
the description, and the patch author is also the reporter, I'm going to
close this for now. Feel free to reopen if you disagree!

** Changed in: horizon
   Status: In Progress = Invalid

** Changed in: horizon
   Status: Invalid = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1286104

Title:
  Remove messages that show exceptions.

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  Some dashboards still show exceptions to the user.

  Until  https://blueprints.launchpad.net/horizon/+spec/improve-error-
  message-details-for-usability gets in-place, better to remove them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1286104/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1383973] Re: image data cannot be removed when deleting a saving status image

2014-12-03 Thread Jeremy Stanley
** No longer affects: ossa

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1383973

Title:
  image data cannot be removed when deleting a saving status image

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The image data in /var/lib/glance/images/ cannot be removed when I
  delete an image whose status is saving.

  1. create a image
   glance image-create --name image-v1 --disk-format raw --container-format 
bare --file xx.image --is-public true

  2. list the created image, the status is saving
  [root@node2 ~]# glance image-list
  
+--+--+-+--+--++
  | ID   | Name | Disk Format | Container 
Format | Size | Status |
  
+--+--+-+--+--++
  | 00ec3d8d-41a5-4f7c-9448-694099a39bcf | image-v1 | raw | bare
 | 18   | saving |
  
+--+--+-+--+--++

  3. delete the created image
  glance image-delete image-v1

  4. the image has been deleted but the image data still exists
  [root@node2 ~]# glance image-list
  ++--+-+--+--++
  | ID | Name | Disk Format | Container Format | Size | Status |
  ++--+-+--+--++
  ++--+-+--+--++

  [root@node2 ~]# ls /var/lib/glance/images
  00ec3d8d-41a5-4f7c-9448-694099a39bcf

  This problem exists in both v1 and v2 API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1383973/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254667] Re: host state get error

2014-12-03 Thread Joe Gordon
** Changed in: nova
   Status: Incomplete = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1254667

Title:
  host state get error

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  When I launch the openstack-nova-compute service, the service breaks
  while updating the status.  When the service tries to get the disk's
  size, it reads a file which no longer exists, so it gets the error. I
  moved the file '/root/folsom/iso/win7.img' to another directory before
  I installed OpenStack.

   the compute.log is here.

  2013-11-25 18:25:52.015 9557 DEBUG nova.virt.libvirt.driver [-] Updating host 
stats update_status /usr/lib/python2.6/site-p 
ackages/nova/virt/libvirt/driver.py:4803
  2068 2013-11-25 18:25:52.016 9557 DEBUG qpid.messaging.io.raw [-] 
SENT[43c5dd0]: '\x0f\x01\x00\x19\x00\x01\x00\x00\x00\x00\x00\x 
00\x04\n\x01\x00\x07\x00\x010\x00\x00\x00\x00\x01\x0f\x00\x00\x1a\x00\x00\x00\x00\x00\x00\x00\x00\x02\n\x01\x00\x00\x08\x00
 \x00\x00\x00\x00\x00\x00\t' writeable 
/usr/lib/python2.6/site-packages/qpid/messaging/driver.py:480
  2013-11-25 18:25:52.054 9557 ERROR nova.openstack.common.threadgroup [-] 
[Errno 13] Permission denied: '/root/folsom/iso/wi n7.img'
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/openstac k/common/threadgroup.py, 
line 117, in wait
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup 
x.wait()
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/openstac k/common/threadgroup.py, 
line 49, in wait
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/eventlet/gree nthread.py, line 166, in 
wait
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/eventlet/even t.py, line 116, in wait
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/eventlet/hubs /hub.py, line 177, in 
switch
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/eventlet/gree nthread.py, line 192, in 
main
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/openstac k/common/service.py, line 
65, in run_service
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup 
service.start()
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/service. py, line 164, in start
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup 
self.manager.pre_start_hook()
   2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/compute/ manager.py, line 796, in 
pre_start_hook
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup 
self.update_available_resource(nova.context.get_ad min_context())
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/compute/ manager.py, line 4860, in 
update_available_resource
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup 
nodenames = set(self.driver.get_available_nodes())
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/virt/dri ver.py, line 956, in 
get_available_nodes
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup 
stats = self.get_host_stats(refresh=refresh)
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/virt/lib virt/driver.py, line 4373, 
in get_host_stats
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup 
return self.host_state.get_host_stats(refresh=refr esh)
  2013-11-25 18:25:52.054 9557 TRACE nova.openstack.common.threadgroup   

[Yahoo-eng-team] [Bug 1262642] Re: VMware VC driver fails to spawn instance on vCenter 5.5

2014-12-03 Thread Joe Gordon
no feedback, closing

** Changed in: nova
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1262642

Title:
  VMware VC driver fails to spawn instance on vCenter 5.5

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Creating an instance using vCenter 5.5 fails with the following exception.
  The same works when vCenter 5.1 is used.
  The vmware driver used is Havana GA code with some backports of fixes from
  Icehouse.

  2013-12-19 12:19:22.782 31835 TRACE nova.compute.manager [instance: 
6932fe94-7300-4e3d-a9e7-c704462def18]   File 
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vmops.py, line 500, in 
spawn
  2013-12-19 12:19:22.782 31835 TRACE nova.compute.manager [instance: 
6932fe94-7300-4e3d-a9e7-c704462def18] _create_virtual_disk(upload_folder, 
vmdk_path)
  2013-12-19 12:19:22.782 31835 TRACE nova.compute.manager [instance: 
6932fe94-7300-4e3d-a9e7-c704462def18]   File 
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vmops.py, line 357, in 
_create_virtual_disk
  2013-12-19 12:19:22.782 31835 TRACE nova.compute.manager [instance: 
6932fe94-7300-4e3d-a9e7-c704462def18] 
self._session._wait_for_task(instance['uuid'], vmdk_create_task)
  2013-12-19 12:19:22.782 31835 TRACE nova.compute.manager [instance: 
6932fe94-7300-4e3d-a9e7-c704462def18]   File 
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 897, in 
_wait_for_task
  2013-12-19 12:19:22.782 31835 TRACE nova.compute.manager [instance: 
6932fe94-7300-4e3d-a9e7-c704462def18] ret_val = done.wait()
  2013-12-19 12:19:22.782 31835 TRACE nova.compute.manager [instance: 
6932fe94-7300-4e3d-a9e7-c704462def18]   File 
/usr/lib/python2.6/site-packages/eventlet/event.py, line 116, in wait
  2013-12-19 12:19:22.782 31835 TRACE nova.compute.manager [instance: 
6932fe94-7300-4e3d-a9e7-c704462def18] return hubs.get_hub().switch()
  2013-12-19 12:19:22.782 31835 TRACE nova.compute.manager [instance: 
6932fe94-7300-4e3d-a9e7-c704462def18]   File 
/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py, line 177, in switch
  2013-12-19 12:19:22.782 31835 TRACE nova.compute.manager [instance: 
6932fe94-7300-4e3d-a9e7-c704462def18] return self.greenlet.switch()
  2013-12-19 12:19:22.782 31835 TRACE nova.compute.manager [instance: 
6932fe94-7300-4e3d-a9e7-c704462def18] AttributeError: 'NoneType' object has no 
attribute 'name'
  2013-12-19 12:19:22.782 31835 TRACE nova.compute.manager [instance: 
6932fe94-7300-4e3d-a9e7-c704462def18]

  
  2013-12-19 12:19:22.873 31835 WARNING nova.virt.vmwareapi.driver [-] In 
vmwareapi:_poll_task, Got this error 'NoneType' object has no attribute 'name'
  2013-12-19 12:19:22.874 31835 ERROR nova.openstack.common.loopingcall [-] in 
fixed duration looping call
  2013-12-19 12:19:22.874 31835 TRACE nova.openstack.common.loopingcall 
Traceback (most recent call last):
  2013-12-19 12:19:22.874 31835 TRACE nova.openstack.common.loopingcall   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/loopingcall.py, line 
78, in _inner
  2013-12-19 12:19:22.874 31835 TRACE nova.openstack.common.loopingcall 
self.f(*self.args, **self.kw)
  2013-12-19 12:19:22.874 31835 TRACE nova.openstack.common.loopingcall   File 
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 926, in 
_poll_task
  2013-12-19 12:19:22.874 31835 TRACE nova.openstack.common.loopingcall 
done.send_exception(excep)
  2013-12-19 12:19:22.874 31835 TRACE nova.openstack.common.loopingcall   File 
/usr/lib/python2.6/site-packages/eventlet/event.py, line 208, in 
send_exception
  2013-12-19 12:19:22.874 31835 TRACE nova.openstack.common.loopingcall 
return self.send(None, args)
  2013-12-19 12:19:22.874 31835 TRACE nova.openstack.common.loopingcall   File 
/usr/lib/python2.6/site-packages/eventlet/event.py, line 150, in send
  2013-12-19 12:19:22.874 31835 TRACE nova.openstack.common.loopingcall 
assert self._result is NOT_USED, 'Trying to re-send() an already-triggered 
event.'
  2013-12-19 12:19:22.874 31835 TRACE nova.openstack.common.loopingcall 
AssertionError: Trying to re-send() an already-triggered event.
  2013-12-19 12:19:22.874 31835 TRACE nova.openstack.common.loopingcall

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1262642/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291637] Re: memcache client race

2014-12-03 Thread Joe Gordon
** Changed in: nova
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291637

Title:
  memcache client race

Status in OpenStack Identity (Keystone):
  Expired
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Nova uses thread-unsafe memcache client objects in multiple threads.
  For instance, nova-api's metadata WSGI server uses the same
  nova.api.metadata.handler.MetadataRequestHandler._cache object for
  every request. A memcache client object is thread unsafe because it
  has a single open socket connection to memcached. Thus the multiple
  threads will read from and write to the same socket fd.

  Keystoneclient has the same bug. See https://bugs.launchpad.net
  /python-keystoneclient/+bug/1289074 for a patch to fix the problem.
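
  A minimal sketch of one way to avoid sharing the socket, giving every
  thread its own client via threading.local (assuming the python-memcached
  library; an illustration, not the merged fix):

      import threading

      import memcache  # python-memcached

      _local = threading.local()

      def get_cache_client(servers=('127.0.0.1:11211',)):
          # memcache.Client keeps one socket per server, so a shared
          # instance gets its reads and writes interleaved by concurrent
          # threads; a per-thread instance sidesteps the race.
          if not hasattr(_local, 'client'):
              _local.client = memcache.Client(list(servers))
          return _local.client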

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1291637/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394351] Re: deadlock when delete port

2014-12-03 Thread Joe Gordon
Sounds like this is neutron, not nova.

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394351

Title:
  deadlock when delete port

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  netdemoid=$(neutron net-list | awk '{if($4=="demo-net"){print $2;}}')
  subnetdemoid=$(neutron subnet-list | awk '{if($4=="demo-subnet"){print $2;}}')

  exnetid=$(neutron net-list | awk '{if($4=="ext-net"){print $2;}}')
  for i in `seq 1 10`; do
  #boot vm, and create floating ip
  nova boot --image cirros --flavor m1.tiny --nic net-id=$netdemoid cirrosdemo${i}
  cirrosdemoid[i]=$(nova list | awk '{if($4=="cirrosdemo${i}"){print $2;}}')
  output=$(neutron floatingip-create $exnetid)
  echo $output
  floatipid[i]=$(echo $output | awk '{if($2=="id"){print $4;}}')
  floatip[i]=$(echo $output | awk '{if($2=="floating_ip_address"){print $4;}}')
  done

  # Setup router
  neutron router-gateway-set $routerdemoid $exnetid
  neutron router-interface-add demo-router $subnetdemoid
  #wait for VM to be running
  sleep 30

  for i in `seq 1 10`; do
  cirrosfix=$(nova list | awk '{if($4=="cirrosdemo${i}"){print $12;}}')
  cirrosfixip=${cirrosfix#*=}
  output=$(neutron port-list | grep ${cirrosfixip})
  echo $output
  portid=$(echo $output | awk '{print $2;}')
  neutron floatingip-associate --fixed-ip-address $cirrosfixip ${floatipid[i]} $portid
  neutron floatingip-delete ${floatipid[i]}
  nova delete ${cirrosdemoid[i]}
  done

  
  With several tries, I have one instance in ERROR state:
  2014-11-19 19:41:02.670 8659 DEBUG neutron.context [req-3ff9aed1-e5fb-4388-b26d-e35bb7fc25f7 None] Arguments dropped when creating context: {u'project_name': None, u'tenant': None} __init__ /opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/context.py:83
  2014-11-19 19:41:02.671 8659 DEBUG neutron.plugins.ml2.rpc [req-3ff9aed1-e5fb-4388-b26d-e35bb7fc25f7 None] Device 498e7a54-22dd-4e5b-a8db-d6bffb8edd25 details requested by agent ovs-agent-overcloud-controller0-d5wwhbhhtlmp with host overcloud-controller0-d5wwhbhhtlmp get_device_details /opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:90
  2014-11-19 19:41:02.707 8659 DEBUG neutron.openstack.common.lockutils [req-3ff9aed1-e5fb-4388-b26d-e35bb7fc25f7 None] Got semaphore "db-access" lock /opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/openstack/common/lockutils.py:168
  2014-11-19 19:41:04.061 8658 ERROR oslo.messaging.rpc.dispatcher [req-4303cd41-c87c-44aa-b78a-549fb914ac9c ] Exception during message handling: (OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction') None None
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/db/agents_db.py", line 220, in report_state
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher     self.plugin.create_or_update_agent(context, agent_state)
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/db/agents_db.py", line 180, in create_or_update_agent
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher     return self._create_or_update_agent(context, agent)
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/db/agents_db.py", line 174, in _create_or_update_agent
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher     greenthread.sleep(0)
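
  The log carries MySQL's usual remedy: restart the transaction. A
  hypothetical retry wrapper of that kind, purely for illustration (this is
  not neutron's actual handling; the function name is made up):

    # Illustrative sketch: retry a DB operation when MySQL reports a
    # deadlock (error 1213), as the log above suggests.
    import time

    from sqlalchemy.exc import OperationalError

    def run_with_deadlock_retry(operation, max_attempts=3, delay=0.5):
        """Call operation(); on a MySQL deadlock (1213), retry a few times."""
        for attempt in range(1, max_attempts + 1):
            try:
                return operation()
            except OperationalError as exc:
                if '1213' not in str(exc) or attempt == max_attempts:
                    raise
                time.sleep(delay)  # brief back-off before restarting the txn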
  

[Yahoo-eng-team] [Bug 1398892] [NEW] "-" incompletely translated

2014-12-03 Thread David Lyle
Public bug reported:

The "-" string is translated in some cases and not in many others. It
should be consistent.

** Affects: horizon
 Importance: Low
 Assignee: David Lyle (david-lyle)
 Status: New


** Tags: i18n

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1398892

Title:
  "-" incompletely translated

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The "-" string is translated in some cases and not in many others. It
  should be consistent.
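
  As a hedged illustration of consistent marking (hypothetical snippet using
  Django's i18n machinery, not Horizon's actual source):

    # Hypothetical example: mark the placeholder for translation once and
    # reuse it everywhere, so it is consistently translated.
    from django.utils.translation import ugettext_lazy as _

    EMPTY_CELL = _("-")  # translated wherever it is displayed

    def display_value(value):
        """Return the value, or the translated placeholder when empty."""
        return value if value else EMPTY_CELL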

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1398892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398903] [NEW] copy-from does not work

2014-12-03 Thread Stuart McLaren
Public bug reported:


I get the following error when using copy-from to create an image:


 2014-12-03 16:21:03.387 30059 TRACE glance.api.v1.upload_utils     result = fd.read(*args)
 2014-12-03 16:21:03.387 30059 TRACE glance.api.v1.upload_utils   File "glance/common/utils.py", line 191, in read
 2014-12-03 16:21:03.387 30059 TRACE glance.api.v1.upload_utils     result = self.data.read(i)
 2014-12-03 16:21:03.387 30059 TRACE glance.api.v1.upload_utils AttributeError: 'ResponseIndexable' object has no attribute 'read'

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1398903

Title:
  copy-from does not work

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  
  I get the following error when using copy-from to create an image:

  
   2014-12-03 16:21:03.387 30059 TRACE glance.api.v1.upload_utils     result = fd.read(*args)
   2014-12-03 16:21:03.387 30059 TRACE glance.api.v1.upload_utils   File "glance/common/utils.py", line 191, in read
   2014-12-03 16:21:03.387 30059 TRACE glance.api.v1.upload_utils     result = self.data.read(i)
   2014-12-03 16:21:03.387 30059 TRACE glance.api.v1.upload_utils AttributeError: 'ResponseIndexable' object has no attribute 'read'
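
  The failure reads like a streaming response object that supports iteration
  but has no file-like read(). Purely as an illustrative sketch (not the
  actual glance fix), an adapter giving an iterable a read() interface could
  look like:

    # Illustrative only: wrap an iterable (e.g. a chunked HTTP response)
    # so callers expecting a file-like read() keep working.
    class IterableToFileAdapter(object):
        def __init__(self, iterable):
            self._iterator = iter(iterable)
            self._buffer = b''

        def read(self, size=-1):
            # Pull chunks until the request can be satisfied (or EOF).
            while size < 0 or len(self._buffer) < size:
                try:
                    self._buffer += next(self._iterator)
                except StopIteration:
                    break
            if size < 0:
                data, self._buffer = self._buffer, b''
            else:
                data, self._buffer = self._buffer[:size], self._buffer[size:]
            return data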

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1398903/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398904] [NEW] empty value is "-", does not need to be set

2014-12-03 Thread David Lyle
Public bug reported:

The default empty value for table cells is "-". This does not need to be
specified. See
https://github.com/openstack/horizon/blob/master/horizon/tables/base.py#L136 and
https://github.com/openstack/horizon/blob/master/horizon/tables/base.py#L305

** Affects: horizon
 Importance: Wishlist
 Assignee: David Lyle (david-lyle)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1398904

Title:
  empty value is "-", does not need to be set

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The default empty value for table cells is "-". This does not need to be
  specified. See
  https://github.com/openstack/horizon/blob/master/horizon/tables/base.py#L136 and
  https://github.com/openstack/horizon/blob/master/horizon/tables/base.py#L305
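
  As a hedged illustration (a hypothetical table definition, not code from
  Horizon itself), the redundant and the sufficient forms would be:

    # Hypothetical Horizon table contrasting redundant vs. minimal forms;
    # horizon.tables.Column already defaults empty cells to "-".
    from horizon import tables

    class InstancesTable(tables.DataTable):
        # Redundant: empty_value="-" merely restates the framework default.
        az_verbose = tables.Column("availability_zone", empty_value="-")
        # Sufficient: omit empty_value and "-" is rendered anyway.
        az = tables.Column("availability_zone")

        class Meta:
            name = "instances"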

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1398904/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362863] Re: reply queues fill up with unacked messages

2014-12-03 Thread Mehdi Abaakouk
** Changed in: oslo.messaging
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362863

Title:
  reply queues fill up with unacked messages

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  Confirmed
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  Since upgrading to icehouse we consistently get reply_x queues
  filling up with unacked messages. To fix this I have to restart the
  service. This seems to happen when something is wrong for a short
  period of time and it doesn't clean up after itself.

  So far I've seen the issue with nova-api, nova-compute, nova-network,
  nova-api-metadata, cinder-api but I'm sure there are others.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1362863/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398912] [NEW] create_stubs not documented

2014-12-03 Thread David Lyle
Public bug reported:

The create_stubs decorator is not documented as to expected key, value
content.

** Affects: horizon
 Importance: Low
 Assignee: David Lyle (david-lyle)
 Status: New


** Tags: dashboard-core

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1398912

Title:
  create_stubs not documented

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The create_stubs decorator is not documented as to expected key, value
  content.
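
  A hedged example of the expected usage, mirroring the common pattern in
  Horizon's mox-based tests (hypothetical test; the stubbed call and return
  value are illustrative):

    # Hypothetical usage: the dict maps an object to a tuple of attribute
    # names to be stubbed out with mox for the duration of the test.
    from django.http import HttpRequest
    from mox import IsA

    from openstack_dashboard import api
    from openstack_dashboard.test import helpers as test

    class ImagesViewTests(test.TestCase):
        @test.create_stubs({api.glance: ('image_list_detailed',)})
        def test_index(self):
            # Record the expected call on the stubbed attribute.
            api.glance.image_list_detailed(IsA(HttpRequest), marker=None) \
                .AndReturn(([], False))
            self.mox.ReplayAll()
            # ... exercise the view; mox verifies the call on tearDown ...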

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1398912/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356373] Re: Glance image-list -> shell-init: error retrieving current directory

2014-12-03 Thread Louis Taylor
The root cause in all of the clients seems to be keystoneclient, so
moving it over there.

** Project changed: glance => python-keystoneclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1356373

Title:
  Glance image-list -> shell-init: error retrieving current directory

Status in Python client library for Keystone:
  Confirmed

Bug description:
  Description of problem: While recreating another bug on my setup (AIO,
  Neutron networking), during the instance snapshot process, before the
  snapshot finished uploading, I ran #glance image-list

  [root@orange-vdse tmptriyEJ(keystone_admin)]# glance image-list
  shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
  shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
  shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
  Traceback (most recent call last):
    File "/usr/bin/glance", line 6, in <module>
      from glanceclient.shell import main
    File "/usr/lib/python2.7/site-packages/glanceclient/shell.py", line 30, in <module>
      from keystoneclient.v2_0 import client as ksclient
    File "/usr/lib/python2.7/site-packages/keystoneclient/v2_0/__init__.py", line 2, in <module>
      from keystoneclient.v2_0.client import Client
    File "/usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py", line 20, in <module>
      from keystoneclient import httpclient
    File "/usr/lib/python2.7/site-packages/keystoneclient/httpclient.py", line 27, in <module>
      import keyring
    File "/usr/lib/python2.7/site-packages/keyring/__init__.py", line 12, in <module>
      from .core import (set_keyring, get_keyring, set_password, get_password,
    File "/usr/lib/python2.7/site-packages/keyring/core.py", line 180, in <module>
      init_backend()
    File "/usr/lib/python2.7/site-packages/keyring/core.py", line 59, in init_backend
      set_keyring(load_config() or _get_best_keyring())
    File "/usr/lib/python2.7/site-packages/keyring/core.py", line 105, in load_config
      local_path = os.path.join(os.getcwd(), filename)
  OSError: [Errno 2] No such file or directory

  Version-Release number of selected component (if applicable):
  RHEL7
  python-glanceclient-0.13.1-1.el7ost.noarch
  openstack-glance-2014.1.1-1.el7ost.noarch
  python-glance-2014.1.1-1.el7ost.noarch
  openstack-nova-compute-2014.1.1-4.el7ost.noarch

  
  How reproducible:
  Not sure; first time, looks like a fluke one-off problem to me.

  Steps to Reproduce:
  1. Booted an instance 
  2. Created a snapshot of that running instance
  3. Before snapshot upload completed, ran glance image-list.

  Actual results:
  Glance image-list failed with the above trace error.

  Expected results:
  Glance image-list should work without errors.

  In the attached Glance log, ignore errors before this trace; they relate
  to another bug I was testing involving a low disk space case.
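
  The crash boils down to keyring calling os.getcwd() from a working
  directory that has been deleted (here, a temporary snapshot dir). A
  minimal sketch of the kind of guard that avoids it, illustrative only and
  not the keyring/keystoneclient fix:

    # Illustrative only: tolerate a deleted current working directory when
    # building a config path, falling back to the user's home directory.
    import os

    def safe_config_path(filename='keyringrc.cfg'):
        try:
            base = os.getcwd()
        except OSError:
            # cwd was removed underneath us (e.g. a temp dir cleaned up);
            # fall back somewhere that always exists.
            base = os.path.expanduser('~')
        return os.path.join(base, filename)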

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-keystoneclient/+bug/1356373/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398928] [NEW] inline edit icon size/placement needs fixing

2014-12-03 Thread David Lyle
Public bug reported:

Either the font awesome switch, or another change has made the inline
editing icons different sizes.

Additionally, the pencil shows up over longer text.

CSS fixes are required.

** Affects: horizon
 Importance: Low
 Status: New


** Tags: low-hanging-fruit

** Attachment added: Screenshot from 2014-12-03 11:36:46.png
   
https://bugs.launchpad.net/bugs/1398928/+attachment/4273625/+files/Screenshot%20from%202014-12-03%2011%3A36%3A46.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1398928

Title:
  inline edit icon size/placement needs fixing

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Either the font awesome switch, or another change has made the inline
  editing icons different sizes.

  Additionally, the pencil shows up over longer text.

  CSS fixes are required.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1398928/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398929] [NEW] Text of sub menu is bigger than the Dashboard name

2014-12-03 Thread Lin Hua Cheng
Public bug reported:


In the menu, the font size of the panel name is way bigger than the
dashboard name.

It might have changed after the update to use Font Awesome.

** Affects: horizon
 Importance: Undecided
 Assignee: Lin Hua Cheng (lin-hua-cheng)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Lin Hua Cheng (lin-hua-cheng)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1398929

Title:
  Text of sub menu is bigger than the Dashboard name

Status in OpenStack Dashboard (Horizon):
  New

Bug description:

  In the menu, the font size of the panel name is way bigger than the
  dashboard name.

  It might have changed after the update to use Font Awesome.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1398929/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387320] Re: osprofiler causing crash when update from icehouse

2014-12-03 Thread Alan Pevec
** Tags removed: in-stable-juno juno-backport-potential

** Also affects: glance/juno
   Importance: Undecided
   Status: New

** Changed in: glance/juno
   Status: New => Fix Committed

** Changed in: glance/juno
   Importance: Undecided => Critical

** Changed in: glance/juno
 Assignee: (unassigned) => Louis Taylor (kragniz)

** Changed in: glance/juno
Milestone: None => 2014.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1387320

Title:
  osprofiler causing crash when update from icehouse

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance juno series:
  Fix Committed

Bug description:
  Hi, I have an issue when I try to update glance from icehouse to juno.
  When I start glance-api and glance-registry I get this trace:
  https://gist.github.com/ttwthomas/e30b254cc700268f997d

  
  Thanks to Stuart we resolved this by disabling osprofiler, but if osprofiler
  is not in your pipeline it should not need to be explicitly disabled.

  See https://answers.launchpad.net/glance/+question/256323 for more
  info.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1387320/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398988] [NEW] dhcp-agent / network bindings out of sync, stopping dhcp-agent

2014-12-03 Thread Ian Wienand
Public bug reported:

dhcp-agent on a neutron host started dying with

---
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent Traceback (most recent call last):
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/site-packages/neutron/openstack/common/rpc/amqp.py", line 462, in _process_data
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent     **args)
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/site-packages/neutron/common/rpc.py", line 45, in dispatch
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent     neutron_ctxt, version, method, namespace, **kwargs)
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/site-packages/neutron/openstack/common/rpc/dispatcher.py", line 172, in dispatch
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent     result = getattr(proxyobj, method)(ctxt, **kwargs)
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/site-packages/neutron/db/dhcp_rpc_base.py", line 92, in get_active_networks_info
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent     networks = self._get_active_networks(context, **kwargs)
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/site-packages/neutron/db/dhcp_rpc_base.py", line 42, in _get_active_networks
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent     plugin.auto_schedule_networks(context, host)
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/site-packages/neutron/db/agentschedulers_db.py", line 222, in auto_schedule_networks
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent     self.network_scheduler.auto_schedule_networks(self, context, host)
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/site-packages/neutron/scheduler/dhcp_agent_scheduler.py", line 122, in auto_schedule_networks
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent     context, [net_id], active=True)
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/site-packages/neutron/db/agentschedulers_db.py", line 126, in get_dhcp_agents_hosting_networks
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent     binding.dhcp_agent)]
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/site-packages/neutron/db/agentschedulers_db.py", line 83, in is_eligible_agent
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent     agent['heartbeat_timestamp'])
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent TypeError: 'NoneType' object has no attribute '__getitem__'
2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent
---

Further investigation on the neutron server found that the
networkdhcpagentbindings table seems to have got out of sync.

we can see just one dhcp-agent

---
MariaDB [neutron]> select id from agents where agent_type="DHCP agent";
+--------------------------------------+
| id                                   |
+--------------------------------------+
| 6923675d-5616-4ffe-b2c4-4d130f67973f |
+--------------------------------------+
1 row in set (0.00 sec)
---

but in the network bindings, at least 3 are listed


MariaDB [neutron]> select DISTINCT(dhcp_agent_id) from networkdhcpagentbindings;
+--------------------------------------+
| dhcp_agent_id                        |
+--------------------------------------+
| 6923675d-5616-4ffe-b2c4-4d130f67973f |
| b23f9f97-da04-4f61-bcfb-f8514e43cefd |
| d3e3ac5b-9962-428a-a9f8-6b2a1aba48d8 |
+--------------------------------------+
3 rows in set (0.00 sec)
---

neutron/db/agentschedulers_db.py:get_dhcp_agents_hosting_networks() [1]
doesn't expect this case of a binding with no associated agent, so it
goes off and tries to find the latest heartbeat for the agent, which
leads to the original traceback.
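
A defensive variant of that lookup, sketched here purely for illustration
(simplified signature; not a proposed patch), would skip bindings whose
agent row is gone:

    # Illustrative sketch: skip dangling bindings whose agent row no longer
    # exists instead of dereferencing None (simplified from the real method).
    def get_dhcp_agents_hosting_networks(bindings, active=None):
        agents = []
        for binding in bindings:
            agent = binding.dhcp_agent
            if agent is None:
                # Dangling networkdhcpagentbindings row; skip it rather
                # than raising TypeError on agent['heartbeat_timestamp'].
                continue
            if active is None or agent['admin_state_up']:
                agents.append(agent)
        return agents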

I unfortunately have no idea how this got out of sync; the host has been
upgraded from juno to kilo, and neutron and the dhcp-agents have been
restarted many, many times for various reasons.

I found two other references [2],[3]

[1] 
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/agentschedulers_db.py#n107
[2] 

[Yahoo-eng-team] [Bug 1398992] [NEW] quota-update succeeds for nonexistent tenant_id

2014-12-03 Thread Joe D'Andrea
Public bug reported:

Possibly related to https://bugs.launchpad.net/neutron/+bug/1307506

Issue a CLI request to update the quota for a nonexistent tenant_id (not
in keystone tenant-list).

The update succeeds and neutron quota-list gets a new entry for
tenant_id foo.

$ neutron quota-update --tenant_id foo --port 75
+---------------------+-------+
| Field               | Value |
+---------------------+-------+
| floatingip          | 50    |
| network             | 10    |
| port                | 75    |
| router              | 10    |
| security_group      | 10    |
| security_group_rule | 100   |
| subnet              | 10    |
+---------------------+-------+

Expected behavior: The CLI (at the very least) or neutron (at most)
would prevent this from succeeding.

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

- Possibly related to #1307506:
+ Possibly related to 1307506:
  
  Issue a CLI request to update the quota for a nonexistent tenant_id (not
  in keystone tenant-list).
  
  The update succeeds and neutron quota-list gets a new entry for
  tenant_id foo.
  
  $ neutron quota-update --tenant_id foo --port 75
  +---------------------+-------+
  | Field               | Value |
  +---------------------+-------+
  | floatingip          | 50    |
  | network             | 10    |
  | port                | 75    |
  | router              | 10    |
  | security_group      | 10    |
  | security_group_rule | 100   |
  | subnet              | 10    |
  +---------------------+-------+
  
  Expected behavior: The CLI (at the very least) or neutron (at most)
  would prevent this from succeeding.

** Description changed:

- Possibly related to 1307506:
+ Possibly related to https://bugs.launchpad.net/neutron/+bug/1307506
  
  Issue a CLI request to update the quota for a nonexistent tenant_id (not
  in keystone tenant-list).
  
  The update succeeds and neutron quota-list gets a new entry for
  tenant_id foo.
  
  $ neutron quota-update --tenant_id foo --port 75
  +---------------------+-------+
  | Field               | Value |
  +---------------------+-------+
  | floatingip          | 50    |
  | network             | 10    |
  | port                | 75    |
  | router              | 10    |
  | security_group      | 10    |
  | security_group_rule | 100   |
  | subnet              | 10    |
  +---------------------+-------+
  
  Expected behavior: The CLI (at the very least) or neutron (at most)
  would prevent this from succeeding.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1398992

Title:
  quota-update succeeds for nonexistent tenant_id

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Possibly related to https://bugs.launchpad.net/neutron/+bug/1307506

  Issue a CLI request to update the quota for a nonexistent tenant_id
  (not in keystone tenant-list).

  The update succeeds and neutron quota-list gets a new entry for
  tenant_id foo.

  $ neutron quota-update --tenant_id foo --port 75
  +---------------------+-------+
  | Field               | Value |
  +---------------------+-------+
  | floatingip          | 50    |
  | network             | 10    |
  | port                | 75    |
  | router              | 10    |
  | security_group      | 10    |
  | security_group_rule | 100   |
  | subnet              | 10    |
  +---------------------+-------+

  Expected behavior: The CLI (at the very least) or neutron (at most)
  would prevent this from succeeding.
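
  As an illustration of the client-side guard the reporter expects, a
  hypothetical pre-check using 2014-era keystoneclient (not an actual
  neutron patch; names and endpoints are placeholders):

    # Hypothetical pre-check: refuse to update quotas for a tenant that
    # keystone does not know about. Illustrative only.
    from keystoneclient.v2_0 import client as ks_client

    def validate_tenant(auth_url, admin_token, tenant_id):
        keystone = ks_client.Client(token=admin_token, endpoint=auth_url)
        known = set(t.id for t in keystone.tenants.list())
        if tenant_id not in known:
            raise ValueError("tenant %s not found in keystone" % tenant_id)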

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1398992/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398996] [NEW] linuxbridge mechanism driver ignores enable_security_group

2014-12-03 Thread Darragh O'Reilly
Public bug reported:

The linuxbridge mechanism driver is ignoring enable_security_group in
ml2_conf.ini.

i.e. neutron security groups are still used when
/etc/neutron/plugins/ml2/ml2_conf.ini has:

[securitygroup]
enable_security_group = False

** Affects: neutron
 Importance: Undecided
 Assignee: Darragh O'Reilly (darragh-oreilly)
 Status: In Progress


** Tags: lb

** Tags added: lb

** Changed in: neutron
 Assignee: (unassigned) => Darragh O'Reilly (darragh-oreilly)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1398996

Title:
   linuxbridge mechanism driver ignores enable_security_group

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The linuxbridge mechanism driver is ignoring enable_security_group in
  ml2_conf.ini.

  i.e. neutron security groups are still used when
  /etc/neutron/plugins/ml2/ml2_conf.ini has:

  [securitygroup]
  enable_security_group = False
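
  For reference, a sketch of the guard the agent side would be expected to
  apply (hypothetical code using oslo.config; only the option name comes
  from the ml2_conf.ini snippet above, the rest is illustrative):

    # Hypothetical sketch of honoring enable_security_group: fall back to
    # the no-op firewall driver when the operator disabled security groups.
    from oslo.config import cfg  # 2014-era namespace package

    security_group_opts = [
        cfg.BoolOpt('enable_security_group', default=True,
                    help='Whether neutron security groups are enabled'),
        cfg.StrOpt('firewall_driver',
                   help='Driver used for firewalling'),
    ]
    cfg.CONF.register_opts(security_group_opts, 'SECURITYGROUP')

    def select_firewall_driver():
        if not cfg.CONF.SECURITYGROUP.enable_security_group:
            # Security groups disabled: do not program iptables at all.
            return 'neutron.agent.firewall.NoopFirewallDriver'
        return cfg.CONF.SECURITYGROUP.firewall_driver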

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1398996/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398999] [NEW] Block migrate with attached volumes copies volumes to themselves

2014-12-03 Thread Chris St. Pierre
Public bug reported:

When an instance with attached Cinder volumes is block migrated, the
Cinder volumes are block migrated along with it. If they exist on shared
storage, then they end up being copied, over the network, from
themselves to themselves. At a minimum, this is horribly slow and de-
sparses a sparse volume; at worst, this could cause massive data
corruption.

More details at http://lists.openstack.org/pipermail/openstack-
dev/2014-June/038152.html

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398999

Title:
  Block migrate with attached volumes copies volumes to themselves

Status in OpenStack Compute (Nova):
  New

Bug description:
  When an instance with attached Cinder volumes is block migrated, the
  Cinder volumes are block migrated along with it. If they exist on
  shared storage, then they end up being copied, over the network, from
  themselves to themselves. At a minimum, this is horribly slow and de-
  sparses a sparse volume; at worst, this could cause massive data
  corruption.

  More details at http://lists.openstack.org/pipermail/openstack-
  dev/2014-June/038152.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1398999/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1386147] Re: live-migration failed because of "Filter ComputeFilter returned 0 hosts"; the instance's status is still migrating.

2014-12-03 Thread Rong Han ZTE
** Changed in: nova/icehouse
   Status: Invalid => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1386147

Title:
  live-migration failed because of "Filter ComputeFilter returned 0
  hosts"; the instance's status is still migrating.

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) icehouse series:
  Incomplete

Bug description:
  I have three compute nodes, and one instance on host opencos179-24.

  A live-migration failed; the content of nova-scheduler.log shows
  that "Filter ComputeFilter returned 0 hosts".

  But the instance's status is still migrating.

  I hope the instance could roll back.

  log is as follows:
  http://paste.openstack.org/show/125266/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1386147/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1237807] Re: Network admin_state_up is not working

2014-12-03 Thread Armando Migliaccio
@Yair: from comment #14 I sense that you are making wrong assumptions as
to how a real-world data center or cloud environment actually works.
A resource like 'network' has two status attributes: admin_state_up and
status; these are not to be confused. As of now, flipping
admin_state_up from True to False has no effect in most plugins; it's
just a toggle operation on the DB, and some plugins do not even support
it (i.e. they barf at the update operation when/if tried, like the NSX
plugin).

In my opinion this is the semantic:

admin_state_up: it's the management state of the resource. An admin can
choose to disable/enable this resource for management purposes.
Management plane downtime, must not cause a data plane loss. If we want
to mirror this to the compute world, it's like when I put a hypervisor
in maintenance mode: perhaps I need to evacuate the host and I cannot
allow for more vm's to get spawned, or I need to do some sort of
reactive maintenance and I don't want the management framework to fiddle
with it while I am on it. If you think that management downtime == data
plane downtime, you clearly have a wrong view of the real world!!

status: this is the status of the fabric, the data plane status if you
will. Following the analogy with compute, this means that your host is
either hosed, not feeling well, and something is broken; it pleas for
troubleshooting!

Now, as for a Neutron network, admin_state_up may not be entirely
useful, but deprecating it or marking it for removal is wrong. And even
worse is forcing admin_state_up=False to cause a data plane loss.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1237807

Title:
  Network admin_state_up is not working

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  By default a network is created with admin_state_up True.

  The network will not be deactivated when a user issues an update to
  admin_state_up False; traffic from VMs still works and the network
  STATUS is still ACTIVE.

  
  [root@puma04 ~(keystone_admin_tenant1)]$neutron net-list
  +--------------------------------------+-------------+----------------------------------------------------+
  | id                                   | name        | subnets                                            |
  +--------------------------------------+-------------+----------------------------------------------------+
  | 62e4a6ac-970e-49fe-81c1-1115dee7f800 | ext_net     | 6dd43d99-e226-4e85-9471-89091c3ad031               |
  | 78c1f62e-22c0-42e0-9377-e9d247862da4 | net-vlan201 | 534d328d-441b-4fe6-b4ba-3c397391bea2 89.66.66.0/24 |
  +--------------------------------------+-------------+----------------------------------------------------+

  [root@puma04 ~(keystone_admin_tenant1)]$neutron net-update 78c1f62e-22c0-42e0-9377-e9d247862da4 --admin_state_up False
  Updated network: 78c1f62e-22c0-42e0-9377-e9d247862da4
  [root@puma04 ~(keystone_admin_tenant1)]$neutron net-show 78c1f62e-22c0-42e0-9377-e9d247862da4
  +-----------------+--------------------------------------+
  | Field           | Value                                |
  +-----------------+--------------------------------------+
  | admin_state_up  | False                                |
  | id              | 78c1f62e-22c0-42e0-9377-e9d247862da4 |
  | name            | net-vlan201                          |
  | router:external | False                                |
  | shared          | False                                |
  | status          | ACTIVE                               |  <-- Still ACTIVE
  | subnets         | 534d328d-441b-4fe6-b4ba-3c397391bea2 |
  | tenant_id       | c7431cff351542c28427028d9befe1fb     |
  +-----------------+--------------------------------------+
  [root@puma04 ~(keystone_admin_tenant1)]$neutron net-update 78c1f62e-22c0-42e0-9377-e9d247862da4 --admin_state_up True
  Updated network: 78c1f62e-22c0-42e0-9377-e9d247862da4
  [root@puma04 ~(keystone_admin_tenant1)]$neutron net-show 78c1f62e-22c0-42e0-9377-e9d247862da4
  +-----------------+--------------------------------------+
  | Field           | Value                                |
  +-----------------+--------------------------------------+
  | admin_state_up  | True                                 |
  | id              | 78c1f62e-22c0-42e0-9377-e9d247862da4 |
  | name            | net-vlan201                          |
  | router:external | False                                |
  | shared          | False                                |
  | status          | ACTIVE                               |
  | subnets         | 534d328d-441b-4fe6-b4ba-3c397391bea2 |
  | tenant_id       | c7431cff351542c28427028d9befe1fb     |
  +-----------------+--------------------------------------+

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1399065] [NEW] Quota updated through nova api not reflected in Horizon

2014-12-03 Thread Sumanth
Public bug reported:

I am trying to update the quota of the user using nova.quota.update(
... ); when I do this the quota is reflected on the command line, but
the same updated values are not reflected in the Horizon dashboard.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399065

Title:
  Quota updated through nova api not reflected in Horizon

Status in OpenStack Compute (Nova):
  New

Bug description:
  I am trying to update the quota of the user using
  nova.quota.update( ... ); when I do this the quota is reflected on
  the command line, but the same updated values are not reflected in
  the Horizon dashboard.
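
  For reference, a hedged sketch of such an update via 2014-era
  python-novaclient (credentials, endpoint, and values are placeholders):

    # Hypothetical reproduction with python-novaclient (2014-era API);
    # credentials and the tenant id are placeholders.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'PASSWORD', 'admin',
                         'http://keystone:5000/v2.0')

    # Update the tenant's quota, then read it back.
    nova.quotas.update('TENANT_ID', instances=20, cores=40)
    print(nova.quotas.get('TENANT_ID'))  # CLI/API show the new values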

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp