[Yahoo-eng-team] [Bug 1494330] Re: environment markers with comments error in pbr

2015-09-10 Thread Robert Collins
*** This bug is a duplicate of bug 1487835 ***
https://bugs.launchpad.net/bugs/1487835

I've reproduced with pbr 1.7.0

keystone$ pip list
docopt (0.6.2)
extras (0.0.3)
formasaurus (0.2)
linecache2 (1.0.0)
pbr (1.7.0)
pip (7.1.2)
python-mimeparse (0.1.4)
setuptools (18.3.1)
six (1.9.0)
testtools (1.8.0)
tldextract (1.6)
tqdm (1.0)
traceback2 (1.4.0)
unittest2 (1.1.0)
wheel (0.24.0)
zope.component (4.2.2)
zope.event (4.0.3)
zope.i18nmessageid (4.0.3)
zope.interface (4.1.2)
zope.location (4.0.3)
zope.proxy (4.1.6)
zope.schema (4.4.2)
zope.security (3.8.3)
zope.untrustedpython (4.0.0)


python setup.py egg_info
error in setup command: Invalid environment marker: (python_version=='2.7' # MPL)

git pull https://review.openstack.org/openstack/keystone refs/changes/00/222000/1
in a keystone tree to set up the environment


** This bug has been marked a duplicate of bug 1487835
   Misparse of some comments in requirements

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1494330

Title:
  environment markers with comments error in pbr

Status in Keystone:
  Confirmed

Bug description:
  https://review.openstack.org/#/c/222000/ in keystone is a new
  requirements update that came after
  https://review.openstack.org/203336 requirements update.

  The keystone change is failing; when I run it locally the output
  includes:

Running setup.py install for keystone   


  Complete output from command /opt/stack/keystone/.tox/py27/bin/python2.7 
-c "import setuptools, 
tokenize;__file__='/tmp/pip-UCUFAy-build/setup.py';exec(compile(getattr(tokenize,
 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" 
install --record /tmp/pip-ZKQdft-record/install-record.txt 
--single-version-externally-managed --compile --install-headers 
/opt/stack/keystone/.tox/py27/include/site/python2.7/keystone:   
  error in setup command: Invalid environment marker: 
(python_version=='2.7' # MPL)   

  
  So it looks like something isn't handling comments in setup.cfg lines, or 
comments can't be put in setup.cfg lines.

  A couple of options:

  1) Remove the comment from the global requirements file.
  2) Have the requirements update tool strip comments when updating setup.cfg.
  3) Maybe it's pbr that needs to handle it? (A comment-stripping sketch
  for either approach follows below.)
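
  A minimal sketch of that comment-stripping (not pbr's or the update
  tool's actual implementation): drop a trailing "#" comment from a
  dependency line before the environment marker is parsed.

      def strip_comment(line):
          # Naive: assumes '#' never occurs inside the marker itself.
          return line.split('#', 1)[0].strip()

      print(strip_comment("bar:python_version=='2.7'  # MPL"))
      # -> bar:python_version=='2.7'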

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1494330/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357578] Re: Unit test: nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm timing out in gate

2014-09-18 Thread Robert Collins
** No longer affects: testrepository

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357578

Title:
  Unit test:
  
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm
  timing out in gate

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  http://logs.openstack.org/62/114062/3/gate/gate-nova-
  python27/2536ea4/console.html

   FAIL:
  
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm

  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.api.client] Doing GET on 
/v2/openstack//flavors/detail
  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] 
sent launcher_process pid: 10564 signal: 15
  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] 
waiting on process 10566 to exit
  2014-08-15 13:46:09.155 | INFO [nova.wsgi] Stopping WSGI server.
  2014-08-15 13:46:09.155 | }}}
  2014-08-15 13:46:09.156 | 
  2014-08-15 13:46:09.156 | Traceback (most recent call last):
  2014-08-15 13:46:09.156 |   File 
"nova/tests/integrated/test_multiprocess_api.py", line 206, in 
test_terminate_sigterm
  2014-08-15 13:46:09.156 | self._terminate_with_signal(signal.SIGTERM)
  2014-08-15 13:46:09.156 |   File 
"nova/tests/integrated/test_multiprocess_api.py", line 194, in 
_terminate_with_signal
  2014-08-15 13:46:09.156 | self.wait_on_process_until_end(pid)
  2014-08-15 13:46:09.156 |   File 
"nova/tests/integrated/test_multiprocess_api.py", line 146, in 
wait_on_process_until_end
  2014-08-15 13:46:09.157 | time.sleep(0.1)
  2014-08-15 13:46:09.157 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/greenthread.py",
 line 31, in sleep
  2014-08-15 13:46:09.157 | hub.switch()
  2014-08-15 13:46:09.157 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 287, in switch
  2014-08-15 13:46:09.157 | return self.greenlet.switch()
  2014-08-15 13:46:09.157 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 339, in run
  2014-08-15 13:46:09.158 | self.wait(sleep_time)
  2014-08-15 13:46:09.158 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/poll.py",
 line 82, in wait
  2014-08-15 13:46:09.158 | sleep(seconds)
  2014-08-15 13:46:09.158 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py",
 line 52, in signal_handler
  2014-08-15 13:46:09.158 | raise TimeoutException()
  2014-08-15 13:46:09.158 | TimeoutException
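
  For reference, a minimal sketch of the pattern this test relies on
  (illustrative names, not the real nova helper): poll for a pid to go
  away while a fixtures.Timeout alarm aborts the wait if it hangs.

      import os
      import subprocess
      import time

      import fixtures
      import testtools

      class WaitExample(testtools.TestCase):
          def wait_on_process_until_end(self, pid):
              # Poll until signal 0 fails, i.e. the pid no longer exists.
              while True:
                  try:
                      os.kill(pid, 0)
                  except OSError:
                      return
                  time.sleep(0.1)

          def test_wait(self):
              # gentle=True raises TimeoutException (as above) via SIGALRM.
              self.useFixture(fixtures.Timeout(60, gentle=True))
              proc = subprocess.Popen(['true'])
              proc.wait()  # reaped, so os.kill(pid, 0) now raises
              self.wait_on_process_until_end(proc.pid)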

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357578/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1114634] Re: baremetal deploy does file injection on local disk

2013-05-07 Thread Robert Collins
I'm going to close this with prejudice: having thought about it, this
would lead to unencrypted - or sniffable, which is the same thing -
disclosure of root passwords and keys.

** No longer affects: tripleo

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1114634

Title:
  baremetal deploy does file injection on local disk

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Currently, baremetal deploys do the following:
   - download the image to the nova-compute host per-bm-node
   - convert to raw
   - mount
   - fiddle with contents
   - umount
   - iscsi mount the target
   - dd
   - iscsi umount

  If we instead did:
   - download the image to the nova-compute host per-glance-uuid
   - convert to raw
   - iscsi mount the target
   - dd
   - mount
   - fiddle with contents
   - umount
   - iscsi umount

  Then we wouldn't need a local image per target machine (we can
  reproduce the injection as needed from the source image). This would
  free up many GB or even TB on large deployments, and is compatible
  with the long term desire to make disk injection either non-existent,
  or at least optional.
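
  A sketch of the proposed ordering (helper names are illustrative
  stubs, not nova's actual API; the point is one cached raw image per
  glance UUID, with injection done on the target after the dd):

      import contextlib

      def cache_raw_image(image_uuid):      # shared per glance UUID
          return '/var/lib/nova/images/%s.raw' % image_uuid

      def iscsi_attach(node):               # iscsi mount the target
          return '/dev/disk/by-path/%s-iscsi' % node

      def iscsi_detach(dev):                # iscsi umount
          pass

      def dd(src, dst):                     # copy the pristine image
          pass

      @contextlib.contextmanager
      def mounted(dev):                     # mount / umount
          yield '/mnt/deploy'

      def inject_files(root, node):         # fiddle with contents
          pass

      def deploy(node, image_uuid):
          raw = cache_raw_image(image_uuid)
          dev = iscsi_attach(node)
          try:
              dd(raw, dev)
              with mounted(dev) as root:
                  inject_files(root, node)
          finally:
              iscsi_detach(dev)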

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1114634/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1178568] Re: possible race condition during bare metal deploys

2013-05-10 Thread Robert Collins
OK, this is a non-issue; the real problem was the deploy-helper's opaque error.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1178568

Title:
  possible race condition during bare metal deploys

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  There were no obvious failures during the deploy. This is what was
  logged:

  sudo grep 5f305000-ddf2-44b6-be74-059c88cf2109 /var/log/upstart/*
  /var/log/upstart/nova-api.log:2013-05-10 07:52:11,743.743 14328 INFO 
nova.osapi_compute.wsgi.server [req-ced5c8c5-f5b7-4158-9690-d9f59c590932 
87b44714e906420a9e6a07f6835b5b61 c30643483cd849a6b61e98c6ccc03a66] 10.10.16.134 
"GET 
/v2/c30643483cd849a6b61e98c6ccc03a66/servers/5f305000-ddf2-44b6-be74-059c88cf2109
 HTTP/1.1" status: 200 len: 1986 time: 0.0977318
  /var/log/upstart/nova-baremetal-deploy-helper.log:2013-05-10 07:51:31,210.210 
14224 INFO nova.virt.baremetal.deploy_helper [-] request is queued: node 69, 
params {'swap_mb': 1, 'iqn': 'iqn-5f305000-ddf2-44b6-be74-059c88cf2109', 
'image_path': u'/var/lib/nova/instances/instance-0096/disk', 'address': 
'10.10.16.173', 'pxe_config_path': 
u'/tftpboot/5f305000-ddf2-44b6-be74-059c88cf2109/config', 'port': '3260', 
'lun': '1', 'root_mb': 10240}
  /var/log/upstart/nova-baremetal-deploy-helper.log:2013-05-10 07:51:31,210.210 
14224 INFO nova.virt.baremetal.deploy_helper [-] start deployment for node 69, 
params {'swap_mb': 1, 'iqn': 'iqn-5f305000-ddf2-44b6-be74-059c88cf2109', 
'image_path': u'/var/lib/nova/instances/instance-0096/disk', 'address': 
'10.10.16.173', 'pxe_config_path': 
u'/tftpboot/5f305000-ddf2-44b6-be74-059c88cf2109/config', 'port': '3260', 
'lun': '1', 'root_mb': 10240}
  /var/log/upstart/nova-compute.log:2013-05-10 07:44:03,597.597 23016 AUDIT 
nova.compute.manager [req-e10b52ef-c85b-446b-9fa8-fa7394c081dc 
87b44714e906420a9e6a07f6835b5b61 c30643483cd849a6b61e98c6ccc03a66] [instance: 
5f305000-ddf2-44b6-be74-059c88cf2109] Starting instance...
  /var/log/upstart/nova-compute.log:2013-05-10 07:44:07,319.319 23016 AUDIT 
nova.compute.claims [req-e10b52ef-c85b-446b-9fa8-fa7394c081dc 
87b44714e906420a9e6a07f6835b5b61 c30643483cd849a6b61e98c6ccc03a66] [instance: 
5f305000-ddf2-44b6-be74-059c88cf2109] Attempting claim: memory 512 MB, disk 10 
GB, VCPUs 1
  /var/log/upstart/nova-compute.log:2013-05-10 07:44:07,319.319 23016 AUDIT 
nova.compute.claims [req-e10b52ef-c85b-446b-9fa8-fa7394c081dc 
87b44714e906420a9e6a07f6835b5b61 c30643483cd849a6b61e98c6ccc03a66] [instance: 
5f305000-ddf2-44b6-be74-059c88cf2109] Total Memory: 98304 MB, used: 0 MB
  /var/log/upstart/nova-compute.log:2013-05-10 07:44:07,319.319 23016 AUDIT 
nova.compute.claims [req-e10b52ef-c85b-446b-9fa8-fa7394c081dc 
87b44714e906420a9e6a07f6835b5b61 c30643483cd849a6b61e98c6ccc03a66] [instance: 
5f305000-ddf2-44b6-be74-059c88cf2109] Memory limit: 98304 MB, free: 98304 MB
  /var/log/upstart/nova-compute.log:2013-05-10 07:44:07,320.320 23016 AUDIT 
nova.compute.claims [req-e10b52ef-c85b-446b-9fa8-fa7394c081dc 
87b44714e906420a9e6a07f6835b5b61 c30643483cd849a6b61e98c6ccc03a66] [instance: 
5f305000-ddf2-44b6-be74-059c88cf2109] Total Disk: 2048 GB, used: 0 GB
  /var/log/upstart/nova-compute.log:2013-05-10 07:44:07,320.320 23016 AUDIT 
nova.compute.claims [req-e10b52ef-c85b-446b-9fa8-fa7394c081dc 
87b44714e906420a9e6a07f6835b5b61 c30643483cd849a6b61e98c6ccc03a66] [instance: 
5f305000-ddf2-44b6-be74-059c88cf2109] Disk limit not specified, defaulting to 
unlimited
  /var/log/upstart/nova-compute.log:2013-05-10 07:44:07,321.321 23016 AUDIT 
nova.compute.claims [req-e10b52ef-c85b-446b-9fa8-fa7394c081dc 
87b44714e906420a9e6a07f6835b5b61 c30643483cd849a6b61e98c6ccc03a66] [instance: 
5f305000-ddf2-44b6-be74-059c88cf2109] Total CPU: 24 VCPUs, used: 0 VCPUs
  /var/log/upstart/nova-compute.log:2013-05-10 07:44:07,321.321 23016 AUDIT 
nova.compute.claims [req-e10b52ef-c85b-446b-9fa8-fa7394c081dc 
87b44714e906420a9e6a07f6835b5b61 c30643483cd849a6b61e98c6ccc03a66] [instance: 
5f305000-ddf2-44b6-be74-059c88cf2109] CPU limit not specified, defaulting to 
unlimited
  /var/log/upstart/nova-compute.log:2013-05-10 07:44:07,322.322 23016 AUDIT 
nova.compute.claims [req-e10b52ef-c85b-446b-9fa8-fa7394c081dc 
87b44714e906420a9e6a07f6835b5b61 c30643483cd849a6b61e98c6ccc03a66] [instance: 
5f305000-ddf2-44b6-be74-059c88cf2109] Claim successful
  /var/log/upstart/nova-compute.log:2013-05-10 07:47:57,976.976 23016 INFO 
nova.compute.manager [-] [instance: 5f305000-ddf2-44b6-be74-059c88cf2109] 
During sync_power_state the instance has a pending task. Skip.
  /var/log/upstart/nova-compute.log:2013-05-10 07:51:31,864.864 23016 ERROR 
nova.virt.baremetal.driver [req-e10b52ef-c85b-446b-9fa8-fa7394c081dc 
87b44714e906420a9e6a07f6835b5b61 c30643483cd849a6b61e98c6ccc03a66] Error 
deploying instance 5f305000-ddf2-44b6-be74-059c88cf2109 on baremetal nod

[Yahoo-eng-team] [Bug 1178118] Re: swap partition location prevents cloud-init growing the root partition on baremetal nodes

2013-05-13 Thread Robert Collins
I'm going to mark this invalid - we should just define a much larger flavor.
I.e. user error.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1178118

Title:
  swap partition location prevents cloud-init growing the root partition
  on baremetal nodes

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  We need the root partition to be last on the disk (and perhaps no swap
  partition by default, but thats a separate discussion).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1178118/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1178378] Re: confused baremetal instance thinks it's off, is clearly operational

2013-05-20 Thread Robert Collins
** Changed in: tripleo
   Status: Triaged => Fix Committed

** Changed in: tripleo
 Assignee: (unassigned) => Devananda van der Veen (devananda)

** Changed in: tripleo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1178378

Title:
  confused baremetal instance thinks it's off, is clearly operational

Status in OpenStack Compute (Nova):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  | 7b12b6aa-ea62-4165-82b5-68c1ebdfbb45 | foo | SHUTOFF | None | Shutdown | ctlplane=192.0.2.34 |
  but
  64 bytes from 192.0.2.34: icmp_req=1 ttl=64 time=0.491 ms
  64 bytes from 192.0.2.34: icmp_req=2 ttl=64 time=0.390 ms

  and I'm also connected to the console and it's running as it was ~ 8 hours 
ago when I went to bed.
   uptime
   14:12:21 up  9:50,  1 user,  load average: 0.00, 0.01, 0.05

  I've marked this critical because once the confusion happens, the next
  time IPMI polling works, this is logged:
  [instance: bfb6957b-d23d-4716-bc9d-96d2878b738d] Instance is not
  stopped. Calling the stop API.

  and nova promptly powers the machine off via IPMI, destroying any
  state on the machine.

  It can be powered back on via 'nova start $UUID'.

  mysql> select * from instances where uuid='7b12b6aa-ea62-4165-82b5-68c1ebdfbb45';
  created_at:           2013-05-09 08:48:15
  updated_at:           2013-05-09 09:05:26
  deleted_at:           NULL
  id:                   57
  internal_id:          NULL
  user_id:              baa113f6f7994ddd9c7d86945768616e
  project_id:           0d4df5d4fee24f18b8b5f425eb81e0c4
  image_ref:            0b6e64a9-4867-4742-a964-f52951c93123
  kernel_id:            615727b1-07d2-4071-b5f4-6ee432972df9
  ramdisk_id:           9e5d7470-f3ed-4604-8fe7-894822d86016
  launch_index:         0
  key_name:             NULL
  key_data:             NULL
  power_state:          4
  vm_state:             stopped
  memory_mb:            512
  vcpus:                1
  hostname:             foo
  host:                 ubuntu
  user_data:            NULL
  reservation_id:       r-kunwlpa2
  scheduled_at:         2013-05-09 08:48:15
  launched_at:          2013-05-09 08:54:59
  terminated_at:        NULL
  display_name:         foo
  display_description:  foo
  availability_zone:    NULL
  locked:               0
  os_type:              NULL
  launched_on:          ubuntu
  instance_type_id:     6
  vm_mode:              NULL
  uuid:                 7b12b6aa-ea62-4165-82b5-68c1ebdfbb45
  architecture:         NULL
  root_device_name:     NULL
  access_ip_v4:         NULL
  access_ip_v6:         NULL
  config_drive:
  task_state:           NULL
  default_ephemeral_device: NULL
  [remaining columns truncated in the original message]

[Yahoo-eng-team] [Bug 1182629] Re: security group rule listing doesn't show details

2013-05-22 Thread Robert Collins
Uhm, but it's not wildcarded. See the iptables dump.

** Changed in: quantum
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to quantum.
https://bugs.launchpad.net/bugs/1182629

Title:
  security group rule listing doesn't show details

Status in OpenStack Quantum (virtual network service):
  New

Bug description:
  I have a new quantum environment, with default security groups - and they 
have blank protocol, ip prefix etc:
  quantum security-group-rule-list
  
  +--------------------------------------+----------------+-----------+----------+------------------+--------------+
  | id                                   | security_group | direction | protocol | remote_ip_prefix | remote_group |
  +--------------------------------------+----------------+-----------+----------+------------------+--------------+
  | 028aec88-15db-4aef-aa6d-0882468b393a | default        | egress    |          |                  |              |
  | 316c8156-a804-4181-8a06-0d470eaf2612 | default        | ingress   |          |                  | default      |
  | 33049251-7a67-4efd-88bd-06bf05d05896 | default        | ingress   |          |                  | default      |
  | 55250bab-777f-4519-a330-760fdaa2b9b9 | default        | egress    |          |                  |              |
  | 586ffa9b-fe17-4a16-8e9b-61cdc2097a01 | default        | egress    |          |                  |              |
  | 58d618c0-19a4-4b20-ba64-f8a393db8def | default        | ingress   |          |                  | default      |
  | 5b11d13c-7e5a-424c-8364-b199dc07ef3b | default        | egress    |          |                  |              |
  | b3ad9ac9-56b2-4786-acca-359ff292d5cd | default        | ingress   |          |                  | default      |
  +--------------------------------------+----------------+-----------+----------+------------------+--------------+

  But when I check with iptables, one can see they are filtering ports
  (e.g. bootps/bootpc):

  :quantum-filter-top - [0:0]
  :quantum-openvswi-FORWARD - [0:0]
  :quantum-openvswi-INPUT - [0:0]
  :quantum-openvswi-OUTPUT - [0:0]
  :quantum-openvswi-iaa210549-d - [0:0]
  :quantum-openvswi-local - [0:0]
  :quantum-openvswi-oaa210549-d - [0:0]
  :quantum-openvswi-sg-chain - [0:0]
  :quantum-openvswi-sg-fallback - [0:0]
  -A INPUT -j quantum-openvswi-INPUT
  -A FORWARD -j quantum-filter-top
  -A FORWARD -j quantum-openvswi-FORWARD
  -A OUTPUT -j quantum-filter-top
  -A OUTPUT -j quantum-openvswi-OUTPUT
  -A quantum-filter-top -j quantum-openvswi-local
  -A quantum-openvswi-FORWARD -m physdev --physdev-out tapaa210549-df --physdev-is-bridged -j quantum-openvswi-sg-chain
  -A quantum-openvswi-FORWARD -m physdev --physdev-in tapaa210549-df --physdev-is-bridged -j quantum-openvswi-sg-chain
  -A quantum-openvswi-INPUT -m physdev --physdev-in tapaa210549-df --physdev-is-bridged -j quantum-openvswi-oaa210549-d
  -A quantum-openvswi-iaa210549-d -m state --state INVALID -j DROP
  -A quantum-openvswi-iaa210549-d -m state --state RELATED,ESTABLISHED -j RETURN
  -A quantum-openvswi-iaa210549-d -s 192.0.2.32/32 -p udp -m udp --sport 67 --dport 68 -j RETURN
  -A quantum-openvswi-iaa210549-d -j quantum-openvswi-sg-fallback
  -A quantum-openvswi-oaa210549-d -m mac ! --mac-source FA:16:3E:7F:4F:76 -j DROP
  -A quantum-openvswi-oaa210549-d -p udp -m udp --sport 68 --dport 67 -j RETURN
  -A quantum-openvswi-oaa210549-d ! -s 192.0.2.33/32 -j DROP
  -A quantum-openvswi-oaa210549-d -p udp -m udp --sport 67 --dport 68 -j DROP
  -A quantum-openvswi-oaa210549-d -m state --state INVALID -j DROP
  -A quantum-openvswi-oaa210549-d -m state --state RELATED,ESTABLISHED -j RETURN
  -A quantum-openvswi-oaa210549-d -j RETURN
  -A quantum-openvswi-oaa210549-d -j quantum-openvswi-sg-fallback
  -A quantum-openvswi-sg-chain -m physdev --physdev-out tapaa210549-df --physdev-is-bridged -j quantum-openvswi-iaa210549-d
  -A quantum-openvswi-sg-chain -m physdev --physdev-in tapaa210549-df --physdev-is-bridged -j quantum-openvswi-oaa210549-d
  -A quantum-openvswi-sg-chain -j ACCEPT
  -A quantum-openvswi-sg-fallback -j DROP
  COMMIT

To manage notifications about this bug go to:
https://bugs.launchpad.net/quantum/+bug/1182629/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1174952] Re: baremetal nodes are garbage collected incorrectly

2013-05-24 Thread Robert Collins
** Changed in: tripleo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1174952

Title:
  baremetal nodes are garbage collected incorrectly

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  Baremetal nodes may be deleted from nova.compute_nodes by
  ComputeManager.update_available_resources() if an instance has been
  allocated to that node, and even while the deployment is still in
  process.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1174952/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1178092] Re: second boot during baremetal deploy does not configure netboot: will hang unless the machine attempts PXE automatically

2013-07-06 Thread Robert Collins
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1178092

Title:
  second boot during baremetal deploy does not configure netboot: will
  hang unless the machine attempts PXE automatically

Status in OpenStack Compute (Nova):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  The second boot during a bare metal deploy reboots from within the
  deploy ramdisk rather than via an IPMI 'please netboot' command, which
  means BIOSes that attempt local disk before netboot will fail to
  deploy.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1178092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1199443] Re: quantum-server won't start

2013-07-10 Thread Robert Collins
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: tripleo
 Assignee: (unassigned) => Derek Higgins (derekh)

** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1199443

Title:
  quantum-server won't start

Status in OpenStack Neutron (virtual network service):
  New
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  Error below while trying to start quantum-server

  I think it's related to the quantum -> neutron rename (at least it
  started happening over the weekend)

   Traceback (most recent call last):
 File "/opt/stack/venvs/quantum/bin/quantum-server", line 8, in 
   load_entry_point('neutron==2013.2.a3.ga3d0a44', 'console_scripts', 
'quantum-server')()
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/neutron/server/__init__.py",
 line 38, in main
   neutron_service = service.serve_wsgi(service.NeutronApiService)
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/neutron/service.py",
 line 98, in serve_wsgi
   service.start()
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/neutron/service.py",
 line 64, in start
   self.wsgi_app = _run_wsgi(self.app_name)  
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/neutron/service.py",
 line 107, in _run_wsgi
   app = config.load_paste_app(app_name) 
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/neutron/common/config.py",
 line 144, in load_paste_app
   app = deploy.loadapp("config:%s" % config_path, name=app_name)
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py",
 line 247, in loadapp
   return loadobj(APP, uri, name=name, **kw) 
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py",
 line 272, in loadobj
   return context.create() 
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py",
 line 710, in create
   return self.object_type.invoke(self)
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py",
 line 144, in invoke
   **context.local_conf)
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/paste/deploy/util.py",
 line 56, in fix_call
   val = callable(*args, **kw)
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/paste/urlmap.py", 
line 25, in urlmap_factory
   app = loader.get_app(app_name, global_conf=global_conf)
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py",
 line 350, in get_app
   name=name, global_conf=global_conf).create()
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py",
 line 710, in create
   return self.object_type.invoke(self)
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py",
 line 144, in invoke
   **context.local_conf)
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/paste/deploy/util.py",
 line 56, in fix_call
   val = callable(*args, **kw)
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/neutron/auth.py", 
line 58, in pipeline_factory
   filters = [loader.get_filter(n) for n in pipeline[:-1]]
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py",
 line 354, in get_filter
   name=name, global_conf=global_conf).create()
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py",
 line 366, in filter_context
   FILTER, name=name, global_conf=global_conf)
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py",
 line 458, in get_context
   section)
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py",
 line 517, in _context_from_explicit
   value = import_string(found_expr)
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py",
 line 22, in import_string
   return pkg_resources.EntryPoint.parse("x=" + s).load(False)
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/pkg_resources.py", 
line 2260, in load
   entry = __import__(self.module_name, globals(),globals(), ['__name__'])
 File 
"/opt/stack/venvs/quantum/local/lib/python2.7/site-packages/quantum/api/__init__.py",
 line 33, in <module>
   sys.modules['quantum.api.extensions'] = extensions
   AttributeError: 'NoneType' object has no attribute 'modules'
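
  For context, the line it dies on is a module-aliasing shim:
  quantum/api/__init__.py registers neutron's modules in sys.modules
  under their old quantum.* names so legacy imports still resolve; the
  AttributeError suggests the name 'sys' was None in that module's
  globals when the line ran. A self-contained illustration of the
  pattern (the alias name here is made up):

      import json
      import sys

      sys.modules['legacy_json'] = json   # hypothetical alias

      import legacy_json                  # resolves to the json module
      assert legacy_json is json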

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1199443/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post 

[Yahoo-eng-team] [Bug 1182732] Re: bad dependency on quantumclient breaks metadata agent

2013-08-08 Thread Robert Collins
Huh, Invalid - how do you get that?

It's either fixed or it's not.

I'm not sure if 2.2.6 was the earliest client release to have the fix.
The current requirements list python-neutronclient>=2.2.3,<3.0.0.
So if 2.2.3 has the requisite code, then this is fix released.
If it doesn't, then this is still a bug.

Bugs should only be made invalid if they are shown to be *not a bug* -
e.g. a) the relevant code has been deleted or b) the bug reporter was
mistaken.

** Changed in: neutron
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1182732

Title:
  bad dependency on quantumclient breaks metadata agent

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  https://review.openstack.org/#/c/24639/ which landed in trunk depends on this 
change in quantumclient
  commit 6f7e76ec3b4777816e0a5d7eaeef0026d75b60e6
  Author: Oleg Bondarev 
  Date:   Tue Mar 12 16:13:02 2013 +0400

  but quantum depends on quantumclient >=2.2.0 - however the
  quantumclient change isn't available in a release, so the quantum
  change should not have been merged - or should have been merged with
  backwards compat.
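
  A quick way to check whether the installed client satisfies a given
  spec (a sketch using pkg_resources; the requirement string is the one
  from this report):

      import pkg_resources

      try:
          pkg_resources.require('python-quantumclient>=2.2.0')
      except (pkg_resources.VersionConflict,
              pkg_resources.DistributionNotFound) as exc:
          print('dependency not satisfied: %s' % exc)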

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1182732/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1213987] Re: Setup fails on requirements with pyudev not found

2013-08-22 Thread Robert Collins
** Changed in: tripleo
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1213987

Title:
  Setup fails on requirements with pyudev not found

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  When trying to install Neutron from master, I get the following error
  on pyudev [1]. The issue has been occurring since this review [2] was
  merged (which fixed bug #1212385)

  [1] : (neutron)bauzas@b019750-ux:~/neutron$ pip install -i 
http://pypi.openstack.org/ neutron/
  Unpacking ./neutron
Running setup.py egg_info for package from 
file:///home/bauzas/neutron/neutron
  [pbr] Processing SOURCES.txt
  warning: LocalManifestMaker: standard file '-c' not found
  
  [pbr] In git context, generating filelist from git
  warning: no files found matching 'AUTHORS'
  warning: no files found matching 'ChangeLog'
  warning: no previously-included files matching '*.pyc' found anywhere in 
distribution
  warning: no files found matching 'AUTHORS'
  warning: no files found matching 'ChangeLog'
  warning: no previously-included files found matching '.gitignore'
  warning: no previously-included files found matching '.gitreview'
  warning: no previously-included files matching '*.pyc' found anywhere in 
distribution
  Downloading/unpacking pyudev (from neutron==2013.2.a313.gfc3ab14)
Could not find any downloads that satisfy the requirement pyudev (from 
neutron==2013.2.a313.gfc3ab14)
  Cleaning up...
  No distributions at all found for pyudev (from neutron==2013.2.a313.gfc3ab14)
  Storing complete log in /home/bauzas/.pip/pip.log

  
  [2] : https://review.openstack.org/#/c/42170/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1213987/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1178529] Re: default bootstack api's ratelimit at too low a rate

2013-08-22 Thread Robert Collins
** Changed in: tripleo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1178529

Title:
  default bootstack api's ratelimit at too low a rate

Status in OpenStack Compute (Nova):
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  Just registering 30 or so nodes with nova baremetal-node-create in a
  loop starts ratelimiting - and the nova client doesn't retry.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1178529/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1184484] Re: Quantum default settings will cause deadlocks due to overflow of sqlalchemy_pool

2013-09-02 Thread Robert Collins
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1184484

Title:
  Quantum default settings will cause deadlocks due to overflow of
  sqlalchemy_pool

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  quantum-server by default will create an sqlalchemy pool of 5
  connections. This will start to fail as more and more data and compute
  nodes are added. Raising it to 40 seems to stop the problem; it may
  need to go higher. Raising it to 20 still results in:

  TimeoutError: QueuePool limit of size 20 overflow 10 reached,
  connection timed out, timeout 30

  Raising it to 40 seems to have restored responsiveness. I suspect it
  may be too low as the number of clients increases.

  [DATABASE]
  sqlalchemy_pool_size = 40
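
  For comparison, the equivalent limits when building an engine directly
  with SQLAlchemy (a sketch; quantum itself reads these values from the
  [DATABASE] section above, and the URL here is illustrative):

      from sqlalchemy import create_engine

      engine = create_engine(
          'mysql://user:password@localhost/quantum',
          pool_size=40,     # sqlalchemy_pool_size
          max_overflow=10,  # extra connections beyond the pool
          pool_timeout=30,  # seconds before the TimeoutError above
      )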

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1184484/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1213967] Re: Seed VM filtering out compute resources

2013-09-09 Thread Robert Collins
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1213967

Title:
  Seed VM filtering out compute resources

Status in OpenStack Compute (Nova):
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  At some stage last week nova started failing to boot baremetal
  instances; from the looks of it, they are being filtered out by the
  compute_capabilities_filter.

  This workaround resolves the problem
  diff --git a/scripts/setup-baremetal b/scripts/setup-baremetal
  index 0a1e46d..132cbc8 100755
  --- a/scripts/setup-baremetal
  +++ b/scripts/setup-baremetal
  @@ -20,6 +20,6 @@ deploy_ramdisk_id=$(glance image-create --name 
bm-deploy-ramdisk --public \
   
   nova flavor-delete baremetal || true
   nova flavor-create baremetal auto $2 $3 $1
  -nova flavor-key baremetal set "cpu_arch"="$arch" \
  +nova flavor-key baremetal set \
   "baremetal:deploy_kernel_id"="$deploy_kernel_id" \
   "baremetal:deploy_ramdisk_id"="$deploy_ramdisk_id"

  
  Although it's probably not the correct fix, 

  I think the problem was caused by this merge 
https://review.openstack.org/#/c/40994/5  
  as cap.cpu_arch doesn't exist

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1213967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1180454] Re: Cannot orchestrate PXE booting from nova

2013-09-14 Thread Robert Collins
(This was in H3).

** Changed in: nova
 Assignee: Devananda van der Veen (devananda) => dkehn (dekehn)

** Changed in: nova
   Status: Incomplete => Fix Released

** Changed in: nova
Milestone: None => havana-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1180454

Title:
  Cannot orchestrate PXE booting from nova

Status in OpenStack Compute (Nova):
  Fix Released
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  Nova baremetal requires PXE booting of nodes, but the network code in
  nova doesn't arrange for PXE DHCP options to be supplied [by quantum].

  In the same manner that the MAC address limitations are exported by
  the driver, DHCP options should be exported by the driver and then
  passed onto the network provider - and for the quantum driver we can
  pass these onto the port for quantum.

  In nova/compute/manager.py:

      def _build_instance(self, context, request_spec, filter_properties,
          ...
          macs = self.driver.macs_for_instance(instance)
          network_info = self._allocate_network(context, instance,
              requested_networks, macs, security_groups)

  I suggest changing that to be something like:

          macs = self.driver.macs_for_instance(instance)
          dhcp_options = self.driver.dhcp_options_for_instance(instance)
          network_info = self._allocate_network(context, instance,
              requested_networks, macs, security_groups, dhcp_options)

  that calls _allocate_network which calls
  self.network_api.allocate_for_instance

  - and self.network_api for quantum environments is
  nova/network/quantumv2/api.py's API instance - so once dhcp_options
  gets down to there, you can use the Quantum APIs directly and poke it
  in when the port is allocated/updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1180454/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1178105] Re: within bootstack, glance seems to die after 24 hours

2013-09-19 Thread Robert Collins
Have not seen this - can't reproduce

** Changed in: tripleo
   Status: Triaged => Invalid

** No longer affects: glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1178105

Title:
  within bootstack, glance seems to die after 24 hours

Status in tripleo - openstack on openstack:
  Invalid

Bug description:
  we've an instance up for 3 days now and glance-reg/glance-api have had
  to be restarted twice, at 24 hour intervals, to unwedge nova etc.
  'glance image-list' returns a 401 when the problem is displaying
  itself. This may be a config issue, or a glance issue.

  I'm adding a task on glance so glance devs can ask for whatever info
  will help with identifying the cause.

To manage notifications about this bug go to:
https://bugs.launchpad.net/tripleo/+bug/1178105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1180454] Re: Cannot orchestrate PXE booting from nova

2013-10-01 Thread Robert Collins
This works in tripleo now; though it's not the default yet.

** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1180454

Title:
  Cannot orchestrate PXE booting from nova

Status in OpenStack Compute (Nova):
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  Nova baremetal requires PXE booting of nodes, but the network code in
  nova doesn't arrange for PXE DHCP options to be supplied [by quantum].

  In the same manner that the MAC address limitations are exported by
  the driver, DHCP options should be exported by the driver and then
  passed onto the network provider - and for the quantum driver we can
  pass these onto the port for quantum.

  In nova/compute/manager.py:

      def _build_instance(self, context, request_spec, filter_properties,
          ...
          macs = self.driver.macs_for_instance(instance)
          network_info = self._allocate_network(context, instance,
              requested_networks, macs, security_groups)

  I suggest changing that to be something like:

          macs = self.driver.macs_for_instance(instance)
          dhcp_options = self.driver.dhcp_options_for_instance(instance)
          network_info = self._allocate_network(context, instance,
              requested_networks, macs, security_groups, dhcp_options)

  that calls _allocate_network which calls
  self.network_api.allocate_for_instance

  - and self.network_api for quantum environments is
  nova/network/quantumv2/api.py's API instance - so once dhcp_options
  gets down to there, you can use the Quantum APIs directly and poke it
  in when the port is allocated/updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1180454/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1180178] Re: Instance IP addresses are re-used even when previous instance could not be powered off

2013-10-01 Thread Robert Collins
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1180178

Title:
  Instance IP addresses are re-used even when previous instance could
  not be powered off

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  Expected:

    When a baremetal instance is deleted, but cannot be powered off, its
  ip addresses should not be assigned to any new instances.

  Actual:
    The still-on baremetal instance's IP may be reassigned to a new instance.

  Environment:
    - nova trunk as it existed a few days ago
    - IPMI power driver
    - quantum-ovs trunk

  Repro:
    - boot a baremetal instance
    - nova delete the instance, but ensure it cannot be powered off by nova
    - boot new instances until a new one gets the same ip - now there are two 
nodes who think they have the same ip.

  nova-list excerpt:

  | eb25c949-e1d4-48a5-a041-8aa8a53becb5 | notcompute-clint2.notcompute | ACTIVE | deleting | Running | ctlplane=10.10.16.139 |
  | 4959c430-7a4a-408a-9a8c-43bb54cea06c | notcompute-clint3.notcompute | ACTIVE | None     | Running | ctlplane=10.10.16.139 |

  nova-compute.log of instance boot followed by instance delete with
  power-off failure:

  /var/log/upstart/nova-compute.log:2013-05-15 01:18:58,432.432 23016 INFO 
nova.virt.baremetal.pxe [-] PXE deploy started for instance 
eb25c949-e1d4-48a5-a041-8aa8a53becb5
  /var/log/upstart/nova-compute.log:2013-05-15 01:19:50,442.442 23016 INFO 
nova.virt.baremetal.pxe [-] PXE deploy completed for instance 
eb25c949-e1d4-48a5-a041-8aa8a53becb5
  /var/log/upstart/nova-compute.log:2013-05-15 03:02:45,763.763 23016 AUDIT 
nova.compute.manager [req-4bb96dfc-7a3e-44b5-86cd-a5e9e28b7064 
87b44714e906420a9e6a07f6835b5b61 c30643483cd849a6b61e98c6ccc03a66] [instance: 
eb25c949-e1d4-48a5-a041-8aa8a53becb5] Terminating instance
  /var/log/upstart/nova-compute.log:2013-05-15 03:02:51,784.784 23016 ERROR 
nova.virt.baremetal.driver [req-4bb96dfc-7a3e-44b5-86cd-a5e9e28b7064 
87b44714e906420a9e6a07f6835b5b61 c30643483cd849a6b61e98c6ccc03a66] Error from 
baremetal driver during destroy: Baremetal power manager failed to stop node 
for instance u'eb25c949-e1d4-48a5-a041-8aa8a53becb5'
  /var/log/upstart/nova-compute.log:2013-05-15 03:02:52,062.062 23016 TRACE 
nova.openstack.common.rpc.amqp InstancePowerOffFailure: Baremetal power manager 
failed to stop node for instance u'eb25c949-e1d4-48a5-a041-8aa8a53becb5'
  /var/log/upstart/nova-compute.log:2013-05-15 03:12:20,978.978 23016 INFO 
nova.compute.manager [-] [instance: eb25c949-e1d4-48a5-a041-8aa8a53becb5] 
During sync_power_state the instance has a pending task. Skip.
  /var/log/upstart/nova-compute.log:2013-05-15 03:22:31,519.519 23016 INFO 
nova.compute.manager [-] [instance: eb25c949-e1d4-48a5-a041-8aa8a53becb5] 
During sync_power_state the instance has a pending task. Skip.
  /var/log/upstart/nova-compute.log:2013-05-15 03:32:42,251.251 23016 INFO 
nova.compute.manager [-] [instance: eb25c949-e1d4-48a5-a041-8aa8a53becb5] 
During sync_power_state the instance has a pending task. Skip.
  /var/log/upstart/nova-compute.log:2013-05-15 03:42:52,216.216 23016 INFO 
nova.compute.manager [-] [instance: eb25c949-e1d4-48a5-a041-8aa8a53becb5] 
During sync_power_state the instance has a pending task. Skip.
  /var/log/upstart/nova-compute.log:2013-05-15 03:53:03,285.285 23016 INFO 
nova.compute.manager [-] [instance: eb25c949-e1d4-48a5-a041-8aa8a53becb5] 
During sync_power_state the instance has a pending task. Skip

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1180178/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1183534] Re: libvirt domain creation errors not debuggable from logs alone

2013-10-01 Thread Robert Collins
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1183534

Title:
  libvirt domain creation errors not debuggable from logs alone

Status in OpenStack Compute (Nova):
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  In nova/virt/libvirt/driver.py, line 2410 - the _create_domain
  function - the XML passed in is used:

      if xml:
          domain = self._conn.defineXML(xml)

  but when an error happens, the value of the xml is not reported, which
  makes reproducing or inspecting the xml for correctness impossible.

  Right now we're debugging an error like:

  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 650, in
  createWithFlags\nif ret == -1: raise libvirtError
  (\'virDomainCreateWithFlags() failed\', dom=self)\n', u'libvirtError:
  internal error Process exited while reading console log output:
  Warning: option deprecated, use lost_tick_policy property of kvm-pit
  instead.\nchardev: opening backend "file" failed\n\n'

  and having the XML would allow reproduction and analysis.
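
  A minimal sketch of the requested behaviour (assuming the
  python-libvirt bindings; the logging style is illustrative, not
  nova's actual code):

      import libvirt

      def create_domain(conn, xml, launch_flags=0):
          try:
              domain = conn.defineXML(xml)
              domain.createWithFlags(launch_flags)
              return domain
          except libvirt.libvirtError:
              # Report the XML so the failure can be reproduced offline.
              print('domain creation failed; XML was:\n%s' % xml)
              raise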

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1183534/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1178919] Re: instances get stuck in 'BUILDING' sometimes

2013-10-01 Thread Robert Collins
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1178919

Title:
  instances get stuck in 'BUILDING' sometimes

Status in OpenStack Compute (Nova):
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  We booted 10 instances at once and 1 had this happen; 20 at once and
  3; 30 at once and 5.

  Logs from the instance UUID in the original description
  /var/log/upstart/nova-compute.log.1.gz:2013-05-11 06:27:15,349.349 23016 INFO 
nova.compute.manager [-] [instance: 0a171cbe-0f3c-40d5-ae8d-606f1dde41ce] 
During sync_power_state the instance has a pending task. Skip.
  ubuntu@foo:~$ date
  Sat May 11 06:48:44 UTC 2013

  we think this is fallout from nodes that were powered on for some
  reason when the deploy started. We're going to add a hard off into the
  code path.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1178919/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1111572] Re: quantum subnet-update can't update allocation-pool

2013-11-07 Thread Robert Collins
** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1111572

Title:
  quantum subnet-update can't update allocation-pool

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The update of the allocation-pool doesn't work:

  # quantum subnet-create --allocation-pool start=10.0.13.200,end=10.0.13.254 
net3 10.0.13.0/24
  Created a new subnet:
  +------------------+------------------------------------------------+
  | Field            | Value                                          |
  +------------------+------------------------------------------------+
  | allocation_pools | {"start": "10.0.13.200", "end": "10.0.13.254"} |
  | cidr             | 10.0.13.0/24                                   |
  | dns_nameservers  |                                                |
  | enable_dhcp      | True                                           |
  | gateway_ip       | 10.0.13.1                                      |
  | host_routes      |                                                |
  | id               | cb4f7e7e-3152-4c5b-a620-9b9bd5b75ac3           |
  | ip_version       | 4                                              |
  | name             |                                                |
  | network_id       | d76bff4a-dff1-436c-845b-b72c9bce8a96           |
  | tenant_id        | 57ffb85101824a73ae4872ab0c6780cf               |
  +------------------+------------------------------------------------+

  # quantum subnet-update cb4f7e7e-3152-4c5b-a620-9b9bd5b75ac3 --allocation-pool start=10.0.13.210,end=10.0.13.254
  Updated subnet: cb4f7e7e-3152-4c5b-a620-9b9bd5b75ac3

  
  # quantum subnet-show cb4f7e7e-3152-4c5b-a620-9b9bd5b75ac3
  +------------------+------------------------------------------------+
  | Field            | Value                                          |
  +------------------+------------------------------------------------+
  | allocation_pools | {"start": "10.0.13.200", "end": "10.0.13.254"} |
  | cidr             | 10.0.13.0/24                                   |
  | dns_nameservers  |                                                |
  | enable_dhcp      | True                                           |
  | gateway_ip       | 10.0.13.1                                      |
  | host_routes      |                                                |
  | id               | cb4f7e7e-3152-4c5b-a620-9b9bd5b75ac3           |
  | ip_version       | 4                                              |
  | name             |                                                |
  | network_id       | d76bff4a-dff1-436c-845b-b72c9bce8a96           |
  | tenant_id        | 57ffb85101824a73ae4872ab0c6780cf               |
  +------------------+------------------------------------------------+
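
  The same update through the Python client looks like this (a hedged
  sketch: module path and auth arguments are illustrative, from the
  quantum-era client):

      from quantumclient.v2_0 import client

      quantum = client.Client(username='admin', password='secret',
                              tenant_name='admin',
                              auth_url='http://localhost:5000/v2.0')
      quantum.update_subnet(
          'cb4f7e7e-3152-4c5b-a620-9b9bd5b75ac3',
          {'subnet': {'allocation_pools': [
              {'start': '10.0.13.210', 'end': '10.0.13.254'}]}})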

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1111572/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251784] Re: nova+neutron scheduling error: Connection to neutron failed: Maximum attempts reached

2013-11-24 Thread Robert Collins
** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251784

Title:
  nova+neutron scheduling error: Connection to neutron failed: Maximum
  attempts reached

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  VMs are failing to schedule with the following error

  2013-11-15 20:50:21.405 ERROR nova.scheduler.filter_scheduler [req-
  d2c26348-53e6-448a-8975-4f22f4e89782 demo demo] [instance: c8069c13
  -593f-48fb-aae9-198961097eb2] Error from last host: devstack-precise-
  hpcloud-az3-662002 (node devstack-precise-hpcloud-az3-662002):
  [u'Traceback (most recent call last):\n', u'  File
  "/opt/stack/new/nova/nova/compute/manager.py", line 1030, in
  _build_instance\nset_access_ip=set_access_ip)\n', u'  File
  "/opt/stack/new/nova/nova/compute/manager.py", line 1439, in _spawn\n
  LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n',
  u'  File "/opt/stack/new/nova/nova/compute/manager.py", line 1436, in
  _spawn\nblock_device_info)\n', u'  File
  "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2100, in
  spawn\nadmin_pass=admin_password)\n', u'  File
  "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2451, in
  _create_image\ncontent=files, extra_md=extra_md,
  network_info=network_info)\n', u'  File
  "/opt/stack/new/nova/nova/api/metadata/base.py", line 165, in
  __init__\n
  ec2utils.get_ip_info_for_instance_from_nw_info(network_info)\n', u'
  File "/opt/stack/new/nova/nova/api/ec2/ec2utils.py", line 149, in
  get_ip_info_for_instance_from_nw_info\nfixed_ips =
  nw_info.fixed_ips()\n', u'  File
  "/opt/stack/new/nova/nova/network/model.py", line 368, in
  _sync_wrapper\nself.wait()\n', u'  File
  "/opt/stack/new/nova/nova/network/model.py", line 400, in wait\n
  self[:] = self._gt.wait()\n', u'  File "/usr/local/lib/python2.7/dist-
  packages/eventlet/greenthread.py", line 168, in wait\nreturn
  self._exit_event.wait()\n', u'  File "/usr/local/lib/python2.7/dist-
  packages/eventlet/event.py", line 120, in wait\n
  current.throw(*self._exc)\n', u'  File "/usr/local/lib/python2.7/dist-
  packages/eventlet/greenthread.py", line 194, in main\nresult =
  function(*args, **kwargs)\n', u'  File
  "/opt/stack/new/nova/nova/compute/manager.py", line 1220, in
  _allocate_network_async\ndhcp_options=dhcp_options)\n', u'  File
  "/opt/stack/new/nova/nova/network/neutronv2/api.py", line 359, in
  allocate_for_instance\nnw_info =
  self._get_instance_nw_info(context, instance, networks=nets)\n', u'
  File "/opt/stack/new/nova/nova/network/api.py", line 49, in wrapper\n
  res = f(self, context, *args, **kwargs)\n', u'  File
  "/opt/stack/new/nova/nova/network/neutronv2/api.py", line 458, in
  _get_instance_nw_info\nnw_info =
  self._build_network_info_model(context, instance, networks)\n', u'
  File "/opt/stack/new/nova/nova/network/neutronv2/api.py", line 1022,
  in _build_network_info_model\nsubnets =
  self._nw_info_get_subnets(context, port, network_IPs)\n', u'  File
  "/opt/stack/new/nova/nova/network/neutronv2/api.py", line 924, in
  _nw_info_get_subnets\nsubnets =
  self._get_subnets_from_port(context, port)\n', u'  File
  "/opt/stack/new/nova/nova/network/neutronv2/api.py", line 1066, in
  _get_subnets_from_port\ndata =
  neutronv2.get_client(context).list_ports(**search_opts)\n', u'  File
  "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py",
  line 111, in with_params\nret = self.function(instance, *args,
  **kwargs)\n', u'  File "/opt/stack/new/python-
  neutronclient/neutronclient/v2_0/client.py", line 306, in list_ports\n
  **_params)\n', u'  File "/opt/stack/new/python-
  neutronclient/neutronclient/v2_0/client.py", line 1250, in list\n
  for r in self._pagination(collection, path, **params):\n', u'  File
  "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py",
  line 1263, in _pagination\nres = self.get(path, params=params)\n',
  u'  File "/opt/stack/new/python-
  neutronclient/neutronclient/v2_0/client.py", line 1236, in get\n
  headers=headers, params=params)\n', u'  File "/opt/stack/new/python-
  neutronclient/neutronclient/v2_0/client.py", line 1228, in
  retry_request\nraise exceptions.ConnectionFailed(reason=_("Maximum
  attempts reached"))\n', u'ConnectionFailed: Connection to neutron
  failed: Maximum attempts reached\n']

  
  Connection to neutron failed: Maximum attempts reached

  http://logs.openstack.org/96/56496/1/gate/gate-tempest-devstack-vm-
  neutron-
  isolated/8df6c6c/logs/screen-n-sch.txt.gz#_2013-11-15_20_50_21_405

  
  logstash query: "Connection to neutron failed: Maximum attempts reached"  AND 

[Yahoo-eng-team] [Bug 1255131] Re: Can't boot instance on cd-overcloud

2013-11-26 Thread Robert Collins
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255131

Title:
  Can't boot instance on cd-overcloud

Status in OpenStack Compute (Nova):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  Error while trying to boot the demo VM on the overcloud

  Happening on cd-overcloud and local devtest

  2013-11-26 10:55:57,790.790 2785 ERROR nova.compute.manager [-] Instance 
failed network setup after 1 attempt(s)
  2013-11-26 10:55:57,790.790 2785 TRACE nova.compute.manager Traceback (most 
recent call last):
  2013-11-26 10:55:57,790.790 2785 TRACE nova.compute.manager   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 1219, in _allocate_network_async
  2013-11-26 10:55:57,790.790 2785 TRACE nova.compute.manager 
dhcp_options=dhcp_options)
  2013-11-26 10:55:57,790.790 2785 TRACE nova.compute.manager   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/network/neutronv2/api.py",
 line 295, in allocate_for_instance
  2013-11-26 10:55:57,790.790 2785 TRACE nova.compute.manager 
security_group_id=security_group)
  2013-11-26 10:55:57,790.790 2785 TRACE nova.compute.manager 
SecurityGroupNotFound: Security group default not found.
  2013-11-26 10:55:57,790.790 2785 TRACE nova.compute.manager

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255131/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1238311] Re: Make nova baremetal support ephemeral disks

2013-12-01 Thread Robert Collins
** Changed in: nova
   Status: In Progress => Fix Committed

** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1238311

Title:
  Make nova baremetal support ephemeral disks

Status in Ironic (Bare Metal Provisioning):
  Triaged
Status in OpenStack Compute (Nova):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  Right now nova baremetal just makes root and swap partitions, we want
  it to also create 'ephemeral partitions.' This is the first step in
  being able to rebuild a baremetal instance with a new image but
  preserve the data on the ephemeral disk.

  We want the partition order to be: ephemeral, swap, root.
  The reason we want root last is that we used to put root at the start to
  stop it resizing, so we could use all the remaining space for a new
  partition; this use of ephemeral makes that explicit. Instead, having the
  extra space available to allow root to resize will make it safer to do
  takeovernode with slightly larger images.
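
  For concreteness, the requested layout would look something like this
  (a sketch only - the device names are illustrative, not from this bug):

    /dev/sda1  ephemeral  (first, so it survives a rebuild untouched)
    /dev/sda2  swap
    /dev/sda3  root       (last, so spare space at the end lets it grow)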

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1238311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257295] Re: openstack is full of misspelled words

2015-05-05 Thread Robert Collins
** Changed in: pbr
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257295

Title:
  openstack is full of misspelled words

Status in OpenStack Telemetry (Ceilometer):
  Invalid
Status in Designate:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in The Oslo library incubator:
  Fix Released
Status in Python Build Reasonableness:
  Fix Released
Status in Designate Client:
  Fix Released
Status in Python client library for Nova:
  Fix Released

Bug description:
  List of known misspellings

  http://paste.openstack.org/show/54354

  Generated with:
pip install misspellings
git ls-files | grep -v locale | misspellings -f -

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1257295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260495] Re: Setting autodoc_tree_index_modules makes documentation builds fail

2015-05-05 Thread Robert Collins
** Changed in: pbr
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1260495

Title:
  Setting autodoc_tree_index_modules makes documentation builds fail

Status in OpenStack Identity (Keystone):
  Confirmed
Status in Python Build Reasonableness:
  Fix Released
Status in Python client library for Keystone:
  Confirmed

Bug description:
  The arguments originally being passed into sphinx.apidoc specified '.'
  as the path to index. Unfortunately this includes the setup.py module.
  Sphinx dies while trying to process the setup.rst likely because the
  setup.py module calls setuptools.setup() when imported causing some
  sort of recursion. The final result is something like:

2013-12-08 21:08:12.088 | reading sources... [ 80%] api/setup
2013-12-08 21:08:12.100 | /usr/lib/python2.7/distutils/dist.py:267: 
UserWarning: Unknown distribution option: 'setup_requires'
2013-12-08 21:08:12.101 |   warnings.warn(msg)
2013-12-08 21:08:12.102 | /usr/lib/python2.7/distutils/dist.py:267: 
UserWarning: Unknown distribution option: 'pbr'
2013-12-08 21:08:12.102 |   warnings.warn(msg)
2013-12-08 21:08:12.103 | usage: setup.py [global_opts] cmd1 [cmd1_opts] 
[cmd2 [cmd2_opts] ...]
2013-12-08 21:08:12.103 |or: setup.py --help [cmd1 cmd2 ...]
2013-12-08 21:08:12.104 |or: setup.py --help-commands
2013-12-08 21:08:12.104 |or: setup.py cmd --help
2013-12-08 21:08:12.104 | 
2013-12-08 21:08:12.105 | error: invalid command 'build_sphinx'
2013-12-08 21:08:12.622 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-keystone-docs/.tox/venv/bin/python setup.py 
build_sphinx'
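
  For reference, pbr later grew a knob to exclude modules from the
  generated apidoc tree; a minimal sketch of the workaround in setup.cfg,
  assuming a pbr release that supports autodoc_tree_excludes:

    [pbr]
    autodoc_tree_excludes =
        setup.py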

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1260495/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468976] Re: requirements file require higher pbr version

2015-07-01 Thread Robert Collins
*** This bug is a duplicate of bug 1468808 ***
https://bugs.launchpad.net/bugs/1468808

** This bug is no longer a duplicate of bug 1468339
   pip install requirements error with python_version=='2.7'
** This bug has been marked a duplicate of bug 1468808
   stack.sh downgrades pbr

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1468976

Title:
  requirements file require higher pbr version

Status in OpenStack Identity (Keystone):
  Fix Committed

Bug description:
  currently requirements.txt has the lines:
  pbr<2.0,>=0.11
  keystonemiddleware>=1.5.0
  If you are using keystonemiddleware-1.6.1, which requires pbr<1.0,>=0.6,
  then only pbr-0.11.0 can be installed. But pbr-0.11.0 is NOT compatible
  with the line:
  python-ldap>=2.4;python_version=='2.7'
  This is the cause of bug https://bugs.launchpad.net/devstack/+bug/1468339

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1468976/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1292105] Re: CI failed pinging overcloud instance

2014-06-19 Thread Robert Collins
** Also affects: neutron
   Importance: Undecided
   Status: New

** Summary changed:

- CI failed pinging overcloud instance
+ ovs tunnel state not syncing (failure pinging overcloud instance)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1292105

Title:
  ovs tunnel state not syncing (failure pinging overcloud instance)

Status in OpenStack Neutron (virtual network service):
  New
Status in tripleo - openstack on openstack:
  In Progress

Bug description:
  I saw this in a recent CI overcloud run:
  http://logs.openstack.org/66/74866/8/check-tripleo/check-tripleo-
  overcloud-precise/aa490f1/console.html

  2014-03-12 20:01:46.509 | Timing out after 300 seconds:
  2014-03-12 20:01:46.509 | COMMAND=ping -c 1 192.0.2.46
  2014-03-12 20:01:46.509 | OUTPUT=PING 192.0.2.46 (192.0.2.46) 56(84) bytes of 
data.
  2014-03-12 20:01:46.509 | From 192.0.2.46 icmp_seq=1 Destination Host 
Unreachable

  It appears as though everything ran fine up until it tried to ping the
  booted overcloud instance.  I'm fairly certain it has nothing to do
  with my change, so I wanted to open a bug to track it in case anyone
  else runs into a similar problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1292105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338403] [NEW] circular reference detected with exception

2014-07-06 Thread Robert Collins
Public bug reported:

2014-07-07 02:10:08.727 10283 ERROR oslo.messaging.rpc.dispatcher 
[req-54c68afe-91a8-4a99-86e8-785c0abf7688 ] Exception during message handling: 
Circular reference detected
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 133, in _dispatch_and_reply
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 176, in _dispatch
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 122, in _do_dispatch
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/exception.py", 
line 88, in wrapped
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher payload)
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/exception.py", 
line 71, in wrapped
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 336, in decorated_function
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
function(self, context, *args, **kwargs)
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/utils.py",
 line 437, in __exit__
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
exc_tb=exc_tb)
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/objects/base.py", 
line 142, in wrapper
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher args, 
kwargs)
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/conductor/rpcapi.py",
 line 355, in object_class_action
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
objver=objver, args=args, kwargs=kwargs)
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/client.py",
 line 150, in call
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
wait_for_reply=True, timeout=timeout)
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/transport.py",
 line 90, in _send
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout)
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
 line 412, in send
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher return 
self._send(target, ctxt, message, wait_for_reply, timeout)
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
 line 385, in _send
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher msg = 
rpc_common.serialize_msg(msg)
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/_drivers/common.py",
 line 462, in serialize_msg
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
_MESSAGE_KEY: jsonutils.dumps(raw_msg)}
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/openstack/common/jsonutils.py",
 line 164, in dumps
2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher return 
json.dumps(value, default=default, **kwargs)

[Yahoo-eng-team] [Bug 1338403] Re: circular reference detected with exception

2014-07-06 Thread Robert Collins
I instrumented the error:
value is 
'{\'_context_roles\': [u\'_member_\', u\'admin\'],
  \'_msg_id\': \'4dfe58bf8be54f9cb26e669bf626d20c\', 
  \'_context_request_id\': u\'req-e559c418-f387-4b2e-b7e2-27d4809e7b4e\', 
  \'_context_service_catalog\': [], 
 \'args\': {\'objver\': \'1.0\', 
  \'objmethod\': \'event_finish_with_failure\',
  \'args\': (\'e2019ea2-bd2c-479a-8908-75abbeb4833c\', 
\'compute_terminate_instance\'),
 \'objname\': \'InstanceActionEvent\',
\'kwargs\': {\'exc_val\': NovaException(u"Error destroying the 
instance on node 9e003bf8-9890-4566-8307-d4a250efd901. Provision state still 
\'deleting\'.",),
   \'exc_tb\': }}, 
   \'_unique_id\': \'91ad34be58b74b4cb161d5961a706163\', 
   \'_context_user\': u\'d2caebd29b6f448e8fac0abd24b25810\',
   \'_context_user_id\': u\'d2caebd29b6f448e8fac0abd24b25810\', 
  \'_context_project_name\': u\'admin\', 
  \'_context_read_deleted\': u\'no\',
   \'_reply_q\': \'reply_d89d8e5285fc4d1f8d5ae5a5a7b5fe22\', 
  \'_context_auth_token\': u\'MIIPnrq\', 
  \'_context_tenant\': u\'1767f7b0fe0d4ca39f4f9caaf884b630\', 
  \'_context_instance_lock_checked\': False,
   \'_context_is_admin\': True,
  \'version\': \'2.0\',
   \'_context_project_id\': u\'1767f7b0fe0d4ca39f4f9caaf884b630\',
   \'_context_timestamp\': \'2014-07-07T05:41:50.904570\', 
  \'_context_user_name\': u\'admin\', 
  \'method\': \'object_class_action\',
   \'_context_remote_address\': u\'192.168.122.1\'}'

It's not obvious to me where the circular reference is - but I strongly
suspect it's in the exc_tb.
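
A minimal sketch of how the stdlib reports this (plain json rather than
the oslo serializer) - any cycle in the object graph handed to
json.dumps produces exactly this message, and the frame locals reachable
from exc_tb can form such a cycle:

    import json

    payload = {}
    payload["self"] = payload  # a cycle, like tb -> frame -> locals -> tb

    try:
        json.dumps(payload)
    except ValueError as e:
        print(e)  # "Circular reference detected"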

** Also affects: ironic
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338403

Title:
  circular reference detected with exception

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  2014-07-07 02:10:08.727 10283 ERROR oslo.messaging.rpc.dispatcher 
[req-54c68afe-91a8-4a99-86e8-785c0abf7688 ] Exception during message handling: 
Circular reference detected
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 133, in _dispatch_and_reply
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 176, in _dispatch
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 122, in _do_dispatch
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/exception.py", 
line 88, in wrapped
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/exception.py", 
line 71, in wrapped
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 336, in decorated_function
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
function(self, context, *args, **kwargs)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/utils.py",
 line 437, in __exit__
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
exc_tb=exc_tb)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/objects/base.py", 
line 142, in wrapper
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher args, 
kwargs)
  2014-07-07 02:10:08.7

[Yahoo-eng-team] [Bug 1231351] Re: nova-bm keeps a copy of baremetal image after deployment

2014-07-08 Thread Robert Collins
This is fixed for Ironic but we won't be trying to fix it for nova-bm -
thats deprecated.

** Changed in: tripleo
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1231351

Title:
  nova-bm keeps a copy of baremetal image after deployment

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Compute (Nova):
  Triaged
Status in tripleo - openstack on openstack:
  Won't Fix

Bug description:
  this is unneeded, it would be equivalent to nova vm keeping a pristine
  copy of every image deployed - glance is the right place to do that.
  This pushes the disk requirements for bare metal hypervisors way
  up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1231351/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341268] [NEW] mac_address port attribute is read-only

2014-07-13 Thread Robert Collins
Public bug reported:

Ironic uses Neutron for IPAM - IP address allocation and DHCP, same as
nova with e.g. KVM or Xen. Unlike a virtual machine hypervisor, Ironics
real machines sometimes develop hardware faults, and parts need to be
replaced - and when that happens the port MAC changes. Having the port's
mac address be read-only makes this much more intrusive than it would
otherwise be - we have to tear down and rebuild the whole machine,
rather than just replace the card and tell Neutron the new MAC.
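
For concreteness, the operation we want to be able to perform after a
NIC swap would look something like this (a sketch using
python-neutronclient; the port UUID and MAC below are placeholders, not
values from this bug):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://192.0.2.1:5000/v2.0')
    # Point the existing port at the replacement NIC's MAC.
    neutron.update_port('PORT-UUID',
                        {'port': {'mac_address': 'aa:bb:cc:dd:ee:ff'}})

Today that update is rejected because mac_address is read-only; allowing
it would let us swap the card in place.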

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1341268

Title:
  mac_address port attribute is read-only

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Ironic uses Neutron for IPAM - IP address allocation and DHCP, same as
  nova with e.g. KVM or Xen. Unlike a virtual machine hypervisor,
  Ironic's real machines sometimes develop hardware faults, and parts
  need to be replaced - and when that happens the port MAC changes.
  Having the port's mac address be read-only makes this much more
  intrusive than it would otherwise be - we have to tear down and
  rebuild the whole machine, rather than just replace the card and tell
  Neutron the new MAC.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1341268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341420] [NEW] gap between scheduler selection and claim causes spurious failures when the instance is the last one to fit

2014-07-13 Thread Robert Collins
Public bug reported:

There is a race between the scheduler in select_destinations, which
selects a set of hosts, and the nova compute manager, which claims
resources on those hosts when building the instance. The race is
particularly noticeable with Ironic, where every request will consume a
full host, but can turn up on libvirt etc too. Multiple schedulers will
likely exacerbate this too unless they are in a version of python with
randomised dictionary ordering, in which case they will make it better
:).


I've put https://review.openstack.org/106677 up to remove a comment which comes 
from before we introduced this race.

One mitigating aspect to the race in the filter scheduler _schedule
method attempts to randomly select hosts to avoid returning the same
host in repeated requests, but the default minimum set it selects from
is size 1 - so when heat requests a single instance, the same candidate
is chosen every time. Setting that number higher can avoid all
concurrent requests hitting the same host, but it will still be a race,
and still likely to fail fairly hard at near-capacity situations (e.g.
deploying all machines in a cluster with Ironic and Heat).

Folk wanting to reproduce this: take a decent size cloud - e.g. 5 or 10
hypervisor hosts (KVM is fine). Deploy up to 1 VM left of capacity on
each hypervisor. Then deploy a bunch of VMs one at a time but very close
together - e.g. use the python API to get cached keystone credentials,
and boot 5 in a loop.

If using Ironic you will want https://review.openstack.org/106676 to let
you see which host is being returned from the selection.
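
A minimal sketch of that reproduction loop (novaclient; the image and
flavor IDs are placeholders - pick a flavor that nearly fills a host):

    from novaclient import client as nova_client

    nova = nova_client.Client('2', 'admin', 'secret', 'admin',
                              auth_url='http://192.0.2.1:5000/v2.0')
    IMAGE_ID = '...'   # an image already in glance
    FLAVOR_ID = '...'  # a flavor that (nearly) fills a host
    # Issue the boots close together on cached credentials so the
    # scheduler sees them all before any claim lands on a compute host.
    servers = [nova.servers.create('race-test-%d' % i, IMAGE_ID, FLAVOR_ID)
               for i in range(5)]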

Possible fixes:
 - have the scheduler be a bit smarter about returning hosts - e.g. track 
destination selection counts since the last refresh and weight hosts by that 
count as well
 - reinstate actioning claims into the scheduler, allowing the audit to correct 
any claimed-but-not-started resource counts asynchronously
 - special case the retry behaviour if there are lots of resources available 
elsewhere in the cluster.

Stats wise, I just tested a 29 instance deployment with ironic and a
heat stack, with 45 machines to deploy onto (so 45 hosts in the
scheduler set) and 4 failed with this race - which means they rescheduled
and failed 3 times each - or 12 cases of scheduler racing *at minimum*.

background chat

15:43 < lifeless> mikal: around? I need to sanity check something
15:44 < lifeless> ulp, nope, am sure of it. filing a bug.
15:45 < mikal> lifeless: ok
15:46 < lifeless> mikal: oh, you're here, I will run it past you :)
15:46 < lifeless> mikal: if you have ~5m
15:46 < mikal> Sure
15:46 < lifeless> so, symptoms
15:46 < lifeless> nova boot <...> --num-instances 45 -> works fairly reliably. 
Some minor timeout related things to fix but nothing dramatic.
15:47 < lifeless> heat create-stack <...> with a stack with 45 instances in it 
-> about 50% of instances fail to come up
15:47 < lifeless> this is with Ironic
15:47 < mikal> Sure
15:47 < lifeless> the failure on all the instances is the retry-three-times 
failure-of-death
15:47 < lifeless> what I believe is happening is this
15:48 < lifeless> the scheduler is allocating the same weighed list of hosts 
for requests that happen close enough together
15:49 < lifeless> and I believe its able to do that because the target hosts 
(from select_destinations) need to actually hit the compute node manager and 
have 
15:49 < lifeless> with rt.instance_claim(context, instance, 
limits):
15:49 < lifeless> happen in _build_and_run_instance
15:49 < lifeless> before the resource usage is assigned
15:49 < mikal> Is heat making 45 separate requests to the nova API?
15:49 < lifeless> eys
15:49 < lifeless> yes
15:49 < lifeless> thats the key difference
15:50 < lifeless> same flavour, same image
15:50 < openstackgerrit> Sam Morrison proposed a change to openstack/nova: 
Remove cell api overrides for lock and unlock  
https://review.openstack.org/89487
15:50 < mikal> And you have enough quota for these instances, right?
15:50 < lifeless> yes
15:51 < mikal> I'd have to dig deeper to have an answer, but it sure does seem 
worth filing a bug for
15:51 < lifeless> my theory is that there is enough time between 
select_destinations in the conductor, and _build_and_run_instance in compute 
for another request to come in the front door and be scheduled to the same host
15:51 < mikal> That seems possible to me
15:52 < lifeless> I have no idea right now about how to fix it (other than to 
have the resources provisionally allocated by the scheduler before it sends a 
reply), but I am guessing that might be contentious
15:52 < mikal> I can't instantly think of a fix though -- we've avoided queue 
like behaviour for scheduling
15:52 < mikal> How big is the clsuter compared with 45 instances?
15:52 < mikal> Is it approximately the same size as that?
15:52 < lifeless> (by provisionally allocated, I mean 'claim them and let the 
audit in 60 seconds fix it up if they are not actually 

[Yahoo-eng-team] [Bug 1342919] Re: instances rescheduled after building network info do not update the MAC

2014-07-16 Thread Robert Collins
Looking at the nova code, it only throws it away if the error is:

    except (exception.InstanceNotFound,
            exception.UnexpectedDeletingTaskStateError):

and not on the other code paths. That makes this a nova or nova-ironic-
driver-interaction bug: adding a nova task.

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1342919

Title:
  instances rescheduled after building network info do not update the
  MAC

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Triaged
Status in OpenStack Compute (Nova):
  New

Bug description:
  This is weird - Ironic has used the mac from a different node (which
  quite naturally leads to failures to boot!)

  nova list | grep spawn
  | 6c364f0f-d4a0-44eb-ae37-e012bbdd368c | 
ci-overcloud-NovaCompute3-zmkjp5aa6vgf  | BUILD  | spawning   | NOSTATE | 
ctlplane=10.10.16.137 |

   nova show 6c364f0f-d4a0-44eb-ae37-e012bbdd368c | grep hyperv
   | OS-EXT-SRV-ATTR:hypervisor_hostname  | 
b07295ee-1c09-484c-9447-10b9efee340c |

   neutron port-list | grep 137
   | 272f2413-0309-4e8b-9a6d-9cb6fdbe978d || 
78:e7:d1:23:90:0d | {"subnet_id": "a6ddb35e-305e-40f1-9450-7befc8e1af47", 
"ip_address": "10.10.16.137"} |

  ironic node-show b07295ee-1c09-484c-9447-10b9efee340c | grep wait
   | provision_state| wait call-back
 |

  ironic port-list | grep 78:e7:d1:23:90:0d  # from neutron
  | 33ab97c0-3de9-458a-afb7-8252a981b37a | 78:e7:d1:23:90:0d |

  ironic port-show 33ab97c0-3de9-458a-afb7-8252a981
  ++---+
  | Property   | Value |
  ++---+
  | node_uuid  | 69dc8c40-dd79-4ed6-83a9-374dcb18c39b  |  # 
Ruh-roh, wrong node!
  | uuid   | 33ab97c0-3de9-458a-afb7-8252a981b37a  |
  | extra  | {u'vif_port_id': u'aad5ee6b-52a3-4f8b-8029-7b8f40e7b54e'} |
  | created_at | 2014-07-08T23:09:16+00:00 |
  | updated_at | 2014-07-16T01:23:23+00:00 |
  | address| 78:e7:d1:23:90:0d |
  ++---+

  
  ironic port-list | grep 78:e7:d1:23:9b:1d  # This is the MAC my hardware list 
says the node should have
  | caba5b36-f518-43f2-84ed-0bc516cc89df | 78:e7:d1:23:9b:1d |
  # ironic port-show caba5b36-f518-43f2-84ed-0bc516cc
  ++---+
  | Property   | Value |
  ++---+
  | node_uuid  | b07295ee-1c09-484c-9447-10b9efee340c  |  # 
and tada right node
  | uuid   | caba5b36-f518-43f2-84ed-0bc516cc89df  |
  | extra  | {u'vif_port_id': u'272f2413-0309-4e8b-9a6d-9cb6fdbe978d'} |
  | created_at | 2014-07-08T23:08:26+00:00 |
  | updated_at | 2014-07-16T19:07:56+00:00 |
  | address| 78:e7:d1:23:9b:1d |
  ++---+

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1342919/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342919] Re: instances rescheduled after building network info do not update the MAC

2014-07-17 Thread Robert Collins
** Changed in: nova
   Importance: Undecided => High

** Tags added: ironic

** No longer affects: ironic

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1342919

Title:
  instances rescheduled after building network info do not update the
  MAC

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  This is weird - Ironic has used the mac from a different node (which
  quite naturally leads to failures to boot!)

  nova list | grep spawn
  | 6c364f0f-d4a0-44eb-ae37-e012bbdd368c | 
ci-overcloud-NovaCompute3-zmkjp5aa6vgf  | BUILD  | spawning   | NOSTATE | 
ctlplane=10.10.16.137 |

   nova show 6c364f0f-d4a0-44eb-ae37-e012bbdd368c | grep hyperv
   | OS-EXT-SRV-ATTR:hypervisor_hostname  | 
b07295ee-1c09-484c-9447-10b9efee340c |

   neutron port-list | grep 137
   | 272f2413-0309-4e8b-9a6d-9cb6fdbe978d || 
78:e7:d1:23:90:0d | {"subnet_id": "a6ddb35e-305e-40f1-9450-7befc8e1af47", 
"ip_address": "10.10.16.137"} |

  ironic node-show b07295ee-1c09-484c-9447-10b9efee340c | grep wait
   | provision_state| wait call-back
 |

  ironic port-list | grep 78:e7:d1:23:90:0d  # from neutron
  | 33ab97c0-3de9-458a-afb7-8252a981b37a | 78:e7:d1:23:90:0d |

  ironic port-show 33ab97c0-3de9-458a-afb7-8252a981
  ++---+
  | Property   | Value |
  ++---+
  | node_uuid  | 69dc8c40-dd79-4ed6-83a9-374dcb18c39b  |  # 
Ruh-roh, wrong node!
  | uuid   | 33ab97c0-3de9-458a-afb7-8252a981b37a  |
  | extra  | {u'vif_port_id': u'aad5ee6b-52a3-4f8b-8029-7b8f40e7b54e'} |
  | created_at | 2014-07-08T23:09:16+00:00 |
  | updated_at | 2014-07-16T01:23:23+00:00 |
  | address| 78:e7:d1:23:90:0d |
  ++---+

  
  ironic port-list | grep 78:e7:d1:23:9b:1d  # This is the MAC my hardware list 
says the node should have
  | caba5b36-f518-43f2-84ed-0bc516cc89df | 78:e7:d1:23:9b:1d |
  # ironic port-show caba5b36-f518-43f2-84ed-0bc516cc
  ++---+
  | Property   | Value |
  ++---+
  | node_uuid  | b07295ee-1c09-484c-9447-10b9efee340c  |  # 
and tada right node
  | uuid   | caba5b36-f518-43f2-84ed-0bc516cc89df  |
  | extra  | {u'vif_port_id': u'272f2413-0309-4e8b-9a6d-9cb6fdbe978d'} |
  | created_at | 2014-07-08T23:08:26+00:00 |
  | updated_at | 2014-07-16T19:07:56+00:00 |
  | address| 78:e7:d1:23:9b:1d |
  ++---+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1342919/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346494] [NEW] l3 agent gw port missing vlan tag for vlan provider network

2014-07-21 Thread Robert Collins
Public bug reported:

Hi, I have a provider network with my floating NAT range on it and a vlan 
segmentation id:
neutron net-show ext-net
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| id| f8ea424f-fcbe-4d57-9f17-5c576bf56e60 |
| name  | ext-net  |
| provider:network_type | vlan |
| provider:physical_network | datacentre   |
| provider:segmentation_id  | 25   |
| router:external   | True |
| shared| False|
| status| ACTIVE   |
| subnets   | 391829e1-afc5-4280-9cd9-75f554315e82 |
| tenant_id | e23f57e1d6c54398a68354adf522a36d |
+---+--+

My ovs agent config:

cat /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini 
[DATABASE]
sql_connection = mysql://.@localhost/ovs_neutron?charset=utf8

reconnect_interval = 2

[OVS]
bridge_mappings = datacentre:br-ex
network_vlan_ranges = datacentre

tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.10.16.151


[AGENT]
polling_interval = 2

[SECURITYGROUP]
firewall_driver = 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
root@ci-overcloud-controller0-ydt5on7wojsb:~# 

But, the thing is, the port created in ovs is missing the tag:
Bridge br-ex
    Port "qg-d8c27507-14"
        Interface "qg-d8c27507-14"
            type: internal

And we (As expected) are seeing tagged frames in tcpdump:
19:37:16.107288 20:fd:f1:b6:f5:16 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q 
(0x8100), length 68: vlan 25, p 0, ethertype ARP, Request who-has 138.35.77.67 
tell 138.35.77.1, length 50

rather than untagged frames for the vlan 25.

Running ovs-vsctl set port qg-d8c27507-14 tag=25 makes things work, but
the agent should do this, no?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346494

Title:
  l3 agent gw port missing vlan tag for vlan provider network

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hi, I have a provider network with my floating NAT range on it and a vlan 
segmentation id:
  neutron net-show ext-net
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | id| f8ea424f-fcbe-4d57-9f17-5c576bf56e60 |
  | name  | ext-net  |
  | provider:network_type | vlan |
  | provider:physical_network | datacentre   |
  | provider:segmentation_id  | 25   |
  | router:external   | True |
  | shared| False|
  | status| ACTIVE   |
  | subnets   | 391829e1-afc5-4280-9cd9-75f554315e82 |
  | tenant_id | e23f57e1d6c54398a68354adf522a36d |
  +---+--+

  My ovs agent config:

  cat /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini 
  [DATABASE]
  sql_connection = mysql://.@localhost/ovs_neutron?charset=utf8

  reconnect_interval = 2

  [OVS]
  bridge_mappings = datacentre:br-ex
  network_vlan_ranges = datacentre

  tenant_network_type = gre
  tunnel_id_ranges = 1:1000
  enable_tunneling = True
  integration_bridge = br-int
  tunnel_bridge = br-tun
  local_ip = 10.10.16.151

  
  [AGENT]
  polling_interval = 2

  [SECURITYGROUP]
  firewall_driver = 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
  root@ci-overcloud-controller0-ydt5on7wojsb:~# 

  But, the thing is, the port created in ovs is missing the tag:
  Bridge br-ex
      Port "qg-d8c27507-14"
          Interface "qg-d8c27507-14"
              type: internal

  And we (As expected) are seeing tagged frames in tcpdump:
  19:37:16.107288 20:fd:f1:b6:f5:16 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q 
(0x8100), length 68: vlan 25, p 0, ethertype ARP, Request who-has 138.35.77.67 
tell 138.35.77.1, length 50

  rather than unt

[Yahoo-eng-team] [Bug 1353953] Re: guest instance ping timeout on overcloud (Upstream CI failure)

2014-08-07 Thread Robert Collins
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1353953

Title:
  Race between neutron-server and l3-agent

Status in OpenStack Neutron (virtual network service):
  New
Status in tripleo - openstack on openstack:
  In Progress

Bug description:
  http://logs.openstack.org/58/87758/24/check-tripleo/check-tripleo-
  novabm-overcloud-f20-nonha/848e217/console.html

  2014-08-07 10:35:52.753 | + wait_for 30 10 ping -c 1 192.0.2.46
  2014-08-07 10:42:23.169 | Timing out after 300 seconds:
  2014-08-07 10:42:23.169 | COMMAND=ping -c 1 192.0.2.46
  2014-08-07 10:42:23.169 | OUTPUT=PING 192.0.2.46 (192.0.2.46) 56(84) bytes of 
data.
  2014-08-07 10:42:23.169 | From 192.0.2.46 icmp_seq=1 Destination Host 
Unreachable
  2014-08-07 10:42:23.169 | 
  2014-08-07 10:42:23.169 | --- 192.0.2.46 ping statistics ---

  looks like neutron dhcp agent issues

  http://logs.openstack.org/58/87758/24/check-tripleo/check-tripleo-
  novabm-overcloud-f20-nonha/848e217/logs/overcloud-controller0_logs
  /neutron-dhcp-agent.txt.gz

  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv sudo[14027]: neutron : 
TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/neutron-rootwrap 
/etc/neutron/rootwrap.conf ip netns exec 
qdhcp-09fcf8a1-ffd3-4f99-869a-8b227de009f6 ip link set tap7e59533d-32 up
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
2014-08-07 10:31:10.476 12316 ERROR neutron.agent.linux.utils 
[req-fdebecfd-81d2-48c2-8765-7545e1e9dbb1 None]
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qdhcp-09fcf8a1-ffd3-4f99-869a-8b227de009f6', 'ip', 
'link', 'set', 'tap7e59533d-32', 'up']
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
Exit code: 1
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
Stdout: ''
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
Stderr: 'Cannot open network namespace 
"qdhcp-09fcf8a1-ffd3-4f99-869a-8b227de009f6": No such file or directory\n'
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv sudo[14032]: neutron : 
TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/neutron-rootwrap 
/etc/neutron/rootwrap.conf ip netns exec 
qdhcp-09fcf8a1-ffd3-4f99-869a-8b227de009f6 ip -o link show tap7e59533d-32
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
2014-08-07 10:31:10.596 12316 ERROR neutron.agent.linux.utils 
[req-fdebecfd-81d2-48c2-8765-7545e1e9dbb1 None]
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qdhcp-09fcf8a1-ffd3-4f99-869a-8b227de009f6', 'ip', 
'-o', 'link', 'show', 'tap7e59533d-32']
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
Exit code: 1
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
Stdout: ''
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
Stderr: 'Cannot open network namespace 
"qdhcp-09fcf8a1-ffd3-4f99-869a-8b227de009f6": No such file or directory\n'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1353953/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355649] [NEW] L3 scheduler additions to support DVR migration fails

2014-08-12 Thread Robert Collins
Public bug reported:

I have a running cloud (off of trunk) which I'm upgrading, and it is failing to 
apply neutron migrations:
+ neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.migration] Running upgrade 31d7f831a591 -> 5589aa32bf80, L3 
scheduler additions to support DVR
Traceback (most recent call last):
  File "/usr/local/bin/neutron-db-manage", line 10, in 
sys.exit(main())
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/db/migration/cli.py",
 line 175, in main
CONF.command.func(config, CONF.command.name)
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/db/migration/cli.py",
 line 85, in do_upgrade_downgrade
do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/db/migration/cli.py",
 line 63, in do_alembic_command
getattr(alembic_command, cmd)(config, *args, **kwargs)
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/alembic/command.py",
 line 125, in upgrade
script.run_env()
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/alembic/script.py", 
line 203, in run_env
util.load_python_file(self.dir, 'env.py')
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/alembic/util.py", 
line 215, in load_python_file
module = load_module_py(module_id, path)
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/alembic/compat.py", 
line 58, in load_module_py
mod = imp.load_source(module_id, path, fp)
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py",
 line 125, in 
run_migrations_online()
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py",
 line 109, in run_migrations_online
options=build_options())
  File "", line 7, in run_migrations
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/alembic/environment.py",
 line 689, in run_migrations
self.get_context().run_migrations(**kw)
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/alembic/migration.py",
 line 263, in run_migrations
change(**kw)
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/versions/5589aa32bf80_l3_dvr_scheduler.py",
 line 54, in upgrade
sa.PrimaryKeyConstraint('router_id')
  File "", line 7, in create_table
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/alembic/operations.py",
 line 713, in create_table
self._table(name, *columns, **kw)
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/alembic/ddl/impl.py",
 line 149, in create_table
self._exec(schema.CreateTable(table))
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/alembic/ddl/impl.py",
 line 76, in _exec
conn.execute(construct, *multiparams, **params)
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 729, in execute
return meth(self, multiparams, params)
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py",
 line 69, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 783, in _execute_ddl
compiled
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 958, in _execute_context
context)
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1160, in _handle_dbapi_exception
exc_info
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py",
 line 199, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 951, in _execute_context
context)
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py",
 line 436, in do_execute
cursor.execute(statement, parameters)
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/MySQLdb/cursors.py",
 line 205, in execute
self.errorhandler(self, exc, value)
  File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/MySQLdb/connections.py",
 line 36, in defaulterrorhandler
raise errorclass, errorvalue
sqlalchemy.exc.OperationalError: (OperationalError) (1050, "Table 
'csnat_l3_agent_bindings' already exists") '\nCREATE TABLE 
csnat_l3_agent_bindings (\n\trouter_id VARCHAR(36) NOT NULL, \n\tl3_agent_id 
VARCHAR(36) NOT NULL, \n\thost_id VARCHAR(255), \n\tcsnat_gw_po
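
A guard against the pre-existing table would let the upgrade proceed; a
minimal sketch of a workaround (my assumption, not the actual neutron
fix - the columns are copied from the CREATE TABLE in the error, which
is truncated above):

    import sqlalchemy as sa
    from alembic import op
    from sqlalchemy.engine import reflection

    def upgrade():
        insp = reflection.Inspector.from_engine(op.get_bind())
        if 'csnat_l3_agent_bindings' not in insp.get_table_names():
            op.create_table(
                'csnat_l3_agent_bindings',
                sa.Column('router_id', sa.String(36), nullable=False),
                sa.Column('l3_agent_id', sa.String(36), nullable=False),
                sa.Column('host_id', sa.String(255)),
                # ... plus the column(s) cut off in the traceback above
                sa.PrimaryKeyConstraint('router_id'),
            )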

[Yahoo-eng-team] [Bug 1361924] Re: python-subunit 0.0.20 failing to install causing gate failure

2014-08-26 Thread Robert Collins
** Also affects: subunit
   Importance: Undecided
   Status: New

** Changed in: subunit
   Status: New => Triaged

** Changed in: subunit
   Importance: Undecided => Critical

** Changed in: subunit
 Assignee: (unassigned) => Robert Collins (lifeless)

** Changed in: subunit
Milestone: None => next

** Changed in: subunit
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1361924

Title:
  python-subunit 0.0.20 failing to install causing gate failure

Status in OpenStack Identity (Keystone):
  Invalid
Status in SubUnit:
  Fix Released
Status in Openstack Database (Trove):
  Triaged

Bug description:
  Several keystone jobs have failed recently, in py26:

  http://logs.openstack.org/78/111578/8/check/gate-keystone-
  python26/c8ae14c/console.html#_2014-08-27_01_03_03_950

  Looks like the new python-subunit 0.0.20 fails to install.

  This also failed for me locally:

  $ .tox/py27/bin/pip install -U "python-subunit>=0.0.20"
  Downloading/unpacking python-subunit>=0.0.20
  ...
  copying and adjusting filters/subunit-tags -> build/scripts-2.7

  error: file '/opt/stack/keystone/.tox/py27/build/python-
  subunit/filters/subunit2cvs' does not exist

  
  So I think it's missing a file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1361924/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261644] [NEW] baremetal deploys can leak iscsi sessions

2013-12-17 Thread Robert Collins
Public bug reported:

There is a hole in the error handling somewhere - failed deploys /
daemon restarts can leak sessions:

iscsiadm -m node
10.10.16.173:3260,1 iqn-b37386b2-fa29-4836-9ee7-f638dfca1ac9
10.10.16.177:3260,1 iqn-7b9e5f09-7134-4e7f-92b8-31347456dd9f
10.10.16.178:3260,1 iqn-5aa23554-913b-448d-be97-caae79b75a1b
10.10.16.181:3260,1 iqn-a79e34e8-bca6-46e7-8f3c-ae0e6306a13e
10.10.16.175:3260,1 iqn-627b7f63-8018-46c0-92fc-55b5abf6a1ae
10.10.16.171:3260,1 iqn-ec3364d0-231a-4ed3-a611-de85223effc4
10.10.16.179:3260,1 iqn-8abac231-d77d-47b9-ab37-b80da35d4410
10.10.16.176:3260,1 iqn-87a5e9a0-6b0f-4c18-a82d-5373bd8bfad3
10.10.16.172:3260,1 iqn-300cf804-aa47-4322-8ec0-a49c3ca121a3
10.10.16.174:3260,1 iqn-2ad0f967-c952-4d56-b386-e05f4376fdd2
10.10.16.171:3260,1 iqn-c4f1c80f-1a22-42b3-b984-0dee772dd44d
10.10.16.180:3260,1 iqn-4811f3f7-0aac-4fd5-a887-4763862efc88

and 
sdb  8:16   0   1.8T  0 disk 
├─sdb1   8:17   0  1000G  0 part 
├─sdb2   8:18   0   7.9M  0 part 
└─sdb3   8:19   0   500G  0 part 
sdc  8:32   0   1.8T  0 disk 
├─sdc1   8:33   0  1000G  0 part 
├─sdc2   8:34   0   7.9M  0 part 
└─sdc3   8:35   0   500G  0 part 

were leaked on our undercloud node.

Fixing should involve straightforward auditing of all codepaths to
ensure appropriate cleanup.
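
In the meantime leaked sessions can be cleaned up by hand; for example,
using the first entry from the list above:

  iscsiadm -m node -T iqn-b37386b2-fa29-4836-9ee7-f638dfca1ac9 \
    -p 10.10.16.173:3260 --logout
  iscsiadm -m node -T iqn-b37386b2-fa29-4836-9ee7-f638dfca1ac9 \
    -p 10.10.16.173:3260 -o delete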

** Affects: ironic
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: High
 Status: Triaged


** Tags: baremetal ironic

** Also affects: ironic
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261644

Title:
  baremetal deploys can leak iscsi sessions

Status in Ironic (Bare Metal Provisioning):
  New
Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  There is a hole in the error handling somewhere - failed deploys /
  daemon restarts can leak sessions:

  iscsiadm -m node
  10.10.16.173:3260,1 iqn-b37386b2-fa29-4836-9ee7-f638dfca1ac9
  10.10.16.177:3260,1 iqn-7b9e5f09-7134-4e7f-92b8-31347456dd9f
  10.10.16.178:3260,1 iqn-5aa23554-913b-448d-be97-caae79b75a1b
  10.10.16.181:3260,1 iqn-a79e34e8-bca6-46e7-8f3c-ae0e6306a13e
  10.10.16.175:3260,1 iqn-627b7f63-8018-46c0-92fc-55b5abf6a1ae
  10.10.16.171:3260,1 iqn-ec3364d0-231a-4ed3-a611-de85223effc4
  10.10.16.179:3260,1 iqn-8abac231-d77d-47b9-ab37-b80da35d4410
  10.10.16.176:3260,1 iqn-87a5e9a0-6b0f-4c18-a82d-5373bd8bfad3
  10.10.16.172:3260,1 iqn-300cf804-aa47-4322-8ec0-a49c3ca121a3
  10.10.16.174:3260,1 iqn-2ad0f967-c952-4d56-b386-e05f4376fdd2
  10.10.16.171:3260,1 iqn-c4f1c80f-1a22-42b3-b984-0dee772dd44d
  10.10.16.180:3260,1 iqn-4811f3f7-0aac-4fd5-a887-4763862efc88

  and 
  sdb  8:16   0   1.8T  0 disk 
  ├─sdb1   8:17   0  1000G  0 part 
  ├─sdb2   8:18   0   7.9M  0 part 
  └─sdb3   8:19   0   500G  0 part 
  sdc  8:32   0   1.8T  0 disk 
  ├─sdc1   8:33   0  1000G  0 part 
  ├─sdc2   8:34   0   7.9M  0 part 
  └─sdc3   8:35   0   500G  0 part 

  were leaked on our undercloud node.

  Fixing should involve straightforward auditing of all codepaths to
  ensure appropriate cleanup.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1261644/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262075] [NEW] tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML.test_rescue_unrescue_instance[gate, smoke] failed on postgresql only

2013-12-17 Thread Robert Collins
Public bug reported:

http://logs.openstack.org/20/62520/1/gate/gate-tempest-dsvm-postgres-
full/f3a9033/

2013-12-18 05:47:02,936 Request: GET 
http://127.0.0.1:8774/v2/0c07b2e10c454d3fa80773627d2dea67/servers/b7e78cb2-a215-4909-8fee-ed0f9cdd11cd
2013-12-18 05:47:02,936 Request Headers: {'Content-Type': 'application/xml', 
'Accept': 'application/xml', 'X-Auth-Token': ''}
2013-12-18 05:47:03,077 Response Status: 200
2013-12-18 05:47:03,077 Nova request id: 
req-1456f54f-7299-49fc-b3f5-66455360eb67
2013-12-18 05:47:03,077 Response Headers: {'content-length': '2167', 
'content-location': 
u'http://127.0.0.1:8774/v2/0c07b2e10c454d3fa80773627d2dea67/servers/b7e78cb2-a215-4909-8fee-ed0f9cdd11cd',
 'date': 'Wed, 18 Dec 2013 05:47:03 GMT', 'content-type': 'application/xml', 
'connection': 'close'}
2013-12-18 05:47:03,078 Response Body: 
http://docs.openstack.org/compute/ext/disk_config/api/v1.1"; 
xmlns:os-extended-volumes="http://docs.openstack.org/compute/ext/extended_volumes/api/v1.1";
 xmlns:OS-EXT-IPS="http://docs.openstack.org/compute/ext/extended_ips/api/v1.1"; 
xmlns:atom="http://www.w3.org/2005/Atom"; 
xmlns:OS-EXT-IPS-MAC="http://docs.openstack.org/compute/ext/extended_ips_mac/api/v1.1";
 xmlns:OS-SRV-USG="http://docs.openstack.org/compute/ext/server_usage/api/v1.1"; 
xmlns:OS-EXT-STS="http://docs.openstack.org/compute/ext/extended_status/api/v1.1";
 
xmlns:OS-EXT-AZ="http://docs.openstack.org/compute/ext/extended_availability_zone/api/v2";
 xmlns="http://docs.openstack.org/compute/api/v1.1"; status="SHUTOFF" 
updated="2013-12-18T05:43:58Z" 
hostId="eeb1a42b0840fab07838a1e353499b1a9c944d2197844cf6814eddad" 
name="ServerRescueTestXML-instance-tempest-189395512" 
created="2013-12-18T05:43:13Z" userId="a2733398afa247febff6e65b90712351" 
tenantId="0c07b2e10c454d3fa80773627d2dea67" accessIPv4="" accessIPv6="" id
 ="b7e78cb2-a215-4909-8fee-ed0f9cdd11cd" key_name="None" config_drive="" 
OS-SRV-USG:terminated_at="None" OS-SRV-USG:launched_at="2013-12-18 
05:43:47.580612" OS-EXT-STS:vm_state="stopped" OS-EXT-STS:task_state="None" 
OS-EXT-STS:power_state="4" OS-EXT-AZ:availability_zone="nova" 
OS-DCF:diskConfig="MANUAL">http://127.0.0.1:8774/0c07b2e10c454d3fa80773627d2dea67/images/31de6d39-e307-4c81-9959-413efc2e5fa7";
 rel="bookmark"/>http://127.0.0.1:8774/0c07b2e10c454d3fa80773627d2dea67/flavors/42"; 
rel="bookmark"/>http://127.0.0.1:8774/v2/0c07b2e10c454d3fa80773627d2dea67/servers/b7e78cb2-a215-4909-8fee-ed0f9cdd11cd";
 rel="self"/>http://127.0.0.1:8774/0c07b2e10c454d3fa80773627d2dea67/serve
 rs/b7e78cb2-a2
2013-12-18 05:47:03,078 Large body (2167) md5 summary: 
d0d26dbee429d4a58d9073037362e4fa
2013-12-18 05:47:04,080 Request: GET 
http://127.0.0.1:8774/v2/0c07b2e10c454d3fa80773627d2dea67/servers/b7e78cb2-a215-4909-8fee-ed0f9cdd11cd
2013-12-18 05:47:04,080 Request Headers: {'Content-Type': 'application/xml', 
'Accept': 'application/xml', 'X-Auth-Token': ''}
2013-12-18 05:47:04,158 Response Status: 200
2013-12-18 05:47:04,158 Nova request id: 
req-0a3509d6-9c61-439f-9be5-056de21d60ef
2013-12-18 05:47:04,158 Response Headers: {'content-length': '2167', 
'content-location': 
u'http://127.0.0.1:8774/v2/0c07b2e10c454d3fa80773627d2dea67/servers/b7e78cb2-a215-4909-8fee-ed0f9cdd11cd',
 'date': 'Wed, 18 Dec 2013 05:47:04 GMT', 'content-type': 'application/xml', 
'connection': 'close'}
2013-12-18 05:47:04,159 Response Body: 
 [Second copy of the same XML server detail response; markup mangled in
 the archive, content identical to the body above.]

[Yahoo-eng-team] [Bug 1263294] [NEW] ephemeral0 of /dev/sda1 triggers 'did not find entry for sda1 in /sys/block'

2013-12-21 Thread Robert Collins
Public bug reported:

This is due to line 227 of ./cloudinit/config/cc_mounts.py::

    short_name = os.path.basename(device)
    sys_path = "/sys/block/%s" % short_name

    if not os.path.exists(sys_path):
        LOG.debug("did not find entry for %s in /sys/block", short_name)
        return None

The sys path for /dev/sda1 is /sys/block/sda/sda1.
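
A partition-aware lookup needs to resolve the partition into its parent
disk's sysfs tree. A minimal sketch (illustrative only, not the actual
cloud-init fix)::

    import os

    def sys_block_path(device):
        short_name = os.path.basename(device)   # e.g. 'sda1'
        sys_path = "/sys/block/%s" % short_name
        if os.path.exists(sys_path):
            # Whole disk, e.g. /dev/sda -> /sys/block/sda
            return sys_path
        # Partitions live under the parent disk; /sys/class/block/sda1
        # is a symlink into that tree, so resolve it instead.
        alt_path = "/sys/class/block/%s" % short_name
        if os.path.exists(alt_path):
            return os.path.realpath(alt_path)
        return None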

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1263294

Title:
  ephemeral0 of /dev/sda1 triggers 'did not find entry for sda1 in
  /sys/block'

Status in Init scripts for use on cloud images:
  New

Bug description:
  This is due to line 227 of ./cloudinit/config/cc_mounts.py::

  short_name = os.path.basename(device)
  sys_path = "/sys/block/%s" % short_name

  if not os.path.exists(sys_path):
      LOG.debug("did not find entry for %s in /sys/block", short_name)
      return None

  The sys path for /dev/sda1 is /sys/block/sda/sda1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1263294/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270646] Re: PMTUd / DF broken in GRE tunnel configuration

2014-01-19 Thread Robert Collins
pmtud is enabled:
cat /proc/sys/net/ipv4/ip_no_pmtu_disc 
0


** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1270646

Title:
  PMTUd / DF broken in GRE tunnel configuration

Status in OpenStack Neutron (virtual network service):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  PMTUd / DF is broken in our GRE tunnel configuration - slow network
  performance with ovs 1.10.2; pulling the MTU on the VM down to 1440
  got us 10MB/s from LA to the UK.

  That's with both the 1.10.2 dkms package from Ubuntu and the 3.11
  kernel datapath.
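
  A common operator workaround in the meantime is to push a smaller MTU
  to guests via the DHCP agent (a sketch; 1400 is illustrative and needs
  to account for the GRE overhead on the path)::

      # /etc/neutron/dhcp_agent.ini
      [DEFAULT]
      dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

      # /etc/neutron/dnsmasq-neutron.conf
      # DHCP option 26 = interface MTU
      dhcp-option-force=26,1400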

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1270646/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254555] Re: tenant does not see network that is routable from tenant-visible network until neutron-server is restarted

2014-01-21 Thread Robert Collins
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1254555

Title:
  tenant does not see network that is routable from tenant-visible
  network until neutron-server is restarted

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  In TripleO we have a setup script[1] that does this as an admin:

  neutron net-create default-net --shared
  neutron subnet-create --ip_version 4 --allocation-pool 
start=10.0.0.2,end=10.255.255.254 --gateway 10.0.0.1 10.0.0.0/8 
$ID_OF_default_net
  neutron router-create default-router
  neutron router-interface-add default-router $ID_OF_10.0.0.0/8_subnet
  neutron net-create ext-net --router:external=True
  neutron subnet-create ext-net $FLOATING_CIDR --disable-dhcp --allocation-pool 
start=$FLOATING_START,end=$FLOATING_END
  neutron router-gateway-set default-router ext-net

  I would then expect that all users will be able to see ext-net using
  'neutron net-list' and that they will be able to create floating IPs
  on ext-net.

  As of this commit:

  commit c655156b98a0a25568a3745e114a0bae41bc49d1
  Merge: 75ac6c1 c66212c
  Author: Jenkins 
  Date:   Sun Nov 24 10:02:04 2013 +

  Merge "MidoNet: Added support for the admin_state_up flag"

  I see that the ext-net network is not available after I do all of the
  above router/subnet creation. It does become available to tenants as
  soon as I restart neutron-server.

  [1] https://git.openstack.org/cgit/openstack/tripleo-
  incubator/tree/scripts/setup-neutron

  I can reproduce this at will using the TripleO devtest process on real
  hardware. I have not yet reproduced on VMs using the 'devtest'
  workflow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1254555/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271344] [NEW] neutron-dhcp-agent not getting updates after ~24h running

2014-01-21 Thread Robert Collins
Public bug reported:

Hi, last two days on ci-overcloud.tripleo.org, the neutron-dhcp-agent
has stopped updating DHCP entries - new VMs don't get IP addresses until
neutron-dhcp-agent is restarted.

Haven't seen anything obvious in the logs yet, happy to set specific log
levels or whatever to try and debug this.
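
For anyone picking this up, the agent-side knobs for more verbose logs
(illustrative of the config format of the day; not a fix)::

    # /etc/neutron/dhcp_agent.ini
    [DEFAULT]
    debug = True
    verbose = True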

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: tripleo
 Importance: Critical
 Status: Triaged

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Importance: Undecided => Critical

** Changed in: tripleo
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1271344

Title:
  neutron-dhcp-agent not getting updates after ~24h running

Status in OpenStack Neutron (virtual network service):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  Hi, last two days on ci-overcloud.tripleo.org, the neutron-dhcp-agent
  has stopped updating DHCP entries - new VMs don't get IP addresses
  until neutron-dhcp-agent is restarted.

  Haven't seen anything obvious in the logs yet, happy to set specific
  log levels or whatever to try and debug this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1271344/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1272623] [NEW] nova refuses to start if there are baremetal instances with no associated node

2014-01-24 Thread Robert Collins
Public bug reported:

This can happen if a deployment is interrupted at just the wrong time.

2014-01-25 06:53:38,781.781 14556 DEBUG nova.compute.manager 
[req-e1958f79-b0c0-4c80-b284-85bb56f1541d None None] [instance: 
e21e6bca-b528-4922-9f59-7a1a6534ec8d] Current state is 1, state in DB is 1. 
_init_instance 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py:720
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 346, 
in fire_timers
timer()
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 
56, in __call__
cb(*args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
194, in main
result = function(*args, **kwargs)
  File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/service.py",
 line 480, in run_service
service.start()
  File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/service.py", line 
172, in start
self.manager.init_host()
  File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 805, in init_host
self._init_instance(context, instance)
  File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 684, in _init_instance
self.driver.plug_vifs(instance, net_info)
  File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/driver.py",
 line 538, in plug_vifs
self._plug_vifs(instance, network_info)
  File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/driver.py",
 line 543, in _plug_vifs
node = _get_baremetal_node_by_instance_uuid(instance['uuid'])
  File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/driver.py",
 line 85, in _get_baremetal_node_by_instance_uuid
node = db.bm_node_get_by_instance_uuid(ctx, instance_uuid)
  File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/db/api.py",
 line 101, in bm_node_get_by_instance_uuid
instance_uuid)
  File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py",
 line 112, in wrapper
return f(*args, **kwargs)
  File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/db/sqlalchemy/api.py",
 line 152, in bm_node_get_by_instance_uuid
raise exception.InstanceNotFound(instance_id=instance_uuid)
InstanceNotFound: Instance 84c6090b-bf42-4c6a-b2ff-afb22b5ff156 could not be 
found.

If there is no allocated node, we can just skip that part of delete.
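
A guard along those lines might look like this (a sketch of the
suggested skip, not a merged fix; the LOG usage and per-VIF plumbing
are illustrative)::

    from nova import exception

    def _plug_vifs(self, instance, network_info):
        try:
            node = _get_baremetal_node_by_instance_uuid(instance['uuid'])
        except exception.InstanceNotFound:
            # Deployment was interrupted before a node was associated;
            # there is nothing to plug, so skip instead of aborting
            # service startup.
            LOG.warn(_("No baremetal node for instance %s, skipping VIF "
                       "plugging"), instance['uuid'])
            return
        for vif in network_info:
            self._plug_vif(node, vif)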

** Affects: nova
 Importance: High
 Status: Triaged


** Tags: baremetal

** Summary changed:

- nova refuses to delete baremetal instances if there is no associated node
+ nova refuses to start if there are baremetal instances with no associated node

** Description changed:

+ This can happen if a deployment is interrupted at just the wrong time.
+ 
  2014-01-25 06:53:38,781.781 14556 DEBUG nova.compute.manager 
[req-e1958f79-b0c0-4c80-b284-85bb56f1541d None None] [instance: 
e21e6bca-b528-4922-9f59-7a1a6534ec8d] Current state is 1, state in DB is 1. 
_init_instance 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py:720
  Traceback (most recent call last):
-   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 
346, in fire_timers
- timer()
-   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 
56, in __call__
- cb(*args, **kw)
-   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
194, in main
- result = function(*args, **kwargs)
-   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/service.py",
 line 480, in run_service
- service.start()
-   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/service.py", line 
172, in start
- self.manager.init_host()
-   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 805, in init_host
- self._init_instance(context, instance)
-   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 684, in _init_instance
- self.driver.plug_vifs(instance, net_info)
-   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/driver.py",
 line 538, in plug_vifs
- self._plug_vifs(instance, network_info)
-   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/driver.py",
 line 543, in _plug_vifs
- node = _get_baremetal_node_by_instance_uuid(instance['uuid'])
-   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/driver.py",
 line 85, in _get_baremetal_node_by_instance_uuid
- node = db.bm_node_get_by_instance_uuid(ctx, instance_uuid)
-   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/db/api.py",
 line 1

[Yahoo-eng-team] [Bug 1272623] Re: nova refuses to start if there are baremetal instances with no associated node

2014-01-24 Thread Robert Collins
We're running a monkeypatch to avoid this at the moment

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1272623

Title:
  nova refuses to start if there are baremetal instances with no
  associated node

Status in OpenStack Compute (Nova):
  Triaged
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  This can happen if a deployment is interrupted at just the wrong time.

  2014-01-25 06:53:38,781.781 14556 DEBUG nova.compute.manager 
[req-e1958f79-b0c0-4c80-b284-85bb56f1541d None None] [instance: 
e21e6bca-b528-4922-9f59-7a1a6534ec8d] Current state is 1, state in DB is 1. 
_init_instance 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py:720
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 
346, in fire_timers
  timer()
    File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 
56, in __call__
  cb(*args, **kw)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
194, in main
  result = function(*args, **kwargs)
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/service.py",
 line 480, in run_service
  service.start()
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/service.py", line 
172, in start
  self.manager.init_host()
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 805, in init_host
  self._init_instance(context, instance)
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 684, in _init_instance
  self.driver.plug_vifs(instance, net_info)
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/driver.py",
 line 538, in plug_vifs
  self._plug_vifs(instance, network_info)
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/driver.py",
 line 543, in _plug_vifs
  node = _get_baremetal_node_by_instance_uuid(instance['uuid'])
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/driver.py",
 line 85, in _get_baremetal_node_by_instance_uuid
  node = db.bm_node_get_by_instance_uuid(ctx, instance_uuid)
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/db/api.py",
 line 101, in bm_node_get_by_instance_uuid
  instance_uuid)
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py",
 line 112, in wrapper
  return f(*args, **kwargs)
    File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/baremetal/db/sqlalchemy/api.py",
 line 152, in bm_node_get_by_instance_uuid
  raise exception.InstanceNotFound(instance_id=instance_uuid)
  InstanceNotFound: Instance 84c6090b-bf42-4c6a-b2ff-afb22b5ff156 could not be 
found.

  If there is no allocated node, we can just skip that part of delete.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1272623/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1272600] Re: object of type 'NoneType' has no len() from neutronclient in get_instance_nwinfo

2014-01-24 Thread Robert Collins
(Tripleo undercloud is down until we address this)

** Summary changed:

- Deleting an instance that is in the REBUILD/rebuild_spawning state results in 
an undeletable instance
+ object of type 'NoneType' has no len() from neutronclient in 
get_instance_nwinfo

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1272600

Title:
  object of type 'NoneType' has no len() from neutronclient in
  get_instance_nwinfo

Status in OpenStack Compute (Nova):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  To reproduce:

  1. boot an instance
  2. rebuild instance with a new image id
  3. immediately restart nova-compute

  -- instance is now stuck in REBUILD/rebuild_spawning state.

  4. nova delete instance

  -- instance now says it is ACTIVE, but it cannot be deleted.

  This traceback appears in logs:

  2014-01-25 01:35:22,096.096 27134 ERROR nova.openstack.common.rpc.amqp 
[req-9f5bbb1e-4cab-4f60-9f28-57366b519533 2de96ce5da994575a08447c93bcafd53 
4956c533154c476799c688eda7ed65ab] Exception during message handling
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
Traceback (most recent call last):
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/rpc/amqp.py",
 line 461, in _process_data
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
**args)
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/rpc/dispatcher.py",
 line 172, in dispatch
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
result = getattr(proxyobj, method)(ctxt, **kwargs)
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/exception.py", 
line 90, in wrapped
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
payload)
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 68, in __exit__
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/exception.py", 
line 73, in wrapped
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
return f(self, context, *args, **kw)
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 244, in decorated_function
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
pass
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 68, in __exit__
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 230, in decorated_function
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
return function(self, context, *args, **kwargs)
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 295, in decorated_function
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 272, in decorated_function
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 68, in __exit__
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 259, i

[Yahoo-eng-team] [Bug 1272600] Re: object of type 'NoneType' has no len() from neutronclient in get_instance_nwinfo

2014-01-25 Thread Robert Collins
Ah - we've deleted the havana aliases. :(

commit ee50436938eb8122185b08681803d07f16dcd3c1

** Changed in: tripleo
   Status: Triaged => Fix Released

** Changed in: nova
   Status: New => Invalid

** Changed in: nova
   Status: Invalid => Triaged

** Changed in: nova
   Importance: Undecided => High

** Summary changed:

- object of type 'NoneType' has no len() from neutronclient in 
get_instance_nwinfo
+ object of type 'NoneType' has no len() from neutronclient in 
get_instance_nwinfo if neutron credentials are missing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1272600

Title:
  object of type 'NoneType' has no len() from neutronclient in
  get_instance_nwinfo if neutron credentials are missing

Status in OpenStack Compute (Nova):
  Triaged
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  To reproduce:

  1. boot an instance
  2. rebuild instance with a new image id
  3. immediately restart nova-compute

  -- instance is now stuck in REBUILD/rebuild_spawning state.

  4. nova delete instance

  -- instance now says it is ACTIVE, but it cannot be deleted.

  This traceback appears in logs:

  2014-01-25 01:35:22,096.096 27134 ERROR nova.openstack.common.rpc.amqp 
[req-9f5bbb1e-4cab-4f60-9f28-57366b519533 2de96ce5da994575a08447c93bcafd53 
4956c533154c476799c688eda7ed65ab] Exception during message handling
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
Traceback (most recent call last):
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/rpc/amqp.py",
 line 461, in _process_data
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
**args)
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/rpc/dispatcher.py",
 line 172, in dispatch
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
result = getattr(proxyobj, method)(ctxt, **kwargs)
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/exception.py", 
line 90, in wrapped
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
payload)
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 68, in __exit__
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/exception.py", 
line 73, in wrapped
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
return f(self, context, *args, **kw)
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 244, in decorated_function
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
pass
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 68, in __exit__
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 230, in decorated_function
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
return function(self, context, *args, **kwargs)
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 295, in decorated_function
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 272, in decorated_function
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 68, in __exit__
  2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
  2014-01-25 01:35:22,096.096 27134 TRAC

[Yahoo-eng-team] [Bug 1274798] [NEW] nova-compute stops reporting in grenade

2014-01-30 Thread Robert Collins
Public bug reported:

Seen in 
http://logs.openstack.org/39/70239/1/check/check-grenade-dsvm/6f4b3bf/logs/new/screen-n-sch.txt.gz

jog says
15:38 < jog0> lifeless: it looks like it has happend before
15:39 < jog0> message:"has not been heard from in a  while" AND 
filename:"logs/screen-n-sch.txt"

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274798

Title:
  nova-compute stops reporting in grenade

Status in OpenStack Compute (Nova):
  New

Bug description:
  Seen in 
  
http://logs.openstack.org/39/70239/1/check/check-grenade-dsvm/6f4b3bf/logs/new/screen-n-sch.txt.gz

  jog says
  15:38 < jog0> lifeless: it looks like it has happend before
  15:39 < jog0> message:"has not been heard from in a  while" AND 
filename:"logs/screen-n-sch.txt"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274798/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276268] [NEW] nova compute hang with file injection off, config drive off, neutron networking

2014-02-04 Thread Robert Collins
Public bug reported:

While trying to change file injection to default off
(https://review.openstack.org/#/c/70239/) we observed nova-compute hang
(the log stops hard about 10 minutes before the test run finishes).
We thought this was a bug in the patch, but we then reproduced it with
the config setting done purely in devstack, while trying to avoid the
kernel hang with neutron isolated networks + file injection (not sure of
the bug number).

a trace of the hung process threads:
http://paste.openstack.org/show/62463/

** Affects: nova
 Importance: High
 Status: Triaged


** Tags: compute network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276268

Title:
  nova compute hang with file injection off, config drive off, neutron
  networking

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  While trying to change file injection to default off
  (https://review.openstack.org/#/c/70239/) we observed nova-compute
  hang (the log stops hard about 10 minutes before the test run
  finishes). We thought this was a bug in the patch, but we then
  reproduced it with the config setting done purely in devstack, while
  trying to avoid the kernel hang with neutron isolated networks + file
  injection (not sure of the bug number).

  a trace of the hung process threads:
  http://paste.openstack.org/show/62463/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1276268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277368] [NEW] incorrect 'quota exceeded' error from neutron v2 api when subnet is out of addresses

2014-02-06 Thread Robert Collins
Public bug reported:

This code:
    except neutron_client_exc.NeutronClientException as e:
        LOG.exception(_('Neutron error creating port on network %s') %
                      network_id, instance=instance)
        # NOTE(mriedem): OverQuota in neutron is a 409
        if e.status_code == 409:
            raise exception.PortLimitExceeded()
        raise

in _create_port in the neutronv2/api.py source file is incorrect.

It claims that 409 from Neutron implies OverQuota but in fact it is more
general than that.

Because we don't include the exception text in the error, users cannot
debug the problem and have to ask sysadmins to look at logs.

Neutron actually signals this:
2014-02-07 06:30:04,447.447 11046 TRACE nova.network.neutronv2.api [instance: 
94c56776-2680-429d-b7d6-4e8846dfe832] raise ex
2014-02-07 06:30:04,447.447 11046 TRACE nova.network.neutronv2.api [instance: 
94c56776-2680-429d-b7d6-4e8846dfe832] IpAddressGenerationFailureClient: No more 
IP addresses available on network e85b44c7-1136-4217-954e-cdf0acdddfe1.

in its error.
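
A more precise handler would key off the client exception class rather
than the bare status code, and keep the message (a sketch only, not the
actual fix; IpAddressGenerationFailureClient is the class shown in the
trace above, and NoMoreFixedIps is assumed to be the matching nova
exception)::

    except neutron_client_exc.IpAddressGenerationFailureClient:
        # The subnet is out of addresses - not a quota problem.
        LOG.exception(_('No more IP addresses available on network %s') %
                      network_id, instance=instance)
        raise exception.NoMoreFixedIps()
    except neutron_client_exc.NeutronClientException as e:
        LOG.exception(_('Neutron error creating port on network %s') %
                      network_id, instance=instance)
        # 409 still covers the genuine over-quota case.
        if e.status_code == 409:
            raise exception.PortLimitExceeded()
        raise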

** Affects: nova
 Importance: Medium
 Status: Triaged


** Tags: neutron

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1277368

Title:
  incorrect 'quota exceeded' error from neutron v2 api when subnet is
  out of addresses

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  This code:
      except neutron_client_exc.NeutronClientException as e:
          LOG.exception(_('Neutron error creating port on network %s') %
                        network_id, instance=instance)
          # NOTE(mriedem): OverQuota in neutron is a 409
          if e.status_code == 409:
              raise exception.PortLimitExceeded()
          raise

  in _create_port in the neutronv2/api.py source file is incorrect.

  It claims that 409 from Neutron implies OverQuota but in fact it is
  more general than that.

  Because we don't include the exception text in the error, users cannot
  debug the problem and have to ask sysadmins to look at logs.

  Neutron actually signals this:
  2014-02-07 06:30:04,447.447 11046 TRACE nova.network.neutronv2.api [instance: 
94c56776-2680-429d-b7d6-4e8846dfe832] raise ex
  2014-02-07 06:30:04,447.447 11046 TRACE nova.network.neutronv2.api [instance: 
94c56776-2680-429d-b7d6-4e8846dfe832] IpAddressGenerationFailureClient: No more 
IP addresses available on network e85b44c7-1136-4217-954e-cdf0acdddfe1.

  in its error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1277368/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278203] [NEW] live migration attempts block device and fs resize

2014-02-09 Thread Robert Collins
Public bug reported:

I noticed this when some qemu-nbd processes were hung, and we had file
injection off. I was like WAT.

Here is a backtrace (I added an exception in the nbd code to find out what was 
calling it):
Traceback (most recent call last):
  File "/lib/python2.7/site-packages/oslo/messaging/_executors/base.py", line 
36, in _dispatch
incoming.reply(self.callback(incoming.ctxt, incoming.message))
  File "/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
122, in __call__
return self._dispatch(endpoint, method, ctxt, args)
  File "/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
92, in _dispatch
result = getattr(endpoint, method)(ctxt, **new_args)
  File "/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
payload)
  File "/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 
68, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File "/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
return f(self, context, *args, **kw)
  File "/lib/python2.7/site-packages/nova/compute/manager.py", line 266, in 
decorated_function
e, sys.exc_info())
  File "/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 
68, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File "/lib/python2.7/site-packages/nova/compute/manager.py", line 253, in 
decorated_function
return function(self, context, *args, **kwargs)
  File "/lib/python2.7/site-packages/nova/compute/manager.py", line 4169, in 
pre_live_migration
migrate_data)
  File "/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4349, 
in pre_live_migration
disk_info)
  File "/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4446, 
in _create_images_and_backing
size=info['virt_disk_size'])
  File "/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 
180, in cache
*args, **kwargs)
  File "/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 
330, in create_image
copy_qcow2_image(base, self.path, size)
  File "/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 
249, in inner
return f(*args, **kwargs)
  File "/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 
296, in copy_qcow2_image
disk.extend(target, size, use_cow=True)
  File "/lib/python2.7/site-packages/nova/virt/disk/api.py", line 155, in extend
if not is_image_partitionless(image, use_cow):
  File "/lib/python2.7/site-packages/nova/virt/disk/api.py", line 205, in 
is_image_partitionless
fs.setup()
  File "/lib/python2.7/site-packages/nova/virt/disk/vfs/localfs.py", line 82, 
in setup
self.teardown()
  File "/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 
68, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File "/lib/python2.7/site-packages/nova/virt/disk/vfs/localfs.py", line 76, 
in setup
if not mount.do_mount():
  File "/lib/python2.7/site-packages/nova/virt/disk/mount/api.py", line 218, in 
do_mount
status = self.get_dev() and self.map_dev() and self.mnt_dev()
  File "/lib/python2.7/site-packages/nova/virt/disk/mount/nbd.py", line 127, in 
get_dev
return self._get_dev_retry_helper()
  File "/lib/python2.7/site-packages/nova/virt/disk/mount/api.py", line 118, in 
_get_dev_retry_helper
device = self._inner_get_dev()
  File "/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 
249, in inner
return f(*args, **kwargs)
  File "/lib/python2.7/site-packages/nova/virt/disk/mount/nbd.py", line 86, in 
_inner_get_dev
device = self._allocate_nbd()
  File "/lib/python2.7/site-packages/nova/virt/disk/mount/nbd.py", line 63, in 
_allocate_nbd
raise Exception("FOAD")
Exception: FOAD

** Affects: nova
 Importance: High
 Status: Triaged


** Tags: libvirt

** Description changed:

  AIUI we consider this a) fragile and b) a security risk, thus the bug
  report.
  
  I noticed this when some qemu-nbd processes were hung, and we had file
  injection off. I was like WAT.
  
  Here is a backtrace (I added an exception in the nbd code to find out
  what was calling it):
  
- 2014-02-09 23:19:04.417 4295 ERROR oslo.messaging._executors.base [-] 
Exception during message handling
- 2014-02-09 23:19:04.417 4295 TRACE oslo.messaging._executors.base Traceback 
(most recent call last):
- 2014-02-09 23:19:04.417 4295 TRACE oslo.messaging._executors.base   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/_executors/base.py",
 line 36, in _dispatch
- 2014-02-09 23:19:04.417 4295 TRACE oslo.messaging._executors.base 
incoming.reply(self.callback(incoming.ctxt, incoming.message))
- 2014-02-09 23:19:04.417 4295 TRACE oslo.messaging._executors.base   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 122, in __call__
- 2014-02-09 23:19:04.417 4295 TRACE oslo.messaging._e

[Yahoo-eng-team] [Bug 1235955] Re: testsuite needs upgrading

2014-02-09 Thread Robert Collins
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1235955

Title:
  testsuite needs upgrading

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The test suite, as far as I can tell, is currently a fiasco; it needs
  upgrading to match the change of name from quantum to neutron.

  For;

  archtester nova-2013.1. # nosetests -v
  Failure: ImportError (No module named quantumclient) ... ERROR

  OR

  archtester nova-2013.1.3 # nosetests -v

  yield

  Failure: ImportError (No module named quantumclient.common) ... ERROR
  Failure: ImportError (No module named quantumclient.common) ... ERROR
  Failure: ImportError (No module named quantumclient.common) ... ERROR
  nova.tests.api.ec2.test_faults.TestFaults.test_fault_exception ... ERROR
  nova.tests.api.ec2.test_faults.TestFaults.test_fault_exception_status_int ... 
ERROR
  nova.tests.api.ec2.test_middleware.ExecutorTestCase.test_instance_not_found 
... ERROR
  
nova.tests.api.ec2.test_middleware.ExecutorTestCase.test_instance_not_found_none
 ... ERROR
  nova.tests.api.ec2.test_middleware.ExecutorTestCase.test_snapshot_not_found 
... ERROR
  nova.tests.api.ec2.test_middleware.ExecutorTestCase.test_volume_not_found ... 
ERROR
  nova.tests.api.ec2.test_middleware.LockoutTestCase.test_lockout ... ^X^Z
  [1]+  Stopped nosetests -v

  On 'sed'ing quantumclient.common to neutron.common you still get an
  import error for (No module named quantum)

  and so forth. Seeing as it doesn't even cite the .py file that is
  attempting the import, it's a fiasco.

  quantum has been renamed to neutron and afaict the testsuite is still
  utilising quantum

  Please upgrade the entire test suite unless I'm missing something.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1235955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1174154] Re: Persistent storage on nodes is not supported

2014-02-12 Thread Robert Collins
We are now able to preserve ephemeral. Yay.


** Changed in: tripleo
   Status: Triaged => Confirmed

** Changed in: tripleo
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1174154

Title:
  Persistent storage on nodes is not supported

Status in Ironic (Bare Metal Provisioning):
  In Progress
Status in OpenStack Compute (Nova):
  Triaged
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  At the moment nova baremetal resets the first disk partition when
  deploying, and even when that is worked around (e.g. using a second
  disk) you still can't guarantee the new image lands on a node that had
  appropriate persistent data (e.g. a swift node).

  One long term plan is to teach the baremetal system about cinder
  volumes and use boot --with-volume to land on the same node. When this
  is done the partition table will be preserved on rebuilds / new
  instances.

  However in the short term we want to use the ephemeral feature and
  make it possible to preserve the partition during update -
  http://lists.openstack.org/pipermail/openstack-
  dev/2013-October/017707.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1174154/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279572] Re: DBError in nova-compute with baremetal driver after stats table change

2014-02-12 Thread Robert Collins
** Changed in: tripleo
   Status: Triaged => Fix Released

** Changed in: tripleo
 Assignee: (unassigned) => Ben Nemec (bnemec)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279572

Title:
  DBError in nova-compute with baremetal driver after stats table change

Status in OpenStack Compute (Nova):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  As of the following commit:
  
https://github.com/openstack/nova/commit/8a7b95dccdbe449d5235868781b30edebd34bacd
  our nova-compute service on the seed node is throwing DBErrors.  If I
  reset my Nova tree to the previous commit and downgrade the database
  to 232 then I am able to use nova successfully again.  With that
  commit all boots fail with a No hosts found message, presumably
  related to the following messages in the nova-compute log:

  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 
187, in switch
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup return self.greenlet.switch()
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/eventlet/greenthread.py", 
line 194, in main
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup result = function(*args, **kwargs)
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/openstack/common/service.py",
 line 480, in run_service
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup service.start()
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/service.py", line 193, 
in start
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup self.manager.pre_start_hook()
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/compute/manager.py", 
line 835, in pre_start_hook
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup 
self.update_available_resource(nova.context.get_admin_context())
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/compute/manager.py", 
line 5121, in update_available_resource
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup rt.update_available_resource(context)
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/openstack/common/lockutils.py",
 line 249, in inner
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup return f(*args, **kwargs)
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/compute/resource_tracker.py",
 line 353, in update_available_resource
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup self._sync_compute_node(context, 
resources)
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/compute/resource_tracker.py",
 line 384, in _sync_compute_node
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup self._update(context, resources)
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/compute/resource_tracker.py",
 line 456, in _update
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup context, self.compute_node, values)
  Feb 12 23:08:24 localhost nova-compute: 2014-02-12 23:08:23.980 4572 TRACE 
nova.openstack.common.threadgroup   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/conductor/api.py", line 
241, in compute_node_upd

[Yahoo-eng-team] [Bug 1280692] Re: Keystone wont start

2014-02-16 Thread Robert Collins
Ah! I see so our commit 30e803aa56d3da7bcebb8a4c62ad532b760b6378 (in
tripleo-image-elements) copied the default etc as at March, and we
haven't managed to track changes properly since then. ugh.

** Changed in: keystone
   Status: In Progress => Invalid

** Changed in: tripleo
   Importance: Critical => High

** Summary changed:

- Keystone wont start
+ old keystone paste configuration embedded in keystone.conf template

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1280692

Title:
  old keystone paste configuration embedded in keystone.conf template

Status in OpenStack Identity (Keystone):
  Invalid
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  Keystone has started crashing during devtest with the error
  CRITICAL keystone [-] ConfigFileNotFound: The Keystone configuration file 
keystone-paste.ini could not be found.

  The timing and code touched by this commit
  https://review.openstack.org/#/c/73621 seems to suggest its relevant

  paste_deploy.config_file now defaults to keystone-paste.ini, in
  keystone we don't have a keystone-paste.ini as we have  paste configs
  in keystone.conf
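
  The option in question, for reference (a sketch of the new default;
  values illustrative)::

      [paste_deploy]
      # Where keystone looks for its paste pipeline definitions; with
      # the new default this file must exist alongside keystone.conf.
      config_file = keystone-paste.ini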

  A commit to verify reversing this would fix the problem
  
https://review.openstack.org/#/c/73838/1/elements/keystone/os-apply-config/etc/keystone/keystone.conf

  shows ci passing again
  https://jenkins02.openstack.org/job/check-tripleo-seed-precise/359/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1280692/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280692] Re: old keystone paste configuration embedded in keystone.conf template

2014-02-16 Thread Robert Collins
** Changed in: tripleo
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1280692

Title:
  old keystone paste configuration embedded in keystone.conf template

Status in OpenStack Identity (Keystone):
  Invalid
Status in tripleo - openstack on openstack:
  In Progress

Bug description:
  Keystone has started crashing during devtest with the error
  CRITICAL keystone [-] ConfigFileNotFound: The Keystone configuration file 
keystone-paste.ini could not be found.

  The timing and code touched by this commit
  https://review.openstack.org/#/c/73621 seems to suggest its relevant

  paste_deploy.config_file now defaults to keystone-paste.ini, in
  keystone we don't have a keystone-paste.ini as we have  paste configs
  in keystone.conf

  A commit to verify reversing this would fix the problem
  
https://review.openstack.org/#/c/73838/1/elements/keystone/os-apply-config/etc/keystone/keystone.conf

  shows ci passing again
  https://jenkins02.openstack.org/job/check-tripleo-seed-precise/359/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1280692/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280941] [NEW] metadata agent throwing AttributeError: 'HTTPClient' object has no attribute 'auth_tenant_id' with latest release

2014-02-16 Thread Robert Collins
Public bug reported:

So we need a new release - this is fixed in:
commit 02baef46968b816ac544b037297273ff6a4e8e1b

but until a new release is done, anyone running trunk Neutron will have
the metadata agent fail.

And neutron itself is missing a versioned dep on the fixed client (but
obviously that has to wait for the client release to be done)

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: python-neutronclient
 Importance: Undecided
 Status: New

** Affects: tripleo
 Importance: Critical
 Status: Triaged

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Description changed:

  So we need a new release - this is fixed in:
  commit 02baef46968b816ac544b037297273ff6a4e8e1b
  
  but until a new release is done, anyone running trunk Neutron will have
  the metadata agent fail.
+ 
+ And neutron itself is missing a versioned dep on the fixed client (but
+ obviously that has to wait for the client release to be done)

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1280941

Title:
  metadata agent throwing AttributeError: 'HTTPClient' object has no
  attribute 'auth_tenant_id' with latest release

Status in OpenStack Neutron (virtual network service):
  New
Status in Python client library for Neutron:
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  So we need a new release - this is fixed in:
  commit 02baef46968b816ac544b037297273ff6a4e8e1b

  but until a new release is done, anyone running trunk Neutron will
  have the metadata agent fail.

  And neutron itself is missing a versioned dep on the fixed client (but
  obviously that has to wait for the client release to be done)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1280941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281324] [NEW] vm_state ERROR vm undeletable

2014-02-17 Thread Robert Collins
Public bug reported:

We had a neutron failure in our cloud, which led to a bunch of VMs in
state ERROR. We've repaired neutron but now we can't delete:

ERROR: Cannot 'forceDelete' while instance is in vm_state error (HTTP 409) 
(Request-ID: 
  req-1c4a88c3-4ea1-45a8-b987-629a69b4af06)

or stop/start
nova stop 01199ed9-b3c3-4ee9-a482-bdfdc7347ce1
ERROR: Instance 01199ed9-b3c3-4ee9-a482-bdfdc7347ce1 in task_state deleting. 
Cannot stop while the instance is in this state. (HTTP 400) (Request-ID: 
req-18d58b8d-b360-4b37-b671-34624a6dade4)
(ci-overcloud)robertc@lifelesshp:~/work$ nova start 
01199ed9-b3c3-4ee9-a482-bdfdc7347ce1
ERROR: Instance 01199ed9-b3c3-4ee9-a482-bdfdc7347ce1 in vm_state error. Cannot 
start while the instance is in this state. (HTTP 400) (Request-ID: 
req-b46c0ee6-8ed8-41c3-b400-72f76429209a)

normal 'delete' doesn't error, but doesn't delete the VM either.
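
A possible workaround (needs admin credentials, and it only resets the
recorded state - it does not fix whatever broke)::

    nova reset-state --active 01199ed9-b3c3-4ee9-a482-bdfdc7347ce1
    nova delete 01199ed9-b3c3-4ee9-a482-bdfdc7347ce1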

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1281324

Title:
  vm_state ERROR vm undeletable

Status in OpenStack Compute (Nova):
  New

Bug description:
  We had a neutron failure in our cloud, which led to a bunch of VMs
  in state ERROR. We've repaired neutron but now we can't delete:

  ERROR: Cannot 'forceDelete' while instance is in vm_state error (HTTP 409) 
(Request-ID: 
req-1c4a88c3-4ea1-45a8-b987-629a69b4af06)

  or stop/start
  nova stop 01199ed9-b3c3-4ee9-a482-bdfdc7347ce1
  ERROR: Instance 01199ed9-b3c3-4ee9-a482-bdfdc7347ce1 in task_state deleting. 
Cannot stop while the instance is in this state. (HTTP 400) (Request-ID: 
req-18d58b8d-b360-4b37-b671-34624a6dade4)
  (ci-overcloud)robertc@lifelesshp:~/work$ nova start 
01199ed9-b3c3-4ee9-a482-bdfdc7347ce1
  ERROR: Instance 01199ed9-b3c3-4ee9-a482-bdfdc7347ce1 in vm_state error. 
Cannot start while the instance is in this state. (HTTP 400) (Request-ID: 
req-b46c0ee6-8ed8-41c3-b400-72f76429209a)

  normal 'delete' doesn't error, but doesn't delete the VM either.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1281324/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1282842] [NEW] default nova+neutron setup cannot handle spawning 20 images concurrently

2014-02-20 Thread Robert Collins
Public bug reported:

This breaks any @scale use of a cloud.

Symptoms include 500 errors from 'nova list' (which causes a heat stack
failure) and errors like 'unknown auth strategy' from neutronclient when
it's being called from the nova compute.manager.

Sorry for the many-project-tasks here - its not clear where the bug
lies, nor whether its bad defaults, or code handling errors, or perf
tuning etc.

** Affects: keystone
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: tripleo
 Importance: High
 Status: Triaged

** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
   Importance: Undecided => High

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

** Description changed:

  This breaks any @scale use of a cloud.
+ 
+ Symptoms include 500 errors from 'nova list' (which causes a heat stack
+ failure) and errors like 'unknown auth strategy' from neutronclient when
+ its being called from the nova compute.manager.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1282842

Title:
  default nova+neutron setup cannot handle spawning 20 images
  concurrently

Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  This breaks any @scale use of a cloud.

  Symptoms include 500 errors from 'nova list' (which causes a heat
  stack failure) and errors like 'unknown auth strategy' from
  neutronclient when it's being called from the nova compute.manager.

  Sorry for the many-project-tasks here - its not clear where the bug
  lies, nor whether its bad defaults, or code handling errors, or perf
  tuning etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1282842/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1282842] Re: default nova+neutron setup cannot handle spawning 20 images concurrently

2014-02-20 Thread Robert Collins
e.g.
| fault| {u'message': u'Connection to neutron failed: Maximum attempts reached', u'code': 500, u'created': u'2014-02-21T01:13:58Z'} |


** Also affects: keystone
   Importance: Undecided
   Status: New

** Description changed:

  This breaks any @scale use of a cloud.
  
  Symptoms include 500 errors from 'nova list' (which causes a heat stack
  failure) and errors like 'unknown auth strategy' from neutronclient when
  it's being called from the nova compute.manager.
+ 
+ Sorry for the many-project-tasks here - it's not clear where the bug
+ lies, nor whether it's bad defaults, code handling errors, or perf
+ tuning, etc.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1282842

Title:
  default nova+neutron setup cannot handle spawning 20 images
  concurrently

Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  This breaks any @scale use of a cloud.

  Symptoms include 500 errors from 'nova list' (which causes a heat
  stack failure) and errors like 'unknown auth strategy' from
  neutronclient when it's being called from the nova compute.manager.

  Sorry for the many-project-tasks here - it's not clear where the bug
  lies, nor whether it's bad defaults, code handling errors, or perf
  tuning, etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1282842/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284424] [NEW] nova quota statistics can be incorrect

2014-02-24 Thread Robert Collins
Public bug reported:

On the ci-overcloud we had a couple of network interruptions to the control plane. Subsequent to this, nova is reporting:
 nova boot --image user --flavor m1.small --key-name default live-migration-test2 --nic net-id=f69ac547-db64-4e69-ae70-e5233634aff0
ERROR: Quota exceeded for instances: Requested 1, but already used 100 of 100 instances (HTTP 413) (Request-ID: req-0c96b3bc-dc37-4685-8227-02398b3bea6b)
(ci-overcloud-nodepool)robertc@lifelesshp:~/work$ nova list | wc -l
42

That is - nova thinks there are 100 instances, but there are only 42.
We haven't done any DB surgery or anything that could cause this, so
methinks we've uncovered a bug.
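
A minimal sketch of how to confirm the drift, assuming direct read access
to the nova database (table and column names per this era's nova schema:
quota_usages.in_use, and soft-deleted rows in instances)::

    from sqlalchemy import create_engine

    engine = create_engine('mysql://nova:secret@db.example.com/nova')  # placeholder DSN
    project = 'PROJECT_ID'  # placeholder tenant id

    recorded = engine.execute(
        "SELECT in_use FROM quota_usages "
        "WHERE project_id = %s AND resource = 'instances'", project).scalar()
    actual = engine.execute(
        "SELECT COUNT(*) FROM instances "
        "WHERE project_id = %s AND deleted = 0", project).scalar()
    # after the interruptions these disagree: 100 recorded vs 42 actual
    print(recorded, actual)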

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: tripleo
 Importance: High
 Status: Triaged

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1284424

Title:
  nova quota statistics can be incorrect

Status in OpenStack Compute (Nova):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  On the ci-overcloud we had a couple of network interruptions to the control plane. Subsequent to this, nova is reporting:
    nova boot --image user --flavor m1.small --key-name default live-migration-test2 --nic net-id=f69ac547-db64-4e69-ae70-e5233634aff0
  ERROR: Quota exceeded for instances: Requested 1, but already used 100 of 100 instances (HTTP 413) (Request-ID: req-0c96b3bc-dc37-4685-8227-02398b3bea6b)
  (ci-overcloud-nodepool)robertc@lifelesshp:~/work$ nova list | wc -l
  42

  That is - nova thinks there are 100 instances, but there are only
  42. We haven't done any DB surgery or anything that could cause this,
  so methinks we've uncovered a bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1284424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284431] [NEW] nova-compute doesn't reconnect properly after control plane outage

2014-02-24 Thread Robert Collins
Public bug reported:

We had to reboot the control node for ci-overcloud; after that, and
after ensuring it was online again properly, we saw:

+--+-+--+-+---+--
| Binary   | Host| Zone | Status  | State | Updated_at
+--+-+--+-+---+--
| nova-conductor   | ci-overcloud-notcompute0-gxezgcvv4v2q   | internal | enabled | down  | 2014-02-24T19:42:26.0
| nova-cert| ci-overcloud-notcompute0-gxezgcvv4v2q   | internal | enabled | down  | 2014-02-24T19:42:18.0
| nova-scheduler   | ci-overcloud-notcompute0-gxezgcvv4v2q   | internal | enabled | down  | 2014-02-24T19:42:26.0
| nova-consoleauth | ci-overcloud-notcompute0-gxezgcvv4v2q   | internal | enabled | down  | 2014-02-24T19:42:18.0
| nova-compute | ci-overcloud-novacompute4-5aywwwqlmtv3  | nova | enabled | down  | 2014-02-25T02:07:37.0
| nova-compute | ci-overcloud-novacompute7-mosbehy6ikhz  | nova | enabled | down  | 2014-02-25T02:07:44.0
| nova-compute | ci-overcloud-novacompute0-vidddfuaauhw  | nova | enabled | down  | 2014-02-25T02:07:36.0
| nova-compute | ci-overcloud-novacompute6-6fnuizd4n4gv  | nova | enabled | down  | 2014-02-25T02:07:36.0
| nova-compute | ci-overcloud-novacompute1-4q2dbhdklrkq  | nova | enabled | down  | 2014-02-25T02:07:43.0
| nova-compute | ci-overcloud-novacompute5-y27zvc4o5fps  | nova | enabled | down  | 2014-02-25T02:07:36.0
| nova-compute | ci-overcloud-novacompute3-sxibwe5v5gpw  | nova | enabled | down  | 2014-02-25T02:08:40.0
| nova-compute | ci-overcloud-novacompute8-4qu2kxq4e6pb  | nova | enabled | down  | 2014-02-25T02:08:41.0
| nova-compute | ci-overcloud-novacompute2-tvsutghnaofq  | nova | enabled | down  | 2014-02-25T02:07:36.0
| nova-compute | ci-overcloud-novacompute9-qt7sqeqcexjh  | nova | enabled | down  | 2014-02-25T02:08:45.0
| nova-scheduler   | ci-overcloud-notcompute0-gxezgcvv4v2q.novalocal | internal | enabled | up| 2014-02-25T03:24:53.0
| nova-conductor   | ci-overcloud-notcompute0-gxezgcvv4v2q.novalocal | internal | enabled | up| 2014-02-25T03:24:59.0
| nova-consoleauth | ci-overcloud-notcompute0-gxezgcvv4v2q.novalocal | internal | enabled | up| 2014-02-25T03:24:53.0
| nova-cert| ci-overcloud-notcompute0-gxezgcvv4v2q.novalocal | internal | enabled | up| 2014-02-25T03:24:51.0
+--+-+--+-+---+--

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: tripleo
 Importance: High
 Status: Triaged

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1284431

Title:
  nova-compute doesn't reconnect properly after control plane outage

Status in OpenStack Compute (Nova):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  We had to reboot the control node for ci-overcloud; after that, and
  after ensuring it was online again properly, we saw:

  +--+-+--+-+---+--
  | Binary   | Host| Zone | Status  | State | Updated_at
  +--+-+--+-+---+--
  | nova-conductor   | ci-overcloud-notcompute0-gxezgcvv4v2q   | internal | enabled | down  | 2014-02-24T19:42:26.0
  | nova-cert| ci-overcloud-notcompute0-gxezgcvv4v2q   | internal | enabled | down  | 2014-02-24T19:42:18.0
  | nova-scheduler   | ci-overcloud-notcompute0-gxezgcvv4v2q   | internal | enabled | down  | 2014-02-24T19:42:26.0
  | nova-consoleauth | ci-overcloud-notcompute0-gxezgcvv4v2q   | internal | enabled | down  | 2014-02-24T19:42:18.0
  | nova-compute | ci-overcloud-novacompute4-5aywwwqlmtv3  | nova | enabled | down  | 2014-02-25T02:07:37.0
  | nova-compute | ci-overcloud-novacompute7-mosbehy6ikhz  | nova | enabled | down  | 2014-02-25T02:07:44.0
  | nova-compute | ci-overcloud-novacompute0-vidddfuaauhw  | nova | enabled | down  | 2014-02-25T02:07:36.0
  | nova-compute | ci-overcloud-novacompute6-6fnuizd4n4gv  | nova | enabled | do

[Yahoo-eng-team] [Bug 1287542] [NEW] Error importing module nova.openstack.common.sslutils: duplicate option: ca_file

2014-03-03 Thread Robert Collins
Public bug reported:

Error importing module nova.openstack.common.sslutils: duplicate option:
ca_file

This is seen in the nova gate - for unrelated patches - it might be a
bad slave I guess, or it might be happening to all subsequent patches,
or it might be a WTF.

http://logstash.openstack.org/#eyJzZWFyY2giOiJcIkVycm9yIGltcG9ydGluZyBtb2R1bGUgbm92YS5vcGVuc3RhY2suY29tbW9uLnNzbHV0aWxzOiBkdXBsaWNhdGUgb3B0aW9uOiBjYV9maWxlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjkwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTM5MTUyNTE4ODl9
suggest it has only happened once so far.

commit 5188052937219badaa692f67d9f98623c15d1de2
Merge: af626d0 88b7380
Author: Jenkins 
Date:   Tue Mar 4 02:47:02 2014 +

Merge "Sync latest config file generator from oslo-incubator"

Was the latest merge prior to this, but it may be coincidental.
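
For context, this error class comes from oslo.config's duplicate
registration guard; a minimal sketch of how to trigger it (assuming this
era's oslo.config import path)::

    from oslo.config import cfg

    CONF = cfg.CONF

    # first registration, e.g. from one copy of sslutils
    CONF.register_opts([cfg.StrOpt('ca_file', help='CA certificate file')])

    # registering the same name with a *different* definition raises
    # DuplicateOptError, whose str() is "duplicate option: ca_file";
    # re-registering an identical definition is a harmless no-op
    CONF.register_opts([cfg.StrOpt('ca_file', default='/etc/ssl/ca.pem',
                                   help='CA certificate file')])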

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1287542

Title:
  Error importing module nova.openstack.common.sslutils: duplicate
  option: ca_file

Status in OpenStack Compute (Nova):
  New

Bug description:
  Error importing module nova.openstack.common.sslutils: duplicate
  option: ca_file

  This is seen in the nova gate - for unrelated patches - it might be a
  bad slave I guess, or it might be happening to all subsequent patches,
  or it might be a WTF.

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcIkVycm9yIGltcG9ydGluZyBtb2R1bGUgbm92YS5vcGVuc3RhY2suY29tbW9uLnNzbHV0aWxzOiBkdXBsaWNhdGUgb3B0aW9uOiBjYV9maWxlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjkwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTM5MTUyNTE4ODl9
  suggest it has only happened once so far.

  commit 5188052937219badaa692f67d9f98623c15d1de2
  Merge: af626d0 88b7380
  Author: Jenkins 
  Date:   Tue Mar 4 02:47:02 2014 +

  Merge "Sync latest config file generator from oslo-incubator"

  Was the latest merge prior to this, but it may be coincidental.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1287542/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1389971] Re: Job for openstack-nova-compute.service failed.

2015-02-08 Thread Robert Collins
This looks like a cinder log with cinder errors to me. I've retargeted
it onto cinder.

BTW the error looks like something in cinder is registering the option
twice.

** Project changed: nova => cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1389971

Title:
  Job for openstack-nova-compute.service failed.

Status in Cinder:
  Incomplete

Bug description:
  On a RHEL 7 machine: Juno OpenStack installation

  #systemctl restart openstack-nova-compute.service
  Job for openstack-nova-compute.service failed. See 'systemctl status openstack-nova-compute.service' and 'journalctl -xn' for details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1389971/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391816] Re: [pci-passthrough] PCI Clear message should be reported when there are no VFs left for allocation

2015-02-09 Thread Robert Collins
** Changed in: nova
   Status: Invalid => Triaged

** Summary changed:

- [pci-passthrough] PCI Clear message should be reported when there are no VFs left for allocation
+ [pci-passthrough] nova-scheduler throws a 500 when PCI passthrough is used if the PCI passthrough scheduler has not been added

** Description changed:

  When launching an instance with a preconfigured port, when there are no
  VFs available for allocation, the error message is not clear.
  
  #neutron port-create tenant1-net1 --binding:vnic-type direct
  #nova boot --flavor m1.tiny --image  cirros --nic port-id=  vm100
  
  # nova show vm100
  
  (output truncated)
-   |
+   |
  | fault| {"message": "PCI device request ({'requests': [InstancePCIRequest(alias_name=None,count=1,is_new=False,request_id=66b02b9b-600b-4c46-9f66-38ceb6cc2742,spec=[{physical_network='physnet2'}])], 'code': 500}equests)s failed", "code": 500, "created": "2014-11-12T10:10:14Z"} |
- 
  
  Expected:
  A clear message in the fault entry when issuing 'nova show', or
  (better) when launching the instance.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1391816

Title:
  [pci-passthrough] nova-scheduler throws a 500 when PCI passthrough is
  used if the PCI passthrough scheduler has not been added

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  When launching an instance with a preconfigured port, when there are no
  VFs available for allocation, the error message is not clear.

  #neutron port-create tenant1-net1 --binding:vnic-type direct
  #nova boot --flavor m1.tiny --image  cirros --nic port-id=  vm100

  # nova show vm100

  (output truncated)

  | fault| {"message": "PCI device request ({'requests': [InstancePCIRequest(alias_name=None,count=1,is_new=False,request_id=66b02b9b-600b-4c46-9f66-38ceb6cc2742,spec=[{physical_network='physnet2'}])], 'code': 500}equests)s failed", "code": 500, "created": "2014-11-12T10:10:14Z"} |

  Expected:
  A clear message in the fault entry when issuing 'nova show', or
  (better) when launching the instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1391816/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1392058] Re: nova fails attempting to enumerate libvirt disk images

2015-02-09 Thread Robert Collins
*** This bug is a duplicate of bug 1322467 ***
https://bugs.launchpad.net/bugs/1322467

** This bug has been marked a duplicate of bug 1322467
   Resource could not be updated when deleting instances

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1392058

Title:
  nova fails attempting to enumerate libvirt disk images

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  Nova (Icehouse) appears to be enumerating all available libvirt
  volumes (at least those in the "default" storage pool) during the
  "Auditing locally available compute resources" process. If these
  images are not readable by Nova it results in the following error:

  2014-11-12 12:02:26.013 3009 ERROR nova.openstack.common.periodic_task [-] 
Error during ComputeManager.update_available_resource: Unexpected error while 
running command.
  Command: env LC_ALL=C LANG=C qemu-img info 
/var/lib/libvirt/images/windows-1.qcow2
  Exit code: 1
  Stdout: u''
  Stderr: u"qemu-img: Could not open '/var/lib/libvirt/images/windows-1.qcow2': 
Permission denied\n"

  It's not clear to me why Nova is even attempting to run qemu-img on
  these files. They aren't exposed to or usable by Nova in any useful
  fashion. The filesystem where these images are stored is not utilized
  by nova for storage. They seem completely unrelated to anything Nova
  cares about.

  These errors prevent Nova from starting up correctly.
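
  A minimal sketch of the defensive behaviour being asked for, assuming
  nova's oslo-incubator processutils wrapper (the helper name is
  illustrative, not nova's actual code)::

      from nova.openstack.common import processutils

      def qemu_img_info_or_none(path):
          # skip unreadable images instead of aborting the periodic
          # "Auditing locally available compute resources" task
          try:
              out, _err = processutils.execute(
                  'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info', path)
              return out
          except processutils.ProcessExecutionError:
              # e.g. "Permission denied" on volumes owned by other services
              return None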

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1392058/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394351] Re: deadlock when delete port

2015-02-10 Thread Robert Collins
** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394351

Title:
  deadlock when delete port

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  netdemoid=$(neutron net-list | awk '{if($4=="'demo-net'"){print $2;}}')
  subnetdemoid=$(neutron subnet-list | awk '{if($4=="'demo-subnet'"){print $2;}}')

  exnetid=$(neutron net-list | awk '{if($4=="'ext-net'"){print $2;}}')
  for i in `seq 1 10`; do
  #boot vm, and create floating ip
  nova boot --image cirros --flavor m1.tiny --nic net-id=$netdemoid cirrosdemo${i}
  cirrosdemoid[i]=$(nova list | awk '{if($4=="'cirrosdemo${i}'"){print $2;}}')
  output=$(neutron floatingip-create $exnetid)
  echo $output
  floatipid[i]=$(echo "$output" | awk '{if($2=="id"){print $4;}}')
  floatip[i]=$(echo "$output" | awk '{if($2=="floating_ip_address"){print $4;}}')
  done

  # Setup router
  neutron router-gateway-set $routerdemoid $exnetid
  neutron router-interface-add demo-router $subnetdemoid
  #wait for VM to be running
  sleep 30

  for i in `seq 1 10`; do
  cirrosfix=$(nova list | awk '{if($4=="'cirrosdemo${i}'"){print $12;}}')
  cirrosfixip=${cirrosfix#*=}
  output=$(neutron port-list | grep ${cirrosfixip})
  echo $output
  portid=$(echo "$output" | awk '{print $2;}')
  neutron floatingip-associate --fixed-ip-address $cirrosfixip ${floatipid[i]} $portid
  neutron floatingip-delete ${floatipid[i]}
  nova delete ${cirrosdemoid[i]}
  done

  
  With several tries, I have one instance in ERROR state:
  2014-11-19 19:41:02.670 8659 DEBUG neutron.context 
[req-3ff9aed1-e5fb-4388-b26d-e35bb7fc25f7 None] Arguments dropped when creating 
context: {u'project_name': None, u'tenant': None} __init__ 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/context.py:83
  2014-11-19 19:41:02.671 8659 DEBUG neutron.plugins.ml2.rpc 
[req-3ff9aed1-e5fb-4388-b26d-e35bb7fc25f7 None] Device 
498e7a54-22dd-4e5b-a8db-d6bffb8edd25 details requested by agent 
ovs-agent-overcloud-controller0-d5wwhbhhtlmp with host 
overcloud-controller0-d5wwhbhhtlmp get_device_details 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:90
  2014-11-19 19:41:02.707 8659 DEBUG neutron.openstack.common.lockutils 
[req-3ff9aed1-e5fb-4388-b26d-e35bb7fc25f7 None] Got semaphore "db-access" lock 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/openstack/common/lockutils.py:168
  2014-11-19 19:41:04.061 8658 ERROR oslo.messaging.rpc.dispatcher 
[req-4303cd41-c87c-44aa-b78a-549fb914ac9c ] Exception during message handling: 
(OperationalError) (1213, 'Deadlock found when trying to get lock; try 
restarting transaction') None None
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 134, in _dispatch_and_reply
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 177, in _dispatch
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 123, in _do_dispatch
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/db/agents_db.py",
 line 220, in report_state
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher 
self.plugin.create_or_update_agent(context, agent_state)
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/db/agents_db.py",
 line 180, in create_or_update_agent
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher return 
self._create_or_update_agent(context, agent)
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/db/agents_db.py",
 line 174, in _create_or_update_agent
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher 
greenthread.sleep(0)
  2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/sqlalchem

[Yahoo-eng-team] [Bug 1396854] Re: fail to create an instance with specific ip

2015-02-11 Thread Robert Collins
Fixed in https://review.openstack.org/#/c/149905/

** Changed in: nova
   Status: In Progress => Fix Released

** Changed in: nova
   Status: Fix Released => Fix Committed

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1396854

Title:
  fail to create an instance with specific ip

Status in OpenStack Compute (Nova):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  When I use the command below to create an instance with a specific IP,
  it fails.

  nova boot --image cirros-0.3.2-x86_64-uec --flavor m1.nano --nic net-
  id=5b7930ae-ff24-4dcf-a429-e039cb7502dd,v4-fixed-ip=10.0.0.5 test

  My env is the latest devstack on Fedora 20.



  Here is trace log from nova-compute.
  2014-11-27 11:15:09.565 ERROR nova.compute.manager [-] [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] Instance failed to spawn
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] Traceback (most recent call last):
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2247, in _build_resources
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] yield resources
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2117, in _build_and_run_instance
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] instance_type=instance_type)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2634, in spawn
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] admin_pass=admin_password)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3095, in _create_image
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] content=files, extra_md=extra_md, 
network_info=network_info)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/api/metadata/base.py", line 167, in __init__
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] 
ec2utils.get_ip_info_for_instance_from_nw_info(network_info)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/api/ec2/ec2utils.py", line 152, in 
get_ip_info_for_instance_from_nw_info
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] fixed_ips = nw_info.fixed_ips()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/network/model.py", line 450, in _sync_wrapper
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] self.wait()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/network/model.py", line 482, in wait
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] self[:] = self._gt.wait()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] return self._exit_event.wait()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/usr/lib/python2.7/site-packages/eventlet/event.py", line 125, in wait
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] current.throw(*self._exc)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] result = function(*args, **kwargs)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1647, in _allocate_network_async
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance

[Yahoo-eng-team] [Bug 1396854] Re: fail to create an instance with specific ip

2015-02-11 Thread Robert Collins
*** This bug is a duplicate of bug 1408529 ***
https://bugs.launchpad.net/bugs/1408529

** This bug has been marked a duplicate of bug 1408529
   nova boot vm with '--nic net-id=, v4-fixed-ip=xxx' failed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1396854

Title:
  fail to create an instance with specific ip

Status in OpenStack Compute (Nova):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  When I use the command below to create an instance with a specific IP,
  it fails.

  nova boot --image cirros-0.3.2-x86_64-uec --flavor m1.nano --nic net-
  id=5b7930ae-ff24-4dcf-a429-e039cb7502dd,v4-fixed-ip=10.0.0.5 test

  My env is the latest devstack on Fedora 20.



  Here is trace log from nova-compute.
  2014-11-27 11:15:09.565 ERROR nova.compute.manager [-] [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] Instance failed to spawn
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] Traceback (most recent call last):
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2247, in _build_resources
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] yield resources
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2117, in _build_and_run_instance
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] instance_type=instance_type)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2634, in spawn
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] admin_pass=admin_password)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3095, in _create_image
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] content=files, extra_md=extra_md, 
network_info=network_info)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/api/metadata/base.py", line 167, in __init__
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] 
ec2utils.get_ip_info_for_instance_from_nw_info(network_info)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/api/ec2/ec2utils.py", line 152, in 
get_ip_info_for_instance_from_nw_info
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] fixed_ips = nw_info.fixed_ips()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/network/model.py", line 450, in _sync_wrapper
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] self.wait()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/network/model.py", line 482, in wait
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] self[:] = self._gt.wait()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] return self._exit_event.wait()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/usr/lib/python2.7/site-packages/eventlet/event.py", line 125, in wait
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] current.throw(*self._exc)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] result = function(*args, **kwargs)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1647, in _allocate_network_async
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f

[Yahoo-eng-team] [Bug 1398588] Re: volume_attach action registers volume attachment even on failure

2015-02-15 Thread Robert Collins
Closing (it's two weeks later and Patrick hasn't reported reproduction).
Please re-open if it is in fact still an issue.

** Changed in: nova
   Status: Incomplete => Invalid

** Changed in: cinder
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398588

Title:
  volume_attach action registers volume attachment even on failure

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When attaching volumes to instances, if the volume attachment fails,
  it is still noted as successful by the system in some cases.
  This is the information reflected when requesting the details of a
  server's volume attachments:
  http://developer.openstack.org/api-ref-compute-v2-ext.html
  /v2/{tenant_id}/servers/{server_id}/os-volume_attachments
  Show volume attachment details

  In the example, I have 2 test servers and 1 test volume.
  I attach the volume to test_server1 and it is successful (though please see: 
https://bugs.launchpad.net/cinder/+bug/1398583)
  Next, I try to attach the same volume to test_server2.
  This call fails as expected, but the mountpoint / attachment is still 
registered.

  To demonstrate, I repeat the previous call.  It fails again, but this
  time due to the requested mountpoint being in-use vs. the volume being
  attached.

  I next make a call to list the volume attachments for test_server2.
  It lists volume attachments even though there are none and the Cinder
  api server does not register this.
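
  A minimal reproduction sketch with python-novaclient (credentials, IDs
  and device path are placeholders)::

      from novaclient.v1_1 import client

      nova = client.Client('admin', 'password', 'admin',
                           'http://keystone.example.com:5000/v2.0')

      server2_id = 'SERVER2-UUID'  # placeholder
      volume_id = 'VOLUME-UUID'    # placeholder: already attached elsewhere

      # attaching the already-attached volume to test_server2 fails...
      try:
          nova.volumes.create_server_volume(server2_id, volume_id, '/dev/vdb')
      except Exception as exc:
          print('attach failed as expected: %s' % exc)
      # ...yet a bogus attachment is afterwards reported for test_server2
      for attachment in nova.volumes.get_server_volumes(server2_id):
          print(attachment.volumeId, attachment.device)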

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1398588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260102] Re: Addition/Removal of a baremetal node does not propagate to scheduler quickly

2014-03-09 Thread Robert Collins
** Summary changed:

- Removal of a baremetal node does not propagate to scheduler quickly
+ Addition/Removal of a baremetal node does not propagate to scheduler quickly

** Description changed:

- With the baremetal driver, if a baremetal node is deleted, it is not
- removed from pool of available resources until the next run of
+ With the baremetal driver, if a baremetal node is added/deleted, it is
+ not removed from pool of available resources until the next run of
  update_available_resource(). During this window, the scheduler may
- continue to attempt to schedule instances on it, leading to unnecessary
- failures and scheduling retries.
+ continue to attempt to schedule instances on it (when deleted), or
+ report NoValidHosts (when added) leading to unnecessary failures and
+ scheduling retries.
  
  This bug will also apply to the ironic driver, which will rely on the
  same mechanism(s) for listing and scheduling compute resources.

** Also affects: ironic
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260102

Title:
  Addition/Removal of a baremetal node does not propagate to scheduler
  quickly

Status in Ironic (Bare Metal Provisioning):
  New
Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  With the baremetal driver, if a baremetal node is added/deleted, it is
  not removed from pool of available resources until the next run of
  update_available_resource(). During this window, the scheduler may
  continue to attempt to schedule instances on it (when deleted), or
  report NoValidHosts (when added) leading to unnecessary failures and
  scheduling retries.

  This bug will also apply to the ironic driver, which will rely on the
  same mechanism(s) for listing and scheduling compute resources.
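
  A minimal sketch of the mechanism in question, assuming the
  oslo-incubator periodic task decorator nova used at the time (the
  spacing value is illustrative)::

      from nova.openstack.common import periodic_task

      class ComputeManager(object):
          @periodic_task.periodic_task(spacing=60)
          def update_available_resource(self, context):
              # node inventory is only re-enumerated when this fires, so
              # the scheduler works from a stale view between runs
              ...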

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1260102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290540] Re: nova api deprecation warning

2014-03-10 Thread Robert Collins
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290540

Title:
  nova api deprecation warning

Status in OpenStack Compute (Nova):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  WARNING nova.network.neutronv2 [req-eb54925d-c466-4069-be4e-
  691e155ea85d None None] Using neutron_admin_tenant_name for
  authentication is deprecated and will be removed in the next release.
  Use neutron_admin_tenant_id instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1290540/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295703] Re: ci-overcloud job failing "Error while processing VIF ports"

2014-03-25 Thread Robert Collins
** Changed in: tripleo
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1295703

Title:
  ci-overcloud job failing "Error while processing VIF ports"

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  ci-overcloud jobs started failing between 5 and 8 AM GMT

  
  Error from 
http://logs.openstack.org/73/79873/5/check-tripleo/check-tripleo-overcloud-precise/859d4d4/

  var/log/upstart/neutron-openvswitch-agent.log ( on contoller and 1
  compute)

  [-] Error while processing VIF ports
  Traceback (most recent call last): 
File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 1230, in rpc_loop
  sync = self.process_network_ports(port_info)
File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 1084, in process_network_ports
  devices_added_updated)
File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 984, in treat_devices_added_or_updated
  details['admin_state_up'])
File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 893, in treat_vif_port
  physical_network, segmentation_id)
File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 593, in port_bound
  physical_network, segmentation_id)
File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 459, in provision_local_vlan
  (segmentation_id, ofports))
File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/ovs_lib.py",
 line 190, in mod_flow
  flow_str = _build_flow_expr_str(kwargs, 'mod')
File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/ovs_lib.py",
 line 546, in _build_flow_expr_str
  raise exceptions.InvalidInput(error_message=msg)
  InvalidInput: Invalid input for operation: Cannot match priority on flow 
deletion or modification.
  2014-03-21 05:20:56.329 7601 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent

  
  Merge times and traceback details seem to match up with
  https://review.openstack.org/#/c/58533/19

  Currently I'm testing a revert to see if it fixes things.
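
  From the trace, the InvalidInput comes from a guard in ovs_lib's
  flow-expression builder; a rough sketch reconstructed from the
  traceback (not neutron's exact code)::

      from neutron.common import exceptions

      def _build_flow_expr_str(flow_dict, cmd):
          # ovs-ofctl only accepts a priority match when *adding* flows,
          # so mod_flow/delete_flows callers must not pass one through
          if cmd != 'add' and 'priority' in flow_dict:
              msg = 'Cannot match priority on flow deletion or modification'
              raise exceptions.InvalidInput(error_message=msg)
          # ... build and return the actual flow expression string ...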

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1295703/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301117] [NEW] floating ip quota error neutron

2014-04-01 Thread Robert Collins
Public bug reported:

Found with nodepool on the tripleo CI cloud - when the neutron floating
IP quota is exhausted, novaclient add-floating-ip threw a generic 500.

ClientException: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-43db38fd-5b65-40be-960f-d00bf74e23b9)

 /var/log/upstart/nova-api.log:2014-04-02 02:30:03.375 7736 ERROR nova.api.openstack [req-43db38fd-5b65-40be-960f-d00bf74e23b9 d5af62d2183d431796d74c5bb119ec9f e01e473a9250498883955b80966a1e58] Caught error: 409-{u'NeutronError': {u'message': u"Quota exceeded for resources: ['floatingip']", u'type': u'OverQuota', u'detail': u''}}

was found in our logs - so this should be caught and rethrown appropriately.
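
A minimal sketch of the translation being asked for, in nova's neutron
floating-IP path (the exception class names are assumptions based on
neutronclient/nova conventions, not a confirmed patch)::

    from neutronclient.common import exceptions as neutron_exc

    from nova import exception

    def allocate_floating_ip(client, ext_net_id):
        try:
            body = {'floatingip': {'floating_network_id': ext_net_id}}
            return client.create_floatingip(body)
        except neutron_exc.OverQuotaClient:
            # surface a quota error instead of leaking a generic 500
            raise exception.FloatingIpLimitExceeded()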

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: api neutron

** Tags added: api neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1301117

Title:
  floating ip quota error neutron

Status in OpenStack Compute (Nova):
  New

Bug description:
  Found with nodepool on the tripleo CI cloud - when the neutron
  floating IP quota is exhausted, novaclient add-floating-ip threw a
  generic 500.

  ClientException: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-43db38fd-5b65-40be-960f-d00bf74e23b9)

  /var/log/upstart/nova-api.log:2014-04-02 02:30:03.375 7736 ERROR nova.api.openstack [req-43db38fd-5b65-40be-960f-d00bf74e23b9 d5af62d2183d431796d74c5bb119ec9f e01e473a9250498883955b80966a1e58] Caught error: 409-{u'NeutronError': {u'message': u"Quota exceeded for resources: ['floatingip']", u'type': u'OverQuota', u'detail': u''}}

  was found in our logs - so this should be caught and rethrown appropriately.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1301117/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301128] [NEW] nova floating-ip-list has blank server id field

2014-04-01 Thread Robert Collins
Public bug reported:

See for instance http://paste.ubuntu.com/7192574/

+---+---+-+-+
| Ip| Server Id | Fixed Ip| Pool|
+---+---+-+-+
| 138.35.77.109 |   | -   | ext-net |
| 138.35.77.50  |   | -   | ext-net |
| 138.35.77.36  |   | -   | ext-net |

This appears to be a regression - it's breaking nodepool (see bug 130
for context) and we used to have this working before we recently
upgraded our nova.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1301128

Title:
  nova floating-ip-list has blank server id field

Status in OpenStack Compute (Nova):
  New

Bug description:
  See for instance http://paste.ubuntu.com/7192574/

  +---+---+-+-+
  | Ip| Server Id | Fixed Ip| Pool|
  +---+---+-+-+
  | 138.35.77.109 |   | -   | ext-net |
  | 138.35.77.50  |   | -   | ext-net |
  | 138.35.77.36  |   | -   | ext-net |

  This appears to be a regression - it's breaking nodepool (see bug
  130 for context) and we used to have this working before we
  recently upgraded our nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1301128/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300693] Re: devtest.sh run leaves 3 default security groups

2014-04-05 Thread Robert Collins
So, if there is a bug here, it is neutron security-group-list showing
groups for other tenants by default. I'm going to retarget this to
neutron.

** Project changed: tripleo => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1300693

Title:
  devtest.sh run leaves 3 default security groups

Status in OpenStack Neutron (virtual network service):
  Incomplete

Bug description:
  
  After a run of devtest.sh, you find that there are 3 default security groups:

  e.g.

  (tempest)root@overcloud-notcompute0-zeb6ebhvusrs:/opt/stack/tempest# neutron 
security-group-list
  +--+-+-+
  | id   | name| description |
  +--+-+-+
  | 1a29a961-4331-4022-87d5-d6432127d750 | default | default |
  | c6e2198b-cc6a-4992-84dd-00fb6753be59 | default | default |
  | fc932556-70fe-4e8c-b708-1588b38907f2 | default | default |
  +--+-+-+

  
  While not a major problem, it does get in the way, and should be tidied up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1300693/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280941] Re: metadata agent throwing AttributeError: 'HTTPClient' object has no attribute 'auth_tenant_id' with latest release

2014-04-29 Thread Robert Collins
** Changed in: tripleo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1280941

Title:
  metadata agent throwing AttributeError: 'HTTPClient' object has no
  attribute 'auth_tenant_id' with latest release

Status in Ubuntu Cloud Archive:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released
Status in Python client library for Neutron:
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released
Status in “python-neutronclient” package in Ubuntu:
  Fix Released
Status in “python-neutronclient” source package in Saucy:
  Fix Released
Status in “python-neutronclient” source package in Trusty:
  Fix Released

Bug description:
  So we need a new release - this is fixed in:
  commit 02baef46968b816ac544b037297273ff6a4e8e1b

  but until a new release is done, anyone running trunk Neutron will
  have the metadata agent fail.

  And neutron itself is missing a versioned dep on the fixed client (but
  obviously that has to wait for the client release to be done)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1280941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263294] Re: ephemeral0 of /dev/sda1 triggers 'did not find entry for sda1 in /sys/block'

2014-05-08 Thread Robert Collins
Reopening this - once we've configured services to use /mnt, it's a
fairly fatal error when we nova rebuild the machine (preserving the
state partition), because said services start up after cloud-init but
before our workaround-hack to fix things. That is, our hack is
insufficient.

** Changed in: tripleo
   Status: Fix Released => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1263294

Title:
  ephemeral0 of /dev/sda1 triggers 'did not find entry for sda1 in
  /sys/block'

Status in Init scripts for use on cloud images:
  Triaged
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  This is due to line 227 of ./cloudinit/config/cc_mounts.py::

  short_name = os.path.basename(device)
  sys_path = "/sys/block/%s" % short_name

  if not os.path.exists(sys_path):
  LOG.debug("did not find entry for %s in /sys/block", short_name)
  return None

  The sys path for /dev/sda1 is /sys/block/sda/sda1.
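
  A minimal sketch of one possible fix: /sys/class/block, unlike
  /sys/block, has flat entries for partitions such as sda1 (a sketch
  against the snippet above, not necessarily cloud-init's eventual
  patch)::

      import os

      def device_sys_path(device):
          short_name = os.path.basename(device)          # e.g. 'sda1'
          sys_path = "/sys/class/block/%s" % short_name  # covers partitions
          if not os.path.exists(sys_path):
              return None
          return sys_path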

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1263294/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1317854] [NEW] nova api error Malformed request URL: URL's project_id '$project_id' doesn't match Context's project_id '$request_token'

2014-05-09 Thread Robert Collins
Public bug reported:

Folk in TripleO are seeing nova list fail with an error about a
malformed URL - but the claimed context.project_id is actually the
request token that keystone gave novaclient.

sample trace at http://paste.ubuntu.com/7420801/ (nova --debug list)
(and throwaway setup :))

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: tripleo
 Importance: Critical
 Status: Triaged

** Description changed:

  Folk in TripleO are seeing nova list fail with an error about a
  malformed URL - but the claimed context.project_id is actually the
  request token that keystone gave novaclient.
+ 
+ sample trace at http://paste.ubuntu.com/7420801/ (nova --debug list)
+ (and throwaway setup :))

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1317854

Title:
  nova api error Malformed request URL: URL's project_id '$project_id'
  doesn't match Context's project_id '$request_token'

Status in OpenStack Compute (Nova):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  Folk in TripleO are seeing nova list fail with an error about a
  malformed URL - but the claimed context.project_id is actually the
  request token that keystone gave novaclient.

  sample trace at http://paste.ubuntu.com/7420801/ (nova --debug list)
  (and throwaway setup :))

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1317854/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1317854] Re: nova api error Malformed request URL: URL's project_id '$project_id' doesn't match Context's project_id '$request_token'

2014-05-09 Thread Robert Collins
** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1317854

Title:
  nova api error Malformed request URL: URL's project_id '$project_id'
  doesn't match Context's project_id '$request_token'

Status in tripleo - openstack on openstack:
  In Progress

Bug description:
  Folk in TripleO are seeing nova list fail with an error about a
  malformed URL - but the claimed context.project_id is actually the
  request token that keystone gave novaclient.

  sample trace at http://paste.ubuntu.com/7420801/ (nova --debug list)
  (and throwaway setup :))

To manage notifications about this bug go to:
https://bugs.launchpad.net/tripleo/+bug/1317854/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1319997] [NEW] object of type 'NoneType' has no len when tokens expire

2014-05-15 Thread Robert Collins
Public bug reported:

When an expired UUID token is used, the following is logged in
keystone.log:

   2014-05-15 07:08:01.040 31086 ERROR keystone.common.wsgi [-] object of 
type 'NoneType' has no len()
2014-05-15 07:08:01.040 31086 TRACE keystone.common.wsgi Traceback (most recent 
call last):
2014-05-15 07:08:01.040 31086 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/keystone/local/lib/python2.7/site-packages/keystone/common/wsgi.py",
 line 207, in __call__
2014-05-15 07:08:01.040 31086 TRACE keystone.common.wsgi result = 
method(context, **params)
2014-05-15 07:08:01.040 31086 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/keystone/local/lib/python2.7/site-packages/keystone/token/controllers.py",
 line 98, in authenticate
2014-05-15 07:08:01.040 31086 TRACE keystone.common.wsgi context, auth)
2014-05-15 07:08:01.040 31086 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/keystone/local/lib/python2.7/site-packages/keystone/token/controllers.py",
 line 256, in _authenticate_local
2014-05-15 07:08:01.040 31086 TRACE keystone.common.wsgi if len(username) > 
CONF.max_param_size:
2014-05-15 07:08:01.040 31086 TRACE keystone.common.wsgi TypeError: object of 
type 'NoneType' has no len()

This is fairly noisy for a normal situation.
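
A minimal sketch of the obvious guard in _authenticate_local (names
taken from the trace; the ValidationSizeError usage is an assumption
modelled on keystone's other size checks)::

    # in keystone/token/controllers.py, _authenticate_local
    username = auth['passwordCredentials'].get('username')
    if username is not None and len(username) > CONF.max_param_size:
        raise exception.ValidationSizeError(attribute='username',
                                            size=CONF.max_param_size)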

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1319997

Title:
  object of type 'NoneType' has no len when tokens expire

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When an expired UUID token is used, the following is logged in
  keystone.log:

 2014-05-15 07:08:01.040 31086 ERROR keystone.common.wsgi [-] object of 
type 'NoneType' has no len()
  2014-05-15 07:08:01.040 31086 TRACE keystone.common.wsgi Traceback (most 
recent call last):
  2014-05-15 07:08:01.040 31086 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/keystone/local/lib/python2.7/site-packages/keystone/common/wsgi.py",
 line 207, in __call__
  2014-05-15 07:08:01.040 31086 TRACE keystone.common.wsgi result = 
method(context, **params)
  2014-05-15 07:08:01.040 31086 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/keystone/local/lib/python2.7/site-packages/keystone/token/controllers.py",
 line 98, in authenticate
  2014-05-15 07:08:01.040 31086 TRACE keystone.common.wsgi context, auth)
  2014-05-15 07:08:01.040 31086 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/keystone/local/lib/python2.7/site-packages/keystone/token/controllers.py",
 line 256, in _authenticate_local
  2014-05-15 07:08:01.040 31086 TRACE keystone.common.wsgi if len(username) 
> CONF.max_param_size:
  2014-05-15 07:08:01.040 31086 TRACE keystone.common.wsgi TypeError: object of 
type 'NoneType' has no len()

  This is fairly noisy for a normal situation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1319997/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326289] Re: Failing to launch instances : Filter ComputeCapabilitiesFilter returned 0 hosts

2014-06-12 Thread Robert Collins
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326289

Title:
  Failing to launch instances : Filter ComputeCapabilitiesFilter
  returned 0 hosts

Status in OpenStack Compute (Nova):
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  Failure started between 1 and 2 AM UTC

  Running nova in debug mode shows the problem

  Jun 04 09:15:55 localhost nova-scheduler[9605]: 2014-06-04 09:15:55.259 9605 DEBUG nova.filters [req-c37d26da-66de-4658-ba6f-a06a775f1a28 None] Filter ComputeFilter returned 1 host(s) get_filtered_objects /opt/stack/venvs/nova/lib/python2.7/site-packages/nova/filters.py:88
  Jun 04 09:15:55 localhost nova-scheduler[9605]: 2014-06-04 09:15:55.259 9605 DEBUG nova.scheduler.filters.compute_capabilities_filter [req-c37d26da-66de-4658-ba6f-a06a775f1a28 None] (seed, 8f3d2259-ef0b-44fc-a0c4-4d5cc2ef1443) ram:3072 disk:40960 io_ops:0 instances:0 fails instance_type extra_specs requirements host_passes /opt/stack/venvs/nova/lib/python2.7/site-packages/nova/scheduler/filters/compute_capabilities_filter.py:72
  Jun 04 09:15:55 localhost nova-scheduler[9605]: 2014-06-04 09:15:55.260 9605 INFO nova.filters [req-c37d26da-66de-4658-ba6f-a06a775f1a28 None] Filter ComputeCapabilitiesFilter returned 0 hosts
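
  A simplified sketch of the comparison behind that log line (an
  approximation only; the real ComputeCapabilitiesFilter also understands
  operators such as '>=' inside the spec values):

      def host_passes(host_capabilities, flavor_extra_specs):
          # Reject the host when any extra_spec the flavor requests is
          # missing from, or does not match, the host's advertised
          # capabilities; the "fails instance_type extra_specs
          # requirements" message above is this rejection being logged.
          for key, required in flavor_extra_specs.items():
              if host_capabilities.get(key) != required:
                  return False
          return True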

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1326289/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521756] [NEW] race/python3 issue

2015-12-01 Thread Robert Collins
Public bug reported:

Victor asked me to have a look at an intermittent failure he was
seeing in https://review.openstack.org/#/c/250083/

it shows up like so:
Traceback (most recent call last):
  File "/home/robertc/work/openstack/glance/.tox/py34/lib/python3.4/site-packages/eventlet/greenpool.py", line 82, in _spawn_n_impl
    func(*args, **kwargs)
  File "/home/robertc/work/openstack/glance/glance/api/v1/images.py", line 737, in _upload_and_activate
    location_data = self._upload(req, image_meta)
  File "/home/robertc/work/openstack/glance/glance/api/v1/images.py", line 671, in _upload
    {'status': 'saving'})
  File "/home/robertc/work/openstack/glance/glance/registry/client/v1/api.py", line 174, in update_image_metadata
    from_state=from_state)
  File "/home/robertc/work/openstack/glance/glance/registry/client/v1/client.py", line 209, in update_image
    headers=headers)
  File "/home/robertc/work/openstack/glance/glance/registry/client/v1/client.py", line 141, in do_request
    'exc_name': exc_name})
  File "/home/robertc/work/openstack/glance/.tox/py34/lib/python3.4/site-packages/oslo_utils/excutils.py", line 204, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/home/robertc/work/openstack/glance/.tox/py34/lib/python3.4/site-packages/six.py", line 686, in reraise
    raise value
  File "/home/robertc/work/openstack/glance/glance/registry/client/v1/client.py", line 124, in do_request
    **kwargs)
  File "/home/robertc/work/openstack/glance/glance/common/client.py", line 71, in wrapped
    return func(self, *args, **kwargs)
  File "/home/robertc/work/openstack/glance/glance/common/client.py", line 375, in do_request
    headers=copy.deepcopy(headers))
  File "/home/robertc/work/openstack/glance/glance/common/client.py", line 88, in wrapped
    return func(self, method, url, body, headers)
  File "/home/robertc/work/openstack/glance/glance/common/client.py", line 524, in _do_request
    raise exception.NotFound(res.read())
glance.common.exception.NotFound: b'Image not found'
======================================================================
FAIL: glance.tests.unit.v1.test_api.TestGlanceAPI.test_upload_image_http_nonexistent_location_url
tags: worker-0
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/robertc/work/openstack/glance/glance/tests/unit/v1/test_api.py", line 1149, in test_upload_image_http_nonexistent_location_url
    self.assertEqual(404, res.status_int)
  File "/home/robertc/work/openstack/glance/.tox/py34/lib/python3.4/site-packages/testtools/testcase.py", line 350, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/home/robertc/work/openstack/glance/.tox/py34/lib/python3.4/site-packages/testtools/testcase.py", line 435, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: 404 != 201


and through bisection I can reproduce it with 75 tests. I'm working on
shrinking the set further, but it takes a couple of hundred runs to be
confident that a candidate branch is a false lead, so it's not a fast
process.
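
A rough sketch of the repeat-until-confident loop that this bisection
implies (testtools.run's --load-list flag consumes a test list such as the
attached file; the run count of 200 is an assumption based on the "couple
of hundred runs" figure above):

    import subprocess

    def subset_still_fails(list_file, runs=200):
        # The failure is intermittent, so a candidate subset can only be
        # declared clean after many consecutive passing runs; any single
        # non-zero exit means the race fired and this subset is kept.
        for _ in range(runs):
            rc = subprocess.call(
                ['python', '-m', 'testtools.run', '--load-list', list_file])
            if rc != 0:
                return True
        return False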

** Affects: glance
 Importance: Undecided
 Status: New

** Attachment added: "current working set to reproduce"
   
https://bugs.launchpad.net/bugs/1521756/+attachment/4528225/+files/worker-0-l-l-l

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1521756

Title:
  race/python3 issue

Status in Glance:
  New

Bug description:
  Victor asked me to have a look at an intermittent failure he was
  seeing in https://review.openstack.org/#/c/250083/

  it shows up like so:
  Traceback (most recent call last):
    File "/home/robertc/work/openstack/glance/.tox/py34/lib/python3.4/site-packages/eventlet/greenpool.py", line 82, in _spawn_n_impl
      func(*args, **kwargs)
    File "/home/robertc/work/openstack/glance/glance/api/v1/images.py", line 737, in _upload_and_activate
      location_data = self._upload(req, image_meta)
    File "/home/robertc/work/openstack/glance/glance/api/v1/images.py", line 671, in _upload
      {'status': 'saving'})
    File "/home/robertc/work/openstack/glance/glance/registry/client/v1/api.py", line 174, in update_image_metadata
      from_state=from_state)
    File "/home/robertc/work/openstack/glance/glance/registry/client/v1/client.py", line 209, in update_image
      headers=headers)
    File "/home/robertc/work/openstack/glance/glance/registry/client/v1/client.py", line 141, in do_request
      'exc_name': exc_name})
    File "/home/robertc/work/openstack/glance/.tox/py34/lib/python3.4/site-packages/oslo_utils/excutils.py", line 204, in __exit__
      six.reraise(self.type_, self.value, self.tb)
    File "/home/robertc/work/openstack/glance/.tox/py34/lib/python3.4/site-packages/six.py", line 686, in reraise
      raise value
    File "/home/robertc/work/openstack/glance/glance/registry/client/v1