[Yahoo-eng-team] [Bug 1469974] Re: kilo version swift doesn't work, showing swiftclient: Authorization Failure, the resource could not be found

2015-06-30 Thread Yao Long
** Description changed:

  # swift --debug list
  DEBUG:keystoneclient.auth.identity.v2:Making authentication request to 
http://swift-test-keystone:35357/v3/tokens
  INFO:urllib3.connectionpool:Starting new HTTP connection (1): 
swift-test-keystone
  DEBUG:urllib3.connectionpool:POST /v3/tokens HTTP/1.1 404 93
  DEBUG:keystoneclient.session:Request returned failure status: 404
  ERROR:swiftclient:Authorization Failure. Authorization Failed: The resource 
could not be found. (HTTP 404) (Request-ID: 
req-93f815b5-7c3b-462e-b3c9-8c0ecf9f4da3)
  Traceback (most recent call last):
-   File "/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 1235, in _retry
- self.url, self.token = self.get_auth()
-   File "/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 1209, in get_auth
- insecure=self.insecure)
-   File "/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 406, in get_auth
- auth_version=auth_version)
-   File "/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 341, in get_auth_keystone
- raise ClientException('Authorization Failure. %s' % err)
+   File "/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 1235, in _retry
+ self.url, self.token = self.get_auth()
+   File "/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 1209, in get_auth
+ insecure=self.insecure)
+   File "/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 406, in get_auth
+ auth_version=auth_version)
+   File "/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 341, in get_auth_keystone
+ raise ClientException('Authorization Failure. %s' % err)
  ClientException: Authorization Failure. Authorization Failed: The resource 
could not be found. (HTTP 404) (Request-ID: 
req-93f815b5-7c3b-462e-b3c9-8c0ecf9f4da3)
  Authorization Failure. Authorization Failed: The resource could not be found. 
(HTTP 404) (Request-ID: req-93f815b5-7c3b-462e-b3c9-8c0ecf9f4da3)
  
  Setup of this environment is based on the installation guide for the
  kilo version. The OpenStack services on the keystone node work just
  fine, but swift doesn't.
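To illustrate the failure mode: the keystoneclient v2 auth plugin (which swiftclient selected here) simply appends /tokens to whatever OS_AUTH_URL is configured, so a v3 URL produces a /v3/tokens path that Keystone does not serve. A minimal sketch of the URL construction (hypothetical helper, not swiftclient's actual code):

```python
# Sketch: the v2 auth plugin POSTs to <OS_AUTH_URL>/tokens, so the version
# suffix in OS_AUTH_URL must match the auth plugin version.
def v2_token_url(auth_url):
    return auth_url.rstrip("/") + "/tokens"

# With a v3 URL the v2 plugin hits a path that does not exist -> HTTP 404.
bad = v2_token_url("http://swift-test-keystone:35357/v3")     # /v3/tokens
good = v2_token_url("http://swift-test-keystone:35357/v2.0")  # /v2.0/tokens
```

This matches the debug output above: the request goes to /v3/tokens and Keystone answers 404.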
+ 
+ Then I modified the OpenStack client environment scripts from
+ export OS_AUTH_URL=http://controller:35357/v3
+ to
+ export OS_AUTH_URL=http://controller:35357/v2.0
+ 
+ The debug info is below:
+ 
+ DEBUG:keystoneclient.auth.identity.v2:Making authentication request to 
http://swift-test-keystone:35357/v2.0/tokens
+ INFO:urllib3.connectionpool:Starting new HTTP connection (1): 
swift-test-keystone
+ DEBUG:urllib3.connectionpool:POST /v2.0/tokens HTTP/1.1 200 1213
+ DEBUG:iso8601.iso8601:Parsed 2015-06-30T08:21:48Z into {'tz_sign': None, 'second_fraction': None, 'hour': u'08', 'daydash': u'30', 'tz_hour': None, 'month': None, 'timezone': u'Z', 'second': u'48', 'tz_minute': None, 'year': u'2015', 'separator': u'T', 'monthdash': u'06', 'day': None, 'minute': u'21'} with default timezone <iso8601.iso8601.Utc object at 0x7f070f511590>
+ DEBUG:iso8601.iso8601:Got u'2015' for 'year' with default None
+ DEBUG:iso8601.iso8601:Got u'06' for 'monthdash' with default 1
+ DEBUG:iso8601.iso8601:Got 6 for 'month' with default 6
+ DEBUG:iso8601.iso8601:Got u'30' for 'daydash' with default 1
+ DEBUG:iso8601.iso8601:Got 30 for 'day' with default 30
+ DEBUG:iso8601.iso8601:Got u'08' for 'hour' with default None
+ DEBUG:iso8601.iso8601:Got u'21' for 'minute' with default None
+ DEBUG:iso8601.iso8601:Got u'48' for 'second' with default None
+ INFO:urllib3.connectionpool:Starting new HTTP connection (1): 
swift-test-proxynode1
+ DEBUG:urllib3.connectionpool:GET 
/v1/AUTH_c29f928f72f146fc9411e35c515c00a7?format=json HTTP/1.1 401 131
+ INFO:swiftclient:REQ: curl -i http://swift-test-proxynode1:8080/v1/AUTH_c29f928f72f146fc9411e35c515c00a7?format=json -X GET -H "X-Auth-Token: a09f170245314c3583c477ac36b5508b"
+ INFO:swiftclient:RESP STATUS: 401 Unauthorized
+ INFO:swiftclient:RESP HEADERS: [('content-length', '131'), ('connection', 
'keep-alive'), ('x-trans-id', 'tx2c1fe6ed35674405a65c3-005592438c'), ('date', 
'Tue, 30 Jun 2015 07:21:48 GMT'), ('content-type', 'text/html; charset=UTF-8'), 
('www-authenticate', 'Swift realm=AUTH_c29f928f72f146fc9411e35c515c00a7')]
+ INFO:swiftclient:RESP BODY: <html><h1>Unauthorized</h1><p>This server could not verify that you are authorized to access the document you requested.</p></html>
+ DEBUG:keystoneclient.auth.identity.v2:Making authentication request to 
http://swift-test-keystone:35357/v2.0/tokens
+ INFO:urllib3.connectionpool:Starting new HTTP connection (1): 
swift-test-keystone
+ DEBUG:urllib3.connectionpool:POST /v2.0/tokens HTTP/1.1 200 1213
+ DEBUG:iso8601.iso8601:Parsed 2015-06-30T08:21:49Z into {'tz_sign': None, 
'second_fraction': None, 'hour': u'08', 'daydash': u'30', 'tz_hour': None, 
'month': None, 'timezone': u'Z', 'second': u'49', 'tz_minute': None, 'year': 
u'2015', 'separator': u'T', 'monthdash': u'06', 'day': None, 'minute': u'21'} 
with default timezone iso8601.iso8601.Utc object at 

[Yahoo-eng-team] [Bug 1470302] [NEW] gate-nova-python27 fails with RuntimeError: maximum recursion depth exceeded

2015-06-30 Thread Tony Breeds
Public bug reported:

Review: https://review.openstack.org/#/c/194325/ failed with $subject

Logstash:
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiUnVudGltZUVycm9yOiBtYXhpbXVtIHJlY3Vyc2lvbiBkZXB0aCBleGNlZWRlZFwiIEFORCB0YWdzOlwiY29uc29sZVwiIEFORCBidWlsZF9xdWV1ZTpcImdhdGVcIiBBTkQgYnVpbGRfbmFtZTpcImdhdGUtbm92YS1weXRob24yN1wiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDM1NzE1Njg3MDU0LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

Possibly related to: https://review.openstack.org/#/c/197176

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470302

Title:
  gate-nova-python27 fails with RuntimeError: maximum recursion depth
  exceeded

Status in OpenStack Compute (Nova):
  New

Bug description:
  Review: https://review.openstack.org/#/c/194325/ failed with $subject

  Logstash:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiUnVudGltZUVycm9yOiBtYXhpbXVtIHJlY3Vyc2lvbiBkZXB0aCBleGNlZWRlZFwiIEFORCB0YWdzOlwiY29uc29sZVwiIEFORCBidWlsZF9xdWV1ZTpcImdhdGVcIiBBTkQgYnVpbGRfbmFtZTpcImdhdGUtbm92YS1weXRob24yN1wiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDM1NzE1Njg3MDU0LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

  Possibly related to: https://review.openstack.org/#/c/197176

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470302/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469749] Re: RamFilter logging partially considers ram-allocation-ratio

2015-06-30 Thread Tony Breeds
The log message contains the information required: the hypervisor has
10148 MB of RAM, of which 480.4 MB is usable. The instance requires 2048 MB.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1469749

Title:
  RamFilter logging partially considers ram-allocation-ratio

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  Package: nova-scheduler
  Version: 1:2014.1.4-0ubuntu2.1

  RamFilter accurately skips a host because RAM resource is not enough
  for requested VM. However, I think log should be more explicit on
  numbers, taking into account ram-allocation-ratio can be different
  from 1.0.

  Log excerpt:
  2015-06-29 12:04:21.422 15708 DEBUG nova.scheduler.filters.ram_filter 
[req-d14d9f04-c2b1-42be-b5b9-669318bb0030 3cca8ee6898e42f287adbd4f5dac1801 
a0ae7f82f577413ab0d73f3dc09fb906] (hostname, hostname.tld) ram:10148 
disk:264192 io_ops:0 instances:39 does not have 2048 MB usable ram, it only has 
480.4 MB usable ram. host_passes 
/usr/lib/python2.7/dist-packages/nova/scheduler/filters/ram_filter.py:60

  In the log above, RAM says 10148 (MB), which seems enough for a 2048 MB VM.
  The first number (10148) is calculated as: TotalMB - UsedMB. The additional
  (real) number should be: TotalMB * RamAllocRatio - UsedMB.

  In this case, ram-allocation-ratio is 0.9, which results in 480.4 MB.
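The two formulas can be checked with a worked example. The total and used values below are assumptions, reconstructed so that they reproduce both figures in the log (10148 and 480.4):

```python
# Hypothetical totals consistent with the log excerpt above.
total_mb = 96676.0
used_mb = 86528.0
ratio = 0.9  # ram-allocation-ratio

reported = total_mb - used_mb        # what the log prints as "ram:": 10148
usable = total_mb * ratio - used_mb  # what RamFilter actually compares: 480.4

assert reported == 10148.0
assert round(usable, 1) == 480.4
assert usable < 2048                 # so the 2048 MB instance is rejected
```

With an allocation ratio below 1.0, the "ram:" figure overstates capacity, which is exactly the confusion the reporter describes.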

  Please let me know if you'd need more details.

  Cheers,
  -Alvaro.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1469749/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467560] Re: RFE: add instance uuid field to nova.quota_usages table

2015-06-30 Thread Tony Breeds
This is reported against Icehouse, which is closed for development.

Please reproduce with Kilo or liberty-1 and reopen.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467560

Title:
  RFE: add instance uuid field to nova.quota_usages table

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  In Icehouse, the nova.quota_usages table frequently gets out-of-sync
  with the currently active/stopped instances in a tenant/project,
  specifically, there are times when the instance will be set to
  terminated/deleted in the instances table and the quota_usages table
  will retain the data, counting against the tenant's total quota.  As
  far as I can tell there is no way to correlate instances.uuid with the
  records in nova.quota_usages.

  I propose adding an instance uuid column to make future cleanup of
  this table easier.

  I also propose a housecleaning task that does this clean up
  automatically.
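The proposed housecleaning task becomes a simple join once the column exists. A sketch with an in-memory SQLite stand-in (the schema here is illustrative, not nova's actual quota_usages schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE instances (uuid TEXT PRIMARY KEY, deleted INTEGER);
-- quota_usages with the *proposed* instance_uuid column
CREATE TABLE quota_usages (id INTEGER PRIMARY KEY, instance_uuid TEXT, in_use INTEGER);
INSERT INTO instances VALUES ('a', 1), ('b', 0);
INSERT INTO quota_usages (instance_uuid, in_use) VALUES ('a', 1), ('b', 1);
""")

# Housecleaning: drop usage rows whose instance has been deleted.
con.execute("""DELETE FROM quota_usages WHERE instance_uuid IN
               (SELECT uuid FROM instances WHERE deleted = 1)""")
rows = con.execute("SELECT instance_uuid FROM quota_usages").fetchall()
```

Without the instance_uuid column there is no key to drive this delete, which is the crux of the RFE.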

  Thanks,
  Dan

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467560/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470337] [NEW] Some aspects of subnets not validated when using subnet pools

2015-06-30 Thread Carl Baldwin
Public bug reported:

It looks like _validate_subnet is not called when allocating from a
subnet pool.  See here [1] for a discussion about it.

[1]
https://review.openstack.org/#/c/153236/89/neutron/db/db_base_plugin_v2.py

** Affects: neutron
 Importance: Undecided
 Assignee: Ryan Tidwell (ryan-tidwell)
 Status: New


** Tags: l3-ipam-dhcp

** Tags added: l3-dvr-backlog

** Tags removed: l3-dvr-backlog
** Tags added: l3-ipam-dhcp

** Changed in: neutron
 Assignee: (unassigned) => Ryan Tidwell (ryan-tidwell)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470337

Title:
  Some aspects of subnets not validated when using subnet pools

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  It looks like _validate_subnet is not called when allocating from a
  subnet pool.  See here [1] for a discussion about it.

  [1]
  https://review.openstack.org/#/c/153236/89/neutron/db/db_base_plugin_v2.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1470337/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470279] Re: ofagent unittest fails for multiple mock patches

2015-06-30 Thread fumihiko kakuma
** Project changed: neutron => networking-ofagent

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470279

Title:
  ofagent unittest fails for multiple mock patches

Status in OpenStack Networking ofagent driver and its agent:
  In Progress

Bug description:
  gate-networking-ofagent-python27 job fails with the following error.
  This error is caused by applying multiple mock patches to the same target,
  which was forbidden by https://review.openstack.org/#/c/195881/.

  2015-06-30 06:35:00.771 | {0} 
networking_ofagent.tests.unit.ofagent.test_arp_lib.TestArpLib.test_packet_in_handler_corrupted
 [0.030123s] ... FAILED
  2015-06-30 06:35:00.772 | 
  2015-06-30 06:35:00.772 | Captured traceback:
  2015-06-30 06:35:00.772 | ~~~
  2015-06-30 06:35:00.772 | Traceback (most recent call last):
  2015-06-30 06:35:00.772 |   File "networking_ofagent/tests/unit/ofagent/test_arp_lib.py", line 311, in test_packet_in_handler_corrupted
  2015-06-30 06:35:00.772 | side_effect=ValueError).start()
  2015-06-30 06:35:00.772 |   File "/home/jenkins/workspace/gate-networking-ofagent-python27/.tox/py27/src/neutron/neutron/tests/base.py", line 191, in new_start
  2015-06-30 06:35:00.772 | ''.join(self.first_traceback.get(key, []
  2015-06-30 06:35:00.772 |   File "/home/jenkins/workspace/gate-networking-ofagent-python27/.tox/py27/local/lib/python2.7/site-packages/unittest2/case.py", line 666, in fail
  2015-06-30 06:35:00.772 | raise self.failureException(msg)
  2015-06-30 06:35:00.772 | AssertionError: mock.patch was setup on an already patched target Mod(ryu.lib.packet.packet).Packet. Stop the original patch before starting a new one. Traceback of 1st patch:   File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
  2015-06-30 06:35:00.773 | "__main__", fname, loader, pkg_name)
  2015-06-30 06:35:00.773 |   File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
  2015-06-30 06:35:00.773 | exec code in run_globals
  2015-06-30 06:35:00.773 |   File "/home/jenkins/workspace/gate-networking-ofagent-python27/.tox/py27/lib/python2.7/site-packages/subunit/run.py", line 149, in <module>
  2015-06-30 06:35:00.773 | main()

  
  The whole log can be retrieved from the following address.

  http://logs.openstack.org/00/184900/3/check/gate-networking-ofagent-
  python27/18ee891/console.html
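The rule the gate now enforces can be shown with plain unittest.mock (a standalone sketch, not the ofagent test code): stop the first patch on a target before starting a second one.

```python
from unittest import mock

class Thing(object):
    def ping(self):
        return "pong"

# First patch: start, use, then STOP before re-patching the same target.
p1 = mock.patch.object(Thing, "ping", return_value="one")
p1.start()
first = Thing().ping()
p1.stop()  # releasing the target here is what the gate check requires

# Second patch on the same target is now fine.
p2 = mock.patch.object(Thing, "ping", side_effect=ValueError)
p2.start()
caught = False
try:
    Thing().ping()
except ValueError:
    caught = True
p2.stop()
restored = Thing().ping()  # original behaviour is back
```

Stacking the second patch without the `p1.stop()` is what triggers the "mock.patch was setup on an already patched target" assertion in neutron's test base.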

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ofagent/+bug/1470279/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470093] [NEW] The function _get_multipath_iqn does not return the complete set of IQNs

2015-06-30 Thread YaoZheng_ZTE
Public bug reported:

1. A SAN storage array can have more than one IQN, so one multipath device may map to more than one IQN.
2. The function is as follows:
def _get_multipath_iqn(self, multipath_device):
    entries = self._get_iscsi_devices()
    for entry in entries:
        entry_real_path = os.path.realpath("/dev/disk/by-path/%s" % entry)
        entry_multipath = self._get_multipath_device_name(entry_real_path)
        if entry_multipath == multipath_device:
            return entry.split("iscsi-")[1].split("-lun")[0]
    return None
So as soon as the multipath_device matches one device, the function returns, yielding only one IQN. But the multipath device may contain several single devices, as shown below:

[root@R4300G2-ctrl02 ~]# ll /dev/disk/by-path/
lrwxrwxrwx 1 root root  9 Jun 30 14:45 ip-172.12.1.1:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-a0:c0:00:00:00:53-lun-1 -> ../../sds
lrwxrwxrwx 1 root root  9 Jun 30 14:45 ip-172.12.2.1:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-a0:c0:00:00:00:53-lun-1 -> ../../sdl
lrwxrwxrwx 1 root root  9 Jun 30 14:45 ip-172.12.1.2:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-4c:09:b4:00:00:00-lun-1 -> ../../sdo
lrwxrwxrwx 1 root root  9 Jun 30 14:45 ip-172.12.2.2:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-4c:09:b4:00:00:00-lun-1 -> ../../sdm
So the device has two different IQNs (iqn.2099-01.cn.com.zte:usp.spr-a0:c0:00:00:00:53 and iqn.2099-01.cn.com.zte:usp.spr-4c:09:b4:00:00:00).
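A sketch of the direction a fix could take (hypothetical helper, not nova's actual code): accumulate the IQN of every by-path entry instead of returning on the first match. The matching against the multipath device is assumed to have happened already.

```python
def get_multipath_iqns(entries):
    """Collect ALL IQNs from /dev/disk/by-path names like
    'ip-<ip>:<port>-iscsi-<iqn>-lun-<n>' (entries assumed pre-filtered
    to those resolving to the multipath device)."""
    iqns = set()
    for entry in entries:
        # same parsing as the original function, but no early return
        iqns.add(entry.split("iscsi-")[1].split("-lun")[0])
    return iqns

entries = [
    "ip-172.12.1.1:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-a0:c0:00:00:00:53-lun-1",
    "ip-172.12.1.2:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-4c:09:b4:00:00:00-lun-1",
]
iqns = get_multipath_iqns(entries)  # two distinct IQNs, matching the listing
```

Returning the full set would let the caller log out of, or scan, every portal the device spans.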

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470093

Title:
  The function _get_multipath_iqn does not return the complete set of IQNs

Status in OpenStack Compute (Nova):
  New

Bug description:
  1. A SAN storage array can have more than one IQN, so one multipath device may map to more than one IQN.
  2. The function is as follows:
  def _get_multipath_iqn(self, multipath_device):
      entries = self._get_iscsi_devices()
      for entry in entries:
          entry_real_path = os.path.realpath("/dev/disk/by-path/%s" % entry)
          entry_multipath = self._get_multipath_device_name(entry_real_path)
          if entry_multipath == multipath_device:
              return entry.split("iscsi-")[1].split("-lun")[0]
      return None
  So as soon as the multipath_device matches one device, the function returns, yielding only one IQN. But the multipath device may contain several single devices, as shown below:

  [root@R4300G2-ctrl02 ~]# ll /dev/disk/by-path/
  lrwxrwxrwx 1 root root  9 Jun 30 14:45 ip-172.12.1.1:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-a0:c0:00:00:00:53-lun-1 -> ../../sds
  lrwxrwxrwx 1 root root  9 Jun 30 14:45 ip-172.12.2.1:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-a0:c0:00:00:00:53-lun-1 -> ../../sdl
  lrwxrwxrwx 1 root root  9 Jun 30 14:45 ip-172.12.1.2:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-4c:09:b4:00:00:00-lun-1 -> ../../sdo
  lrwxrwxrwx 1 root root  9 Jun 30 14:45 ip-172.12.2.2:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-4c:09:b4:00:00:00-lun-1 -> ../../sdm
  So the device has two different IQNs (iqn.2099-01.cn.com.zte:usp.spr-a0:c0:00:00:00:53 and iqn.2099-01.cn.com.zte:usp.spr-4c:09:b4:00:00:00).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470094] [NEW] live migration on ML2 Mechanism Event

2015-06-30 Thread a
Public bug reported:

Hi,
ML2 driver: a mechanism driver is called on the creation, update, and deletion of networks and ports. For every event, two methods get called: one within the database transaction (method suffix _precommit) and one right afterwards (method suffix _postcommit).

When live-migrating TestVM4 on OpenStack, I can't see any event (create_port_precommit, create_port_postcommit). I only see update_port_precommit and update_port_postcommit.

My question: how should the ML2 plugin be implemented to handle this?

2015-06-30 18:25:40.627 2807 DEBUG neutron.plugins.ml2.rpc 
[req-56a57784-0451-4f7d-a0d6-c866e4934bfc None] Device 
59571389-024f-4475-b7f5-1156dadd7e4e details requested by agent 
ovs-agent-TestVM4 with host TestVM4 get_device_details 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/rpc.py:60
2015-06-30 18:25:40.644 2807 DEBUG neutron.plugins.ml2.rpc 
[req-56a57784-0451-4f7d-a0d6-c866e4934bfc None] Returning: {'profile': {}, 
'admin_state_up': True, 'network_id': u'61eefda2-97df-4e5c-a844-a6aad001e5d9', 
'segmentation_id': 1501L, 'device_owner': u'compute:nova', 'physical_network': 
u'physnet1', 'mac_address': u'fa:16:3e:f6:d1:a4', 'device': 
u'59571389-024f-4475-b7f5-1156dadd7e4e', 'port_id': 
u'59571389-024f-4475-b7f5-1156dadd7e4e', 'fixed_ips': [{'subnet_id': 
u'db6690d1-dbe9-479c-b0dd-14c1ea5d2d43', 'ip_address': u'10.1.1.101'}], 
'network_type': u'vlan'} get_device_details 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/rpc.py:105
2015-06-30 18:25:40.797 2807 DEBUG neutron.context 
[req-56a57784-0451-4f7d-a0d6-c866e4934bfc None] Arguments dropped when creating 
context: {u'project_name': None, u'tenant': None} __init__ 
/usr/lib/python2.7/dist-packages/neutron/context.py:83
2015-06-30 18:25:40.797 2807 DEBUG neutron.plugins.ml2.rpc 
[req-56a57784-0451-4f7d-a0d6-c866e4934bfc None] Device 
59571389-024f-4475-b7f5-1156dadd7e4e up at agent ovs-agent-TestVM4 
update_device_up /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/rpc.py:155
2015-06-30 18:25:40.815 2807 DEBUG neutron.openstack.common.lockutils 
[req-56a57784-0451-4f7d-a0d6-c866e4934bfc None] Got semaphore db-access lock 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/lockutils.py:168
2015-06-30 18:25:40.834 2807 INFO neutron.plugins.ml2.drivers.xxx.mechanism_xxx 
[req-56a57784-0451-4f7d-a0d6-c866e4934bfc None] xxx ML2 driver: 
update_port_precommit
2015-06-30 18:25:40.838 2807 INFO neutron.plugins.ml2.drivers.xxx.mechanism_xxx 
[req-56a57784-0451-4f7d-a0d6-c866e4934bfc None] xxx ML2 driver: 
update_port_postcommit
2015-06-30 18:25:40.967 2807 DEBUG neutron.context 
[req-83e77512-6a29-47c7-825b-4cca5b94d823 None] Arguments dropped when creating 
context: {u'project_name': None, u'tenant': None} __init__ 
/usr/lib/python2.7/dist-packages/neutron/context.py:83
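The log above is consistent with how live migration surfaces in ML2: the port is updated (its binding host changes), not created. A duck-typed sketch of how a mechanism driver could react (the real driver would subclass neutron's api.MechanismDriver; the context/field names follow the port dict in the log, but this is illustrative code, not neutron's):

```python
class MigrationAwareDriver(object):
    """Hook update_port_postcommit, since migration is a port UPDATE:
    compare the original vs current binding host to detect the move."""
    def __init__(self):
        self.migrations = []

    def update_port_postcommit(self, context):
        old = context.original.get('binding:host_id')
        new = context.current.get('binding:host_id')
        if old and new and old != new:
            # hypothetical hook for backend reconfiguration
            self.migrations.append((context.current['id'], old, new))

class FakePortContext(object):
    """Minimal stand-in for ml2's PortContext (original/current port dicts)."""
    def __init__(self, original, current):
        self.original, self.current = original, current

driver = MigrationAwareDriver()
ctx = FakePortContext(
    {'id': 'p1', 'binding:host_id': 'TestVM3'},
    {'id': 'p1', 'binding:host_id': 'TestVM4'})
driver.update_port_postcommit(ctx)
```

So expecting create_port events during migration is the wrong mental model; the update events carry the needed information.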

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  Hi,
  ML2 driver: A mechanism driver is called on the creation, update, and 
deletion of networks and ports. For every event, there are two methods that get 
called - one within the database transaction (method suffix of _precommit), one 
right afterwards (method suffix of _postcommit)
  
- When live migrationon TestVM4 on Openstack, I can't see any enent(create_port_precommit, create_port_postcommit).
+ When live-migrating TestVM4 on OpenStack, I can't see any event (create_port_precommit, create_port_postcommit).
  I only see update_port_precommit and update_port_postcommit.
  
- I have question: How to do implmemt ML2 plugin?
- 
+ My question: how should the ML2 plugin be implemented to handle this?
  
  2015-06-30 18:25:40.627 2807 DEBUG neutron.plugins.ml2.rpc 
[req-56a57784-0451-4f7d-a0d6-c866e4934bfc None] Device 
59571389-024f-4475-b7f5-1156dadd7e4e details requested by agent 
ovs-agent-TestVM4 with host TestVM4 get_device_details 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/rpc.py:60
  2015-06-30 18:25:40.644 2807 DEBUG neutron.plugins.ml2.rpc 
[req-56a57784-0451-4f7d-a0d6-c866e4934bfc None] Returning: {'profile': {}, 
'admin_state_up': True, 'network_id': u'61eefda2-97df-4e5c-a844-a6aad001e5d9', 
'segmentation_id': 1501L, 'device_owner': u'compute:nova', 'physical_network': 
u'physnet1', 'mac_address': u'fa:16:3e:f6:d1:a4', 'device': 
u'59571389-024f-4475-b7f5-1156dadd7e4e', 'port_id': 
u'59571389-024f-4475-b7f5-1156dadd7e4e', 'fixed_ips': [{'subnet_id': 
u'db6690d1-dbe9-479c-b0dd-14c1ea5d2d43', 'ip_address': u'10.1.1.101'}], 
'network_type': u'vlan'} get_device_details 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/rpc.py:105
  2015-06-30 18:25:40.797 2807 DEBUG neutron.context 
[req-56a57784-0451-4f7d-a0d6-c866e4934bfc None] Arguments dropped when creating 
context: {u'project_name': None, u'tenant': None} __init__ 
/usr/lib/python2.7/dist-packages/neutron/context.py:83
  2015-06-30 18:25:40.797 2807 DEBUG neutron.plugins.ml2.rpc 
[req-56a57784-0451-4f7d-a0d6-c866e4934bfc None] Device 
59571389-024f-4475-b7f5-1156dadd7e4e up at agent 

[Yahoo-eng-team] [Bug 1470087] [NEW] stop glance-api service will raise exception

2015-06-30 Thread YaoZheng_ZTE
Public bug reported:

1. On a Red Hat system, running systemctl stop openstack-glance-api.service will stop the Glance API service.
2. After step 1, the Glance API log shows the following:
2015-06-18 08:58:47.538 11453 CRITICAL glance [-] OSError: [Errno 38] Function not implemented
2015-06-18 08:58:47.538 11453 TRACE glance Traceback (most recent call last):
2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/bin/glance-api", line 10, in <module>
2015-06-18 08:58:47.538 11453 TRACE glance sys.exit(main())
2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/lib/python2.7/site-packages/glance/cmd/api.py", line 90, in main
2015-06-18 08:58:47.538 11453 TRACE glance server.wait()
2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 406, in wait
2015-06-18 08:58:47.538 11453 TRACE glance self.wait_on_children()
2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 345, in wait_on_children
2015-06-18 08:58:47.538 11453 TRACE glance pid, status = os.wait()
2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/lib/python2.7/site-packages/eventlet/green/os.py", line 78, in wait
2015-06-18 08:58:47.538 11453 TRACE glance return waitpid(0, 0)
2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/lib/python2.7/site-packages/eventlet/green/os.py", line 96, in waitpid
2015-06-18 08:58:47.538 11453 TRACE glance greenthread.sleep(0.01)
2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 34, in sleep
2015-06-18 08:58:47.538 11453 TRACE glance hub.switch()
2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
2015-06-18 08:58:47.538 11453 TRACE glance return self.greenlet.switch()
2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 346, in run
2015-06-18 08:58:47.538 11453 TRACE glance self.wait(sleep_time)
2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/lib/python2.7/site-packages/eventlet/hubs/poll.py", line 82, in wait
2015-06-18 08:58:47.538 11453 TRACE glance sleep(seconds)
2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 287, in kill_children
2015-06-18 08:58:47.538 11453 TRACE glance os.killpg(self.pgid, signal.SIGTERM)
2015-06-18 08:58:47.538 11453 TRACE glance OSError: [Errno 38] Function not implemented
2015-06-18 08:58:47.538 11453 TRACE glance 

3. I am using the Icehouse 2014.1.3 version, but reviewing the code in the
Kilo version shows this issue is also present there.
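The traceback ends in os.killpg raising OSError 38 (ENOSYS, "Function not implemented"). One defensive direction, shown as a sketch (a hypothetical helper, not glance's actual code), is to guard the killpg call and report the failure so the caller can fall back to signalling children individually:

```python
import errno
import os
import signal

def safe_killpg(pgid, killpg=os.killpg):
    """Send SIGTERM to a process group; tolerate environments where
    killpg raises OSError 38 (ENOSYS). The killpg parameter exists only
    to make the sketch testable."""
    try:
        killpg(pgid, signal.SIGTERM)
    except OSError as e:
        if e.errno == errno.ENOSYS:
            return False  # caller could fall back to per-child os.kill()
        raise
    return True

def _broken_killpg(pgid, sig):
    # Simulates the failure from the bug report
    raise OSError(errno.ENOSYS, "Function not implemented")

handled = safe_killpg(12345, killpg=_broken_killpg)  # False, no crash
```

Whether the root cause is the eventlet-green os module or the platform, the service should not die on shutdown either way.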

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1470087

Title:
  stop glance-api service will raise exception

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  1. On a Red Hat system, running systemctl stop openstack-glance-api.service will stop the Glance API service.
  2. After step 1, the Glance API log shows the following:
  2015-06-18 08:58:47.538 11453 CRITICAL glance [-] OSError: [Errno 38] Function not implemented
  2015-06-18 08:58:47.538 11453 TRACE glance Traceback (most recent call last):
  2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/bin/glance-api", line 10, in <module>
  2015-06-18 08:58:47.538 11453 TRACE glance sys.exit(main())
  2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/lib/python2.7/site-packages/glance/cmd/api.py", line 90, in main
  2015-06-18 08:58:47.538 11453 TRACE glance server.wait()
  2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 406, in wait
  2015-06-18 08:58:47.538 11453 TRACE glance self.wait_on_children()
  2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 345, in wait_on_children
  2015-06-18 08:58:47.538 11453 TRACE glance pid, status = os.wait()
  2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/lib/python2.7/site-packages/eventlet/green/os.py", line 78, in wait
  2015-06-18 08:58:47.538 11453 TRACE glance return waitpid(0, 0)
  2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/lib/python2.7/site-packages/eventlet/green/os.py", line 96, in waitpid
  2015-06-18 08:58:47.538 11453 TRACE glance greenthread.sleep(0.01)
  2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 34, in sleep
  2015-06-18 08:58:47.538 11453 TRACE glance hub.switch()
  2015-06-18 08:58:47.538 11453 TRACE glance   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
  2015-06-18 08:58:47.538 11453 TRACE glance return self.greenlet.switch()
  2015-06-18 08:58:47.538 11453 TRACE glance  

[Yahoo-eng-team] [Bug 1353962] Re: Test job fails with FixedIpLimitExceeded with nova network

2015-06-30 Thread Yaroslav Lobankov
It looks like this bug has nothing to do with Tempest. So moving the
status of the bug to Invalid.

** Changed in: tempest
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1353962

Title:
  Test job fails with FixedIpLimitExceeded with nova network

Status in OpenStack Compute (Nova):
  Confirmed
Status in Tempest:
  Invalid

Bug description:
  VM creation failed due to a `shortage` of fixed IPs.

  The fixed range is a /24; tempest normally does not keep up more than
  ~8 VMs.
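A quick back-of-envelope check supports the reporter's point (the concrete range here is hypothetical; only the /24 prefix length comes from the report):

```python
import ipaddress

net = ipaddress.ip_network("10.0.0.0/24")  # assumed fixed range
total = net.num_addresses                  # 256 addresses in a /24
usable = total - 2                         # minus network and broadcast
# nova-network reserves a few more (gateway, DHCP), but with ~8 VMs active
# the pool is nowhere near exhausted, so FixedIpLimitExceeded points at a
# quota or leaked allocations rather than a real address shortage.
```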

  message: FixedIpLimitExceeded AND filename:logs/screen-n-net.txt

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkZpeGVkSXBMaW1pdEV4Y2VlZGVkXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tbi1uZXQudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDc0MTA0MzE3MTgsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

  http://logs.openstack.org/23/112523/1/check/check-tempest-dsvm-
  postgres-
  full/acac6d9/logs/screen-n-cpu.txt.gz#_2014-08-07_09_42_18_481

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1353962/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449544] Re: Neutron-LB Health monitor association mismatch in horizon and CLI

2015-06-30 Thread senthilmageswaran
The bug is fixed in the kilo version.

** Changed in: horizon
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1449544

Title:
  Neutron-LB Health monitor association mismatch in horizon and CLI

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When a new pool is created, all the available health monitors are shown
  in the LB-Pool information in the Horizon dashboard.

  But in the CLI,

  neutron lb-pool-show <pool-id> shows no monitors associated with the newly created pool.
  Please refer to LB_HM_default_assoc_UI and LB_HM_default_assoc_CLI.

  Using the CLI, associate any health monitor with the pool, and the correct information will be displayed in both the Horizon dashboard and the CLI.
  So it is only right after creating a new pool that the Horizon dashboard lists all the health monitors, which is wrong and needs to be corrected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1449544/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470108] [NEW] Gate upgrade test juno-kilo fails on oslo.serialization dependency mismatch for keystonemiddleware

2015-06-30 Thread Luigi Toscano
Public bug reported:

The juno-kilo upgrade test fails during the installation of
keystonemiddleware with a version conflict exception:

2015-06-29 17:18:03.056 | + /usr/local/bin/keystone-manage db_sync
2015-06-29 17:18:03.285 | Traceback (most recent call last):
2015-06-29 17:18:03.285 |   File "/usr/local/bin/keystone-manage", line 4, in <module>
2015-06-29 17:18:03.285 |     __import__('pkg_resources').require('keystone==2015.1.1.dev13')
2015-06-29 17:18:03.285 |   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3084, in <module>
2015-06-29 17:18:03.286 |     @_call_aside
2015-06-29 17:18:03.286 |   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3070, in _call_aside
2015-06-29 17:18:03.286 |     f(*args, **kwargs)
2015-06-29 17:18:03.286 |   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3097, in _initialize_master_working_set
2015-06-29 17:18:03.287 |     working_set = WorkingSet._build_master()
2015-06-29 17:18:03.287 |   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 653, in _build_master
2015-06-29 17:18:03.287 |     return cls._build_from_requirements(__requires__)
2015-06-29 17:18:03.287 |   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 666, in _build_from_requirements
2015-06-29 17:18:03.287 |     dists = ws.resolve(reqs, Environment())
2015-06-29 17:18:03.287 |   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 844, in resolve
2015-06-29 17:18:03.287 |     raise VersionConflict(dist, req).with_context(dependent_req)
2015-06-29 17:18:03.287 | pkg_resources.ContextualVersionConflict: (oslo.serialization 1.4.0 (/usr/local/lib/python2.7/dist-packages), Requirement.parse('oslo.serialization<=1.2.0,>=1.0.0'), set(['python-keystoneclient']))
2015-06-29 17:18:03.297 | + die 61 'DB sync error'
2015-06-29 17:18:03.297 | + local exitcode=1

See for example: https://review.openstack.org/#/c/195657/
http://logs.openstack.org/57/195657/1/gate/gate-grenade-dsvm/0443321/logs/grenade.sh.txt.gz

Maybe it's a version issue in some keystone module, or maybe grenade
does not upgrade the dependencies in the proper order.

This error (currently masked by another set of failures which involve
log storing on the gates) blocks the backport of patches to kilo at
least for horizon.
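The version conflict above can be illustrated in miniature without pkg_resources. The sketch below is a simplified stand-in for the resolver's per-clause version check, not the pkg_resources implementation; the `<=`/`>=` operators in the requirement string are an assumption, since HTML escaping stripped them from the log:

```python
# Minimal sketch of why the resolver rejects oslo.serialization 1.4.0:
# the installed version must satisfy every clause of the requirement.

def _vtuple(version):
    """Turn '1.4.0' into a comparable tuple (1, 4, 0)."""
    return tuple(int(part) for part in version.split("."))

def satisfies(installed, spec):
    """Check a comma-separated spec such as '>=1.0.0,<=1.2.0'."""
    ops = {
        ">=": lambda a, b: a >= b,
        "<=": lambda a, b: a <= b,
        "==": lambda a, b: a == b,
        ">":  lambda a, b: a > b,
        "<":  lambda a, b: a < b,
    }
    for clause in spec.split(","):
        for op in (">=", "<=", "==", ">", "<"):  # two-char operators first
            if clause.startswith(op):
                if not ops[op](_vtuple(installed), _vtuple(clause[len(op):])):
                    return False
                break
    return True

# 1.4.0 violates the upper bound, which mirrors the conflict in the log.
print(satisfies("1.4.0", ">=1.0.0,<=1.2.0"))  # False
print(satisfies("1.1.0", ">=1.0.0,<=1.2.0"))  # True
```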

** Affects: grenade
 Importance: Undecided
 Status: New

** Affects: horizon
 Importance: Undecided
 Status: New

** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1470108

Title:
  Gate upgrade test juno-kilo fails on oslo.serialization dependency
  mismatch for keystonemiddleware

Status in Grenade - OpenStack upgrade testing:
  New
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The juno-kilo upgrade test fails during the installation of
  keystonemiddleware with a version conflict exception:

  2015-06-29 17:18:03.056 | + /usr/local/bin/keystone-manage db_sync
  2015-06-29 17:18:03.285 | Traceback (most recent call last):
  2015-06-29 17:18:03.285 |   File "/usr/local/bin/keystone-manage", line 4, in <module>
  2015-06-29 17:18:03.285 |     __import__('pkg_resources').require('keystone==2015.1.1.dev13')
  2015-06-29 17:18:03.285 |   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3084, in <module>
  2015-06-29 17:18:03.286 |     @_call_aside
  2015-06-29 17:18:03.286 |   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3070, in _call_aside
  2015-06-29 17:18:03.286 |     f(*args, **kwargs)
  2015-06-29 17:18:03.286 |   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3097, in _initialize_master_working_set
  2015-06-29 17:18:03.287 |     working_set = WorkingSet._build_master()
  2015-06-29 17:18:03.287 |   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 653, in _build_master
  2015-06-29 17:18:03.287 |     return cls._build_from_requirements(__requires__)
  2015-06-29 17:18:03.287 |   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 666, in _build_from_requirements
  2015-06-29 17:18:03.287 |     dists = ws.resolve(reqs, Environment())
  2015-06-29 17:18:03.287 |   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 844, in resolve
  2015-06-29 17:18:03.287 |     raise VersionConflict(dist, req).with_context(dependent_req)
  2015-06-29 17:18:03.287 | pkg_resources.ContextualVersionConflict: (oslo.serialization 1.4.0 (/usr/local/lib/python2.7/dist-packages), Requirement.parse('oslo.serialization<=1.2.0,>=1.0.0'), set(['python-keystoneclient']))
  2015-06-29 17:18:03.297 | + die 61 'DB sync error'
  2015-06-29 17:18:03.297 | + local exitcode=1

  See for 

[Yahoo-eng-team] [Bug 1363558] Re: check the value of the configuration item retries

2015-06-30 Thread Liusheng
** Changed in: ceilometer
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363558

Title:
  check the value of the configuration item retries

Status in OpenStack Telemetry (Ceilometer):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  We need to check the value of the configuration item
  block_device_retries in the code in order to ensure that
  block_device_retries is equal to or greater than 1, as is already done
  for the configuration item network_allocate_retries.

  =
  In Ceilometer there are similar issues; there is no check on the value of
retries in:
  ceilometer.storage.mongo.utils.ConnectionPool#_mongo_connect
  and:
  ceilometer.ipmi.platform.intel_node_manager.NodeManager#init_node_manager

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1363558/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470051] [NEW] Make security-group-rule-create accept any protocol

2015-06-30 Thread Masaki Matsushita
Public bug reported:

Related Bug: https://bugs.launchpad.net/python-
neutronclient/+bug/1469642

When an invalid protocol is specified in security-group-rule-create, the
neutron server replies that only the protocol values [None, 'tcp', 'udp',
'icmp', 'icmpv6'] are supported, as below:

% neutron security-group-rule-create --protocol foo bar
Security group rule protocol foo not supported. Only protocol values [None, 
'tcp', 'udp', 'icmp', 'icmpv6'] and integer representations [0 to 255] are 
supported.

I think using None to specify any protocol is confusing and inconsistent
with the FWaaS CLI options (which accept any protocol).
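A minimal sketch of the behavior the report asks for: accept an explicit 'any' (alongside None) as meaning any protocol. The function name and logic here are illustrative, not Neutron's actual validator:

```python
# Protocol values the server currently advertises as supported.
SUPPORTED = {None, "tcp", "udp", "icmp", "icmpv6"}

def normalize_protocol(value):
    """Normalize a --protocol argument; 'any' is the proposed addition."""
    if value in (None, "any"):  # proposed: treat an explicit 'any' like None
        return None
    if isinstance(value, str) and value.isdigit() and 0 <= int(value) <= 255:
        return int(value)       # integer representations 0..255
    if value in SUPPORTED:
        return value
    raise ValueError("protocol %r not supported" % (value,))

print(normalize_protocol("any"))  # None
print(normalize_protocol("6"))    # 6
print(normalize_protocol("tcp"))  # tcp
```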

** Affects: neutron
 Importance: Undecided
 Assignee: Masaki Matsushita (mmasaki)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470051

Title:
  Make security-group-rule-create accept any protocol

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Related Bug: https://bugs.launchpad.net/python-
  neutronclient/+bug/1469642

  When an invalid protocol is specified in security-group-rule-create, the
  neutron server replies that only the protocol values [None, 'tcp', 'udp',
  'icmp', 'icmpv6'] are supported, as below:

  % neutron security-group-rule-create --protocol foo bar
  Security group rule protocol foo not supported. Only protocol values [None, 
'tcp', 'udp', 'icmp', 'icmpv6'] and integer representations [0 to 255] are 
supported.

  I think using None to specify any protocol is confusing and inconsistent
  with the FWaaS CLI options (which accept any protocol).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1470051/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1343613] Re: Deadlock found when trying to get lock; try restarting transaction

2015-06-30 Thread Yaroslav Lobankov
It looks like this issue is not related to Tempest. As Matthew said,
tempest runs just triggered it. So moving the bug status to Invalid.

** Changed in: tempest
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1343613

Title:
  Deadlock found when trying to get lock; try restarting transaction

Status in OpenStack Compute (Nova):
  Fix Released
Status in Tempest:
  Invalid

Bug description:
  Example URL:
  
http://logs.openstack.org/31/107131/1/gate/gate-grenade-dsvm/d019d8e/logs/old/screen-n-api.txt.gz?level=ERROR#_2014-07-17_20_59_37_031

  Logstash query(?):
  message:Deadlock found when trying to get lock; try restarting transaction 
AND loglevel:ERROR AND build_status:FAILURE

  32 hits in 48 hours.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1343613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470341] [NEW] Cannot remove host from aggregate if host has been deleted

2015-06-30 Thread Qin Zhao
Public bug reported:

Kilo code

Reproduce steps:

1. Assuming that we have one nova-compute node named 'icm' which is
added into one aggregate named 'zhaoqin'

[root@icm ~]# nova aggregate-details zhaoqin
+----+---------+-------------------+-------+--------------------------------+
| Id | Name    | Availability Zone | Hosts | Metadata                       |
+----+---------+-------------------+-------+--------------------------------+
| 1  | zhaoqin | zhaoqin-az        | 'icm' | 'availability_zone=zhaoqin-az' |
+----+---------+-------------------+-------+--------------------------------+
[root@icm ~]# nova service-list
+----+------------------+------+------------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host | Zone       | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------+------------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | icm  | internal   | enabled | up    | 2015-06-30T14:04:25.828383 | -               |
| 3  | nova-scheduler   | icm  | internal   | enabled | up    | 2015-06-30T14:04:24.525474 | -               |
| 4  | nova-consoleauth | icm  | internal   | enabled | up    | 2015-06-30T14:04:24.640657 | -               |
| 5  | nova-compute     | icm  | zhaoqin-az | enabled | up    | 2015-06-30T14:04:19.865857 | -               |
| 6  | nova-cert        | icm  | internal   | enabled | up    | 2015-06-30T14:04:25.080046 | -               |
+----+------------------+------+------------+---------+-------+----------------------------+-----------------+


2. Remove the nova-compute service using the service-delete command. However,
the host is still in the aggregate.

[root@icm ~]# nova service-delete 5
[root@icm ~]# nova service-list
+----+------------------+------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | icm  | internal | enabled | up    | 2015-06-30T14:05:35.826699 | -               |
| 3  | nova-scheduler   | icm  | internal | enabled | up    | 2015-06-30T14:05:34.524507 | -               |
| 4  | nova-consoleauth | icm  | internal | enabled | up    | 2015-06-30T14:05:34.638234 | -               |
| 6  | nova-cert        | icm  | internal | enabled | up    | 2015-06-30T14:05:35.092009 | -               |
+----+------------------+------+----------+---------+-------+----------------------------+-----------------+
[root@icm ~]# nova aggregate-details zhaoqin
+----+---------+-------------------+-------+--------------------------------+
| Id | Name    | Availability Zone | Hosts | Metadata                       |
+----+---------+-------------------+-------+--------------------------------+
| 1  | zhaoqin | zhaoqin-az        | 'icm' | 'availability_zone=zhaoqin-az' |
+----+---------+-------------------+-------+--------------------------------+


3. Then attempt to remove the host from the aggregate, which fails. And we
cannot remove this aggregate either, because it is not empty.

[root@icm ~]# nova aggregate-remove-host zhaoqin icm
ERROR (NotFound): Cannot remove host icm in aggregate 1: not found (HTTP 404) (Request-ID: req-b5024dbf-156a-44ee-b48e-fc53a331e05d)
[root@icm ~]# nova aggregate-delete zhaoqin
ERROR (BadRequest): Cannot remove host from aggregate 1. Reason: Host aggregate is not empty. (HTTP 400) (Request-ID: req-a3c5346c-9a96-49f4-a76d-a7baa768a0ef)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470341

Title:
  Cannot remove host from aggregate if host has been deleted

Status in OpenStack Compute (Nova):
  New

Bug description:
  Kilo code

  Reproduce steps:

  1. Assuming that we have one nova-compute node named 'icm' which is
  added into one aggregate named 'zhaoqin'

  [root@icm ~]# nova aggregate-details zhaoqin
  +----+---------+-------------------+-------+--------------------------------+
  | Id | Name    | Availability Zone | Hosts | Metadata                       |
  +----+---------+-------------------+-------+--------------------------------+
  | 1  | zhaoqin | zhaoqin-az        | 'icm' | 'availability_zone=zhaoqin-az' |
  +----+---------+-------------------+-------+--------------------------------+
  [root@icm ~]# nova service-list
  +----+------------------+------+------------+---------+-------+----------------------------+-----------------+
  | Id | Binary           | Host | Zone       | Status  | State | Updated_at                 | Disabled Reason |
  +----+------------------+------+------------+---------+-------+----------------------------+-----------------+
  | 1  

[Yahoo-eng-team] [Bug 1470142] [NEW] LuksEncryptor attach volume fails for NFS

2015-06-30 Thread Tom Barron
Public bug reported:

Tempest scenario TestEncryptedCinderVolumes has been silently skipped when run
with NFS cinder drivers that did not set the 'encrypted' key in the
connection_info['data'] dict in their initialize_connection methods.  Change
https://review.openstack.org/#/c/193673/ - which sets the encrypted flag
generically, in the VolumeManager's initialize_connection, on the basis of the
volume.encryption_key_id value - causes this test to actually run its
encryption providers and exposes a problem in LuksEncryptor.attach_volume()
for NFS-exported volumes.

At
https://github.com/openstack/nova/blob/master/nova/volume/encryptors/luks.py#L119
we have:

# modify the original symbolic link to refer to the decrypted device
utils.execute('ln', '--symbolic', '--force',
  '/dev/mapper/%s' % self.dev_name, self.symlink_path,
  run_as_root=True, check_exit_code=True)

but in TestEncryptedCinderVolumes we get the following exception:

2015-06-29 06:44:06.353 DEBUG oslo_concurrency.processutils 
[req-35a458fe-8bfc-4570-ac8e-388e8b74d4ea TestEncryptedCinderVolumes-1523565967 
TestEncryptedCinderVolumes-1577400956] u'sudo nova-rootwrap 
/etc/nova/rootwrap.conf ln --symbolic --force 
/dev/mapper/volume-f5684ecc-959f-4de8-8d62-a8adf4bdb4cc 
/opt/stack/data/nova/mnt/21dd48babac42ae884d1192b8697a041/volume-f5684ecc-959f-4de8-8d62-a8adf4bdb4cc'
 failed. Not Retrying. execute 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:293
2015-06-29 06:44:06.353 ERROR nova.virt.libvirt.driver 
[req-35a458fe-8bfc-4570-ac8e-388e8b74d4ea TestEncryptedCinderVolumes-1523565967 
TestEncryptedCinderVolumes-1577400956] [instance: 
b285fed7-6d65-4b57-9ab0-8c17ce0cf6a8] Failed to attach volume at mountpoint: 
/dev/vdb
2015-06-29 06:44:06.353 13140 ERROR nova.virt.libvirt.driver [instance: 
b285fed7-6d65-4b57-9ab0-8c17ce0cf6a8] Traceback (most recent call last):
2015-06-29 06:44:06.353 13140 ERROR nova.virt.libvirt.driver [instance: 
b285fed7-6d65-4b57-9ab0-8c17ce0cf6a8]   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 1082, in attach_volume
2015-06-29 06:44:06.353 13140 ERROR nova.virt.libvirt.driver [instance: 
b285fed7-6d65-4b57-9ab0-8c17ce0cf6a8] encryptor.attach_volume(context, 
**encryption)
2015-06-29 06:44:06.353 13140 ERROR nova.virt.libvirt.driver [instance: 
b285fed7-6d65-4b57-9ab0-8c17ce0cf6a8]   File 
/opt/stack/new/nova/nova/volume/encryptors/luks.py, line 121, in attach_volume
2015-06-29 06:44:06.353 13140 ERROR nova.virt.libvirt.driver [instance: 
b285fed7-6d65-4b57-9ab0-8c17ce0cf6a8] run_as_root=True, 
check_exit_code=True)
2015-06-29 06:44:06.353 13140 ERROR nova.virt.libvirt.driver [instance: 
b285fed7-6d65-4b57-9ab0-8c17ce0cf6a8]   File 
/opt/stack/new/nova/nova/utils.py, line 229, in execute
2015-06-29 06:44:06.353 13140 ERROR nova.virt.libvirt.driver [instance: 
b285fed7-6d65-4b57-9ab0-8c17ce0cf6a8] return processutils.execute(*cmd, 
**kwargs)
2015-06-29 06:44:06.353 13140 ERROR nova.virt.libvirt.driver [instance: 
b285fed7-6d65-4b57-9ab0-8c17ce0cf6a8]   File 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py, line 
260, in execute
2015-06-29 06:44:06.353 13140 ERROR nova.virt.libvirt.driver [instance: 
b285fed7-6d65-4b57-9ab0-8c17ce0cf6a8] cmd=sanitized_cmd)
2015-06-29 06:44:06.353 13140 ERROR nova.virt.libvirt.driver [instance: 
b285fed7-6d65-4b57-9ab0-8c17ce0cf6a8] ProcessExecutionError: Unexpected error 
while running command.
2015-06-29 06:44:06.353 13140 ERROR nova.virt.libvirt.driver [instance: 
b285fed7-6d65-4b57-9ab0-8c17ce0cf6a8] Command: sudo nova-rootwrap 
/etc/nova/rootwrap.conf ln --symbolic --force 
/dev/mapper/volume-f5684ecc-959f-4de8-8d62-a8adf4bdb4cc 
/opt/stack/data/nova/mnt/21dd48babac42ae884d1192b8697a041/volume-f5684ecc-959f-4de8-8d62-a8adf4bdb4cc
2015-06-29 06:44:06.353 13140 ERROR nova.virt.libvirt.driver [instance: 
b285fed7-6d65-4b57-9ab0-8c17ce0cf6a8] Exit code: 99
2015-06-29 06:44:06.353 13140 ERROR nova.virt.libvirt.driver [instance: 
b285fed7-6d65-4b57-9ab0-8c17ce0cf6a8] Stdout: u''
2015-06-29 06:44:06.353 13140 ERROR nova.virt.libvirt.driver [instance: 
b285fed7-6d65-4b57-9ab0-8c17ce0cf6a8] Stderr: u'/usr/local/bin/nova-rootwrap: 
Unauthorized command: ln --symbolic --force 
/dev/mapper/volume-f5684ecc-959f-4de8-8d62-a8adf4bdb4cc 
/opt/stack/data/nova/mnt/21dd48babac42ae884d1192b8697a041/volume-f5684ecc-959f-4de8-8d62-a8adf4bdb4cc
 (no filter matched)\n'

The cause is evidently the rootwrap filter at
https://github.com/openstack/nova/blob/master/etc/nova/rootwrap.d/compute.filters#L215,
 namely:

ln: RegExpFilter, ln, root, ln, --symbolic, --force, /dev/mapper/ip-.*-iscsi-iqn.*, /dev/disk/by-path/ip-.*-iscsi-iqn.*

which only allows for iscsi paths.
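The "no filter matched" failure follows from how a RegExpFilter works: every argument of the command must match its corresponding pattern. The sketch below is a simplified model (it ignores the real filter's executable-path and run-as-user fields, and the NFS mount path is shortened from the log) showing the iSCSI-only patterns rejecting the NFS path:

```python
import re

# Simplified model of rootwrap's RegExpFilter: a command is allowed only
# if every argument fully matches the corresponding pattern.
ISCSI_ONLY_FILTER = [
    "ln", "--symbolic", "--force",
    r"/dev/mapper/ip-.*-iscsi-iqn.*",
    r"/dev/disk/by-path/ip-.*-iscsi-iqn.*",
]

def filter_matches(cmd):
    """Return True if cmd would be accepted by the iSCSI-only filter."""
    if len(cmd) != len(ISCSI_ONLY_FILTER):
        return False
    return all(re.match(pat + r"$", arg)
               for pat, arg in zip(ISCSI_ONLY_FILTER, cmd))

iscsi_cmd = ["ln", "--symbolic", "--force",
             "/dev/mapper/ip-1.2.3.4-iscsi-iqn.2010-10.org.openstack:vol",
             "/dev/disk/by-path/ip-1.2.3.4-iscsi-iqn.2010-10.org.openstack:vol"]

nfs_cmd = ["ln", "--symbolic", "--force",
           "/dev/mapper/volume-f5684ecc-959f-4de8-8d62-a8adf4bdb4cc",
           "/opt/stack/data/nova/mnt/21dd48ba/volume-f5684ecc-959f-4de8-8d62-a8adf4bdb4cc"]

print(filter_matches(iscsi_cmd))  # True
print(filter_matches(nfs_cmd))    # False -> rootwrap reports "no filter matched"
```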

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: rootwrap

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to 

[Yahoo-eng-team] [Bug 1470153] [NEW] Nova object relationships ignore List objects

2015-06-30 Thread Dan Smith
Public bug reported:

In nova/tests/objects/test_objects.py, we have an important test called
test_relationships(). This ensures that we have version mappings between
objects that depend on each other, and that those versions and
relationships are bumped when one object changes versions.

That test currently excludes any objects that are based on the List
mixin, which obscures dependencies that do things like
Foo->BarList->Bar.

The test needs to be modified to not exclude List-based objects, and the
relationship map needs to be updated for the List objects that are
currently excluded.

** Affects: nova
 Importance: Low
 Assignee: Ryan Rossiter (rlrossit)
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470153

Title:
  Nova object relationships ignore List objects

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  In nova/tests/objects/test_objects.py, we have an important test
  called test_relationships(). This ensures that we have version
  mappings between objects that depend on each other, and that those
  versions and relationships are bumped when one object changes
  versions.

  That test currently excludes any objects that are based on the List
  mixin, which obscures dependencies that do things like
  Foo->BarList->Bar.

  The test needs to be modified to not exclude List-based objects, and
  the relationship map needs to be updated for the List objects that are
  currently excluded.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470153/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470094] Re: live migration on ML2 Mechanism Event

2015-06-30 Thread Assaf Muller
Launchpad bugs are not the place for developer questions. Please hop on
#openstack-neutron on Freenode.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470094

Title:
  live migration on ML2 Mechanism Event

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Hi,
  ML2 driver: a mechanism driver is called on the creation, update, and deletion of networks and ports. For every event, there are two methods that get called - one within the database transaction (method suffix of _precommit), one right afterwards (method suffix of _postcommit).

  When live-migrating TestVM4 on OpenStack, I can't see any create_port event (create_port_precommit, create_port_postcommit).
  I only see update_port_precommit and update_port_postcommit.

  My question is: how should an ML2 mechanism driver be implemented to handle live migration?

  2015-06-30 18:25:40.627 2807 DEBUG neutron.plugins.ml2.rpc 
[req-56a57784-0451-4f7d-a0d6-c866e4934bfc None] Device 
59571389-024f-4475-b7f5-1156dadd7e4e details requested by agent 
ovs-agent-TestVM4 with host TestVM4 get_device_details 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/rpc.py:60
  2015-06-30 18:25:40.644 2807 DEBUG neutron.plugins.ml2.rpc 
[req-56a57784-0451-4f7d-a0d6-c866e4934bfc None] Returning: {'profile': {}, 
'admin_state_up': True, 'network_id': u'61eefda2-97df-4e5c-a844-a6aad001e5d9', 
'segmentation_id': 1501L, 'device_owner': u'compute:nova', 'physical_network': 
u'physnet1', 'mac_address': u'fa:16:3e:f6:d1:a4', 'device': 
u'59571389-024f-4475-b7f5-1156dadd7e4e', 'port_id': 
u'59571389-024f-4475-b7f5-1156dadd7e4e', 'fixed_ips': [{'subnet_id': 
u'db6690d1-dbe9-479c-b0dd-14c1ea5d2d43', 'ip_address': u'10.1.1.101'}], 
'network_type': u'vlan'} get_device_details 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/rpc.py:105
  2015-06-30 18:25:40.797 2807 DEBUG neutron.context 
[req-56a57784-0451-4f7d-a0d6-c866e4934bfc None] Arguments dropped when creating 
context: {u'project_name': None, u'tenant': None} __init__ 
/usr/lib/python2.7/dist-packages/neutron/context.py:83
  2015-06-30 18:25:40.797 2807 DEBUG neutron.plugins.ml2.rpc 
[req-56a57784-0451-4f7d-a0d6-c866e4934bfc None] Device 
59571389-024f-4475-b7f5-1156dadd7e4e up at agent ovs-agent-TestVM4 
update_device_up /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/rpc.py:155
  2015-06-30 18:25:40.815 2807 DEBUG neutron.openstack.common.lockutils 
[req-56a57784-0451-4f7d-a0d6-c866e4934bfc None] Got semaphore db-access lock 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/lockutils.py:168
  2015-06-30 18:25:40.834 2807 INFO 
neutron.plugins.ml2.drivers.xxx.mechanism_xxx 
[req-56a57784-0451-4f7d-a0d6-c866e4934bfc None] xxx ML2 driver: 
update_port_precommit
  2015-06-30 18:25:40.838 2807 INFO 
neutron.plugins.ml2.drivers.xxx.mechanism_xxx 
[req-56a57784-0451-4f7d-a0d6-c866e4934bfc None] xxx ML2 driver: 
update_port_postcommit
  2015-06-30 18:25:40.967 2807 DEBUG neutron.context 
[req-83e77512-6a29-47c7-825b-4cca5b94d823 None] Arguments dropped when creating 
context: {u'project_name': None, u'tenant': None} __init__ 
/usr/lib/python2.7/dist-packages/neutron/context.py:83

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1470094/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338844] Re: FixedIpLimitExceeded: Maximum number of fixed ips exceeded in tempest nova-network runs since 7/4

2015-06-30 Thread Yaroslav Lobankov
Brad, thank you for the response! It looks like this bug hasn't been
seen by anyone for a long time, so I am moving the bug status back to
"Fix Released". Feel free to reopen the bug and set the status back to
"Confirmed" if you encounter the issue again.

** Changed in: tempest
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338844

Title:
  FixedIpLimitExceeded: Maximum number of fixed ips exceeded in
  tempest nova-network runs since 7/4

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Fix Released

Bug description:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQnVpbGRBYm9ydEV4Y2VwdGlvbjogQnVpbGQgb2YgaW5zdGFuY2VcIiBBTkQgbWVzc2FnZTpcImFib3J0ZWQ6IEZhaWxlZCB0byBhbGxvY2F0ZSB0aGUgbmV0d29yayhzKSB3aXRoIGVycm9yIE1heGltdW0gbnVtYmVyIG9mIGZpeGVkIGlwcyBleGNlZWRlZCwgbm90IHJlc2NoZWR1bGluZy5cIiBBTkQgdGFnczpcInNjcmVlbi1uLWNwdS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNDc3OTE1MzY1MiwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  Saw it here:

  http://logs.openstack.org/63/98563/5/check/check-tempest-dsvm-
  postgres-full/1472e7b/logs/screen-n-cpu.txt.gz?level=TRACE

  Looks like it's only in jobs using nova-network.

  Started on 7/4, 70 failures in 7 days, check and gate, multiple
  changes.

  Maybe related to https://review.openstack.org/104581.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1338844/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470154] [NEW] List objects should use obj_relationships

2015-06-30 Thread Dan Smith
Public bug reported:

Nova's List-based objects have something called child_versions, which is
a naive mapping of the objects field and the version relationships
between the list object and the content object. This was created before
we generalized the work in obj_relationships, which normal objects now
use. The list-based objects still use child_versions, which means we
need a separate test and separate developer behaviors when updating
these.

For consistency, we should replace child_versions on all the list
objects with obj_relationships, remove the list-specific test in
test_objects.py, and make sure that the generalized tests properly cover
list objects and relationships between list and non-list objects.
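For illustration, the two mapping styles can be sketched as follows. The class, field names, and version numbers here are invented stand-ins, not Nova's real objects; the resolver is a naive model of picking the newest content-object version compatible with a given list-object version:

```python
class FooList:
    # Old style: child_versions maps list-object version -> content-object
    # version directly.
    child_versions = {"1.0": "1.1", "1.1": "1.2"}

    # Generalized style: obj_relationships is keyed by field name, with
    # ordered (list_version, field_version) pairs -- the same shape that
    # non-list objects already use.
    obj_relationships = {"objects": [("1.0", "1.1"), ("1.1", "1.2")]}

def content_version_for(cls, list_version):
    """Resolve the content object's version for a given list version."""
    match = None
    for lv, fv in cls.obj_relationships["objects"]:
        # Naive string comparison; adequate for single-digit "x.y" versions.
        if lv <= list_version:
            match = fv
    return match

print(content_version_for(FooList, "1.0"))  # 1.1
print(content_version_for(FooList, "1.1"))  # 1.2
```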

** Affects: nova
 Importance: Low
 Assignee: Ryan Rossiter (rlrossit)
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470154

Title:
  List objects should use obj_relationships

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  Nova's List-based objects have something called child_versions, which
  is a naive mapping of the objects field and the version relationships
  between the list object and the content object. This was created
  before we generalized the work in obj_relationships, which normal
  objects now use. The list-based objects still use child_versions,
  which means we need a separate test and separate developer behaviors
  when updating these.

  For consistency, we should replace child_versions on all the list
  objects with obj_relationships, remove the list-specific test in
  test_objects.py, and make sure that the generalized tests properly
  cover list objects and relationships between list and non-list
  objects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470154/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470164] [NEW] Glance v1 not working with allow_anonymous_access = True

2015-06-30 Thread Alexey Galkin
Public bug reported:

Steps to reproduce:

1. Stop glance-api service.
2. In glance-api.conf  set allow_anonymous_access = True
3. Start glance-api service.
4. Try to get the image list using v1 without a keystone X-Auth-Token header:

GET /v1/images HTTP/1.1
Host: 172.18.85.25:2081
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:38.0) Gecko/20100101 
Firefox/38.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Connection: keep-alive

Actual result:

HTTP/1.1 500 Internal Server Error
Content-Type: text/plain
Content-Length: 0
Date: Tue, 30 Jun 2015 14:54:42 GMT
Connection: close

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: glance

** Attachment added: "Glance-api log with traceback"
   
https://bugs.launchpad.net/bugs/1470164/+attachment/4422528/+files/glance-api.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1470164

Title:
  Glance v1 not working with allow_anonymous_access = True

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Steps to reproduce:

  1. Stop glance-api service.
  2. In glance-api.conf  set allow_anonymous_access = True
  3. Start glance-api service.
  4. Try to get the image list using v1 without a keystone X-Auth-Token header:

  GET /v1/images HTTP/1.1
  Host: 172.18.85.25:2081
  User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:38.0) Gecko/20100101 
Firefox/38.0
  Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
  Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3
  Accept-Encoding: gzip, deflate
  Connection: keep-alive

  Actual result:

  HTTP/1.1 500 Internal Server Error
  Content-Type: text/plain
  Content-Length: 0
  Date: Tue, 30 Jun 2015 14:54:42 GMT
  Connection: close

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1470164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470179] [NEW] Instance metadata should include project_id

2015-06-30 Thread Andrew Bogott
Public bug reported:

As per

https://www.mail-archive.com/search?l=openst...@lists.openstack.org&q=subject:%22Re\%3A+\[Openstack\]+How+should+an+instance+learn+what+tenant+it+is+in\%3F%22&o=newest

It's weirdly hard for an instance to learn what project it's in.  Let's
just add project_id to instance metadata.
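For illustration, if project_id were added, a guest could read it straight out of the OpenStack metadata JSON. The document below is a hypothetical sample (the field name and shape are assumptions, not current Nova output); in a real guest the document would be fetched from http://169.254.169.254/openstack/latest/meta_data.json:

```python
import json

# Hypothetical meta_data.json as it might look with project_id included.
sample = json.loads("""
{
  "uuid": "b285fed7-6d65-4b57-9ab0-8c17ce0cf6a8",
  "name": "test-vm",
  "project_id": "8c0ecf9f4da3462eb3c97c3b93f815b5"
}
""")

# The instance would then learn its project with a single lookup.
print(sample.get("project_id"))  # 8c0ecf9f4da3462eb3c97c3b93f815b5
```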

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470179

Title:
  Instance metadata should include project_id

Status in OpenStack Compute (Nova):
  New

Bug description:
  As per

  https://www.mail-archive.com/search?l=openst...@lists.openstack.org&q=subject:%22Re\%3A+\[Openstack\]+How+should+an+instance+learn+what+tenant+it+is+in\%3F%22&o=newest

  It's weirdly hard for an instance to learn what project it's in.
  Let's just add project_id to instance metadata.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470179/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470192] [NEW] '501 Not Implemented' response when sending an HTTP request with a custom method

2015-06-30 Thread Alexey Galkin
Public bug reported:

Steps to reproduce:

1. Send request something like this:

ANY-CUSTOM-METHOD /v2/images HTTP/1.1
Host: localhost:2081
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
X-auth-token: fc6fa22976da45b2af76622935825625
Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Connection: keep-alive

Expected result:

HTTP/1.1 400 Bad Request

Actual result:

HTTP/1.1 501 Not Implemented
Content-Length: 216
Content-Type: text/html; charset=UTF-8
X-Openstack-Request-Id: req-req-0deadf35-4b66-471b-bb91-ce88344f898c
Date: Tue, 30 Jun 2015 17:47:52 GMT
Connection: keep-alive

<html>
 <head>
  <title>501 Not Implemented</title>
 </head>
 <body>
  <h1>501 Not Implemented</h1>
  The server has either erred or is incapable of performing the requested 
operation.<br /><br />


 </body>
</html>
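For comparison, Python's own BaseHTTPRequestHandler behaves the same way: an unrecognized method yields 501, not 400. A self-contained demo against a local stand-in server (not Glance itself):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """Local stand-in server: only GET is implemented."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Send a request with a method the handler does not implement.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("ANY-CUSTOM-METHOD", "/v2/images")
status = conn.getresponse().status
conn.close()
server.shutdown()
print(status)  # 501
```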

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1470192

Title:
  '501 Not Implemented' response when sending an HTTP request with a
  custom method

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Steps to reproduce:

  1. Send request something like this:

  ANY-CUSTOM-METHOD /v2/images HTTP/1.1
  Host: localhost:2081
  Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
  X-auth-token: fc6fa22976da45b2af76622935825625
  Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3
  Accept-Encoding: gzip, deflate
  Connection: keep-alive

  Expected result:

  HTTP/1.1 400 Bad Request

  Actual result:

  HTTP/1.1 501 Not Implemented
  Content-Length: 216
  Content-Type: text/html; charset=UTF-8
  X-Openstack-Request-Id: req-req-0deadf35-4b66-471b-bb91-ce88344f898c
  Date: Tue, 30 Jun 2015 17:47:52 GMT
  Connection: keep-alive

  <html>
   <head>
    <title>501 Not Implemented</title>
   </head>
   <body>
    <h1>501 Not Implemented</h1>
    The server has either erred or is incapable of performing the requested 
operation.<br /><br />


   </body>
  </html>

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1470192/+subscriptions



[Yahoo-eng-team] [Bug 1470186] [NEW] Pylint 1.4.1 broken due to new logilab.common 1.0.0 release

2015-06-30 Thread Assaf Muller
Public bug reported:

Pylint 1.4.1 is using logilab-common, which had a release on the 30th,
breaking pylint. Pylint developers are planning a logilab common release
tomorrow which should unbreak pylint once again, at which point I'll re-
enable pylint.

** Affects: neutron
 Importance: Critical
 Assignee: Assaf Muller (amuller)
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470186

Title:
  Pylint 1.4.1 broken due to new logilab.common 1.0.0 release

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  Pylint 1.4.1 is using logilab-common, which had a release on the 30th,
  breaking pylint. Pylint developers are planning a logilab common
  release tomorrow which should unbreak pylint once again, at which
  point I'll re-enable pylint.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1470186/+subscriptions



[Yahoo-eng-team] [Bug 1470205] [NEW] Keystone IdP SAML metadata insufficient for websso flow

2015-06-30 Thread Miguel Grinberg
Public bug reported:

The metadata generated by Keystone IdP includes a binding of type URI.
From
https://github.com/openstack/keystone/blame/8bb63620b4d9ec71b0a60ed705938103d7d3c2c2/keystone/contrib/federation/idp.py#L490:

def single_sign_on_service():
idp_sso_endpoint = CONF.saml.idp_sso_endpoint
return md.SingleSignOnService(
binding=saml2.BINDING_URI,
location=idp_sso_endpoint)

Looking at the Shibboleth SessionInitiator code, this is not a valid
binding for a default websso configuration. The accepted bindings are
defined at https://github.com/craigpg/shibboleth-
sp2/blob/f62a7996e195a9c026f3f8cb0e9086594b7f8515/shibsp/handler/impl/SAML2SessionInitiator.cpp#L164-L165:

// No override, so we'll install a default binding precedence.
string prec = string(samlconstants::SAML20_BINDING_HTTP_REDIRECT) + 
' ' + samlconstants::SAML20_BINDING_HTTP_POST + ' ' +
samlconstants::SAML20_BINDING_HTTP_POST_SIMPLESIGN + ' ' + 
samlconstants::SAML20_BINDING_HTTP_ARTIFACT;
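The mismatch can be stated directly in terms of the standard SAML 2.0 binding URNs (the literal values below are the standard identifiers these constants correspond to; treat the mapping as an assumption of this sketch):

```python
# Standard SAML 2.0 binding URN identifiers (from the SAML 2.0 bindings spec).
BINDING_URI = "urn:oasis:names:tc:SAML:2.0:bindings:URI"

# Shibboleth's default SessionInitiator binding precedence, per the
# SAML2SessionInitiator.cpp snippet quoted above.
SHIBBOLETH_DEFAULT_PRECEDENCE = [
    "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect",
    "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST",
    "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST-SimpleSign",
    "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Artifact",
]

# The binding Keystone's metadata advertises is not one Shibboleth will
# select by default, so the websso flow never reaches the IdP endpoint.
print(BINDING_URI in SHIBBOLETH_DEFAULT_PRECEDENCE)  # False
```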

** Affects: keystone
 Importance: Wishlist
 Assignee: Marek Denis (marek-denis)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1470205

Title:
  Keystone IdP SAML metadata insufficient for websso flow

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The metadata generated by Keystone IdP includes a binding of type URI.
  From
  
https://github.com/openstack/keystone/blame/8bb63620b4d9ec71b0a60ed705938103d7d3c2c2/keystone/contrib/federation/idp.py#L490:

  def single_sign_on_service():
  idp_sso_endpoint = CONF.saml.idp_sso_endpoint
  return md.SingleSignOnService(
  binding=saml2.BINDING_URI,
  location=idp_sso_endpoint)

  Looking at the Shibboleth SessionInitiator code, this is not a valid
  binding for a default websso configuration. The accepted bindings are
  defined at https://github.com/craigpg/shibboleth-
  
sp2/blob/f62a7996e195a9c026f3f8cb0e9086594b7f8515/shibsp/handler/impl/SAML2SessionInitiator.cpp#L164-L165:

  // No override, so we'll install a default binding precedence.
  string prec = string(samlconstants::SAML20_BINDING_HTTP_REDIRECT) 
+ ' ' + samlconstants::SAML20_BINDING_HTTP_POST + ' ' +
  samlconstants::SAML20_BINDING_HTTP_POST_SIMPLESIGN + ' ' + 
samlconstants::SAML20_BINDING_HTTP_ARTIFACT;

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1470205/+subscriptions



[Yahoo-eng-team] [Bug 1470219] [NEW] nova show does not return block device mapping

2015-06-30 Thread Tyler North
Public bug reported:

Using nova/stable/icehouse

A nova show instance_id command as well as
nova.servers.get(instance_id) with the Python API does not show the
block device mapping information for the instance.

This would be useful for boot from volume instances to check attributes
such as delete_on_termination, or see which volume attached to an
instance is its root volume.
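Until nova show exposes this, the kind of lookup the reporter wants can be sketched against raw block-device-mapping records. The field names (boot_index, volume_id, delete_on_termination) follow Nova's BDM v2 format, but the helper itself is hypothetical:

```python
def find_root_volume(bdms):
    """Return (volume_id, delete_on_termination) for the root disk, or None.

    `bdms` is a list of block-device-mapping dicts; boot_index 0 marks
    the root volume in Nova's BDM v2 format.
    """
    for bdm in bdms:
        if bdm.get("boot_index") == 0:
            return bdm["volume_id"], bdm.get("delete_on_termination", False)
    return None

bdms = [
    {"boot_index": 0, "volume_id": "vol-root", "delete_on_termination": True},
    {"boot_index": 1, "volume_id": "vol-data", "delete_on_termination": False},
]
print(find_root_volume(bdms))  # ('vol-root', True)
```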

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470219

Title:
  nova show does not return block device mapping

Status in OpenStack Compute (Nova):
  New

Bug description:
  Using nova/stable/icehouse

  A nova show instance_id command as well as
  nova.servers.get(instance_id) with the Python API does not show the
  block device mapping information for the instance.

  This would be useful for boot from volume instances to check
  attributes such as delete_on_termination, or see which volume
  attached to an instance is its root volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470219/+subscriptions



[Yahoo-eng-team] [Bug 1470234] [NEW] test_arp_spoof_allowed_address_pairs_0cidr sporadically failing functional job

2015-06-30 Thread Assaf Muller
Public bug reported:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidGVzdF9hcnBfc3Bvb2ZfYWxsb3dlZF9hZGRyZXNzX3BhaXJzXzBjaWRyXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzU2OTU1NTk3MTB9

18 hits in last 7 days.

Here's a failure trace of
test_arp_spoof_allowed_address_pairs_0cidr(native):

ft1.78: 
neutron.tests.functional.agent.test_ovs_flows.ARPSpoofOFCtlTestCase.test_arp_spoof_allowed_address_pairs_0cidr(native)_StringException:
 Empty attachments:
  pythonlogging:'neutron.api.extensions'
  stderr
  stdout

pythonlogging:'': {{{
2015-06-30 19:36:25,695ERROR [neutron.agent.linux.utils] 
Command: ['ip', 'netns', 'exec', 'func-8f787d60-208d-4524-b4d4-e79b5cd19eae', 
'ping', '-c', 1, '-W', 1, '192.168.0.2']
Exit code: 1
Stdin: 
Stdout: PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.

--- 192.168.0.2 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms


Stderr:
}}}

Traceback (most recent call last):
  File neutron/tests/functional/agent/test_ovs_flows.py, line 169, in 
test_arp_spoof_allowed_address_pairs_0cidr
net_helpers.assert_ping(self.src_namespace, self.dst_addr)
  File neutron/tests/common/net_helpers.py, line 69, in assert_ping
dst_ip])
  File neutron/agent/linux/ip_lib.py, line 676, in execute
extra_ok_codes=extra_ok_codes, **kwargs)
  File neutron/agent/linux/utils.py, line 138, in execute
raise RuntimeError(m)
RuntimeError: 
Command: ['ip', 'netns', 'exec', 'func-8f787d60-208d-4524-b4d4-e79b5cd19eae', 
'ping', '-c', 1, '-W', 1, '192.168.0.2']
Exit code: 1
Stdin: 
Stdout: PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.

--- 192.168.0.2 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
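One way to reduce this kind of flakiness is to retry the single-shot ping a few times before failing. A sketch of such a wrapper (hypothetical helper, not neutron's actual net_helpers code), using a stand-in callable instead of a real `ping -c 1 -W 1`:

```python
import time

def assert_eventually(check, retries=3, interval=0.0):
    """Retry a boolean check a few times before failing.

    `check` stands in for one single-shot ping attempt; retrying
    tolerates slow flow installation instead of failing on the first
    lost packet.
    """
    for _ in range(retries):
        if check():
            return
        time.sleep(interval)
    raise RuntimeError("check failed after %d attempts" % retries)

# Simulated: the first attempt loses the packet, the second succeeds.
results = iter([False, True])
assert_eventually(lambda: next(results))
print("ok")
```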

** Affects: neutron
 Importance: High
 Status: New


** Tags: functional-tests

** Changed in: neutron
   Importance: Medium = High

** Description changed:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidGVzdF9hcnBfc3Bvb2ZfYWxsb3dlZF9hZGRyZXNzX3BhaXJzXzBjaWRyXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzU2OTU1NTk3MTB9
  
  18 hits in last 7 days.
+ 
+ Here's a failure trace of
+ test_arp_spoof_allowed_address_pairs_0cidr(native):
+ 
+ ft1.78: 
neutron.tests.functional.agent.test_ovs_flows.ARPSpoofOFCtlTestCase.test_arp_spoof_allowed_address_pairs_0cidr(native)_StringException:
 Empty attachments:
+   pythonlogging:'neutron.api.extensions'
+   stderr
+   stdout
+ 
+ pythonlogging:'': {{{
+ 2015-06-30 19:36:25,695ERROR [neutron.agent.linux.utils] 
+ Command: ['ip', 'netns', 'exec', 'func-8f787d60-208d-4524-b4d4-e79b5cd19eae', 
'ping', '-c', 1, '-W', 1, '192.168.0.2']
+ Exit code: 1
+ Stdin: 
+ Stdout: PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
+ 
+ --- 192.168.0.2 ping statistics ---
+ 1 packets transmitted, 0 received, 100% packet loss, time 0ms
+ 
+ 
+ Stderr:
+ }}}
+ 
+ Traceback (most recent call last):
+   File neutron/tests/functional/agent/test_ovs_flows.py, line 169, in 
test_arp_spoof_allowed_address_pairs_0cidr
+ net_helpers.assert_ping(self.src_namespace, self.dst_addr)
+   File neutron/tests/common/net_helpers.py, line 69, in assert_ping
+ dst_ip])
+   File neutron/agent/linux/ip_lib.py, line 676, in execute
+ extra_ok_codes=extra_ok_codes, **kwargs)
+   File neutron/agent/linux/utils.py, line 138, in execute
+ raise RuntimeError(m)
+ RuntimeError: 
+ Command: ['ip', 'netns', 'exec', 'func-8f787d60-208d-4524-b4d4-e79b5cd19eae', 
'ping', '-c', 1, '-W', 1, '192.168.0.2']
+ Exit code: 1
+ Stdin: 
+ Stdout: PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
+ 
+ --- 192.168.0.2 ping statistics ---
+ 1 packets transmitted, 0 received, 100% packet loss, time 0ms

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470234

Title:
  test_arp_spoof_allowed_address_pairs_0cidr sporadically failing
  functional job

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidGVzdF9hcnBfc3Bvb2ZfYWxsb3dlZF9hZGRyZXNzX3BhaXJzXzBjaWRyXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzU2OTU1NTk3MTB9

  18 hits in last 7 days.

  Here's a failure trace of
  test_arp_spoof_allowed_address_pairs_0cidr(native):

  ft1.78: 
neutron.tests.functional.agent.test_ovs_flows.ARPSpoofOFCtlTestCase.test_arp_spoof_allowed_address_pairs_0cidr(native)_StringException:
 Empty attachments:
pythonlogging:'neutron.api.extensions'
stderr
stdout

  pythonlogging:'': {{{
  2015-06-30 19:36:25,695ERROR 

[Yahoo-eng-team] [Bug 1470225] [NEW] Support deprecated image types

2015-06-30 Thread Andrew Bogott
Public bug reported:

I frequently update the base Trusty images available to my users.  After
I do that, I want to discourage them from creating new servers based on
the old images.

If I remove the old images entirely or make them private, Horizon shows
servers as having type 'unknown.'  I'd like Horizon to support a glance
property of 'deprecated' for an image that should remain visible in
Horizon but not be added to the image pulldown when an instance is
created.
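The proposed behaviour amounts to a simple filter over the image list. A sketch assuming the 'deprecated' glance property proposed above (the property name and dict layout are illustrative, not Horizon's actual data model):

```python
def bootable_images(images):
    """Filter out images carrying a truthy 'deprecated' property.

    Sketch of the proposed Horizon behaviour: deprecated images stay
    visible elsewhere but are excluded from the launch-instance pulldown.
    """
    return [img for img in images
            if not img.get("properties", {}).get("deprecated")]

images = [
    {"name": "trusty-20150630", "properties": {}},
    {"name": "trusty-20150101", "properties": {"deprecated": "true"}},
]
print([img["name"] for img in bootable_images(images)])  # ['trusty-20150630']
```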

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1470225

Title:
  Support deprecated image types

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I frequently update the base Trusty images available to my users.
  After I do that, I want to discourage them from creating new servers
  based on the old images.

  If I remove the old images entirely or make them private, Horizon
  shows servers as having type 'unknown.'  I'd like Horizon to support a
  glance property of 'deprecated' for an image that should remain
  visible in Horizon but not be added to the image pulldown when an
  instance is created.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1470225/+subscriptions



[Yahoo-eng-team] [Bug 1467927] Re: Odd number of vCPUs breaks 'prefer' threads policy

2015-06-30 Thread Stephen Finucane
I was unsure how important the CPU topology exposed to the guest was,
but you're correct in saying that using a best-effort prefer policy
would result in bad scheduler decisions. We still have an 'implicit'
separate policy for odd numbers of cores and an implicit 'prefer' policy
for even numbers, but seeing as we don't really support thread policies
yet, this isn't really a bug.

I will close this bug and keep the above in mind when adding support for
the thread policies.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467927

Title:
  Odd number of vCPUs breaks 'prefer' threads policy

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Using a CPU policy of dedicated ('hw:cpu_policy=dedicated') results in
  vCPUs being pinned to pCPUs, per the original blueprint:

  http://specs.openstack.org/openstack/nova-
  specs/specs/kilo/implemented/virt-driver-cpu-pinning.html

  When scheduling instance with this extra spec there appears to be an
  implicit use of the 'prefer' threads policy, i.e. where possible vCPUs
  are pinned to thread siblings first. This is implicit because the
  threads policy aspect of this spec has not yet been implemented.

  However, this implicit 'prefer' policy breaks when a VM with an odd
  number of vCPUs is booted. This has been seen on a Hyper-Threading-
  enabled host where sibling sets are two long, but it would
  presumably happen on any host where the number of siblings (or any
  number between this value and one) is not a factor of the number of
  vCPUs (i.e. vCPUs % n != 0, for siblings = n > 0).

  It is reasonable to assume that a three vCPU VM, for example, should
  try best effort and use siblings for at the first two vCPUs of the VM
  (assuming you're on a host system with HyperThreading and sibling sets
  are of length two). This would give us a true best effort
  implementation.
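The best-effort behaviour described above can be sketched as walking sibling sets in order, so pairs of vCPUs land on thread siblings and an odd leftover vCPU takes the next free thread (a sketch of the desired policy, not Nova's actual pinning code):

```python
from itertools import chain

def best_effort_pin(vcpus, sibling_sets):
    """Map vCPU index -> pCPU, filling whole sibling sets first.

    Walking sibling sets in order pins pairs of vCPUs to thread
    siblings; an odd leftover vCPU takes the first thread of the
    next set, giving best-effort 'prefer' behaviour.
    """
    cpus = list(chain.from_iterable(sibling_sets))
    if vcpus > len(cpus):
        raise ValueError("not enough pCPUs for %d vCPUs" % vcpus)
    return {v: cpus[v] for v in range(vcpus)}

# Host siblings as in the expected cputune above (HT pairs 0/20 and 1/21):
print(best_effort_pin(3, [[0, 20], [1, 21]]))  # {0: 0, 1: 20, 2: 1}
```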

  ---

  # Testing Configuration

  Testing was conducted on a single-node, Fedora 21-based
  (3.17.8-300.fc21.x86_64) OpenStack instance (built with devstack). The
  system is a dual-socket, 10 core, HT-enabled system (2 sockets * 10
  cores * 2 threads = 40 pCPUs. 0-9,20-29 = node0, 10-19,30-39 =
  node1). Two flavors were used:

  openstack flavor create --ram 4096 --disk 20 --vcpus 3 demo.odd
  nova flavor-key demo.odd set hw:cpu_policy=dedicated

  openstack flavor create --ram 4096 --disk 20 --vcpus 4 demo.even
  nova flavor-key demo.even set hw:cpu_policy=dedicated

  # Results

  Correct case (even number of vCPUs)
  =

  The output from 'virsh dumpxml [ID]' for the four vCPU VM is given
  below. Similar results can be seen for varying even numbers of vCPUs
  (2, 4, 10 tested):

  <cputune>
  <shares>4096</shares>
  <vcpupin vcpu='0' cpuset='3'/>
  <vcpupin vcpu='1' cpuset='23'/>
  <vcpupin vcpu='2' cpuset='26'/>
  <vcpupin vcpu='3' cpuset='6'/>
  <emulatorpin cpuset='3,6,23,26'/>
  </cputune>

  Incorrect case (odd number of vCPUs)
  ==

  The output from 'virsh dumpxml [ID]' for the three vCPU VM is given
  below. Similar results can be seen for varying odd numbers of vCPUs
  (3, 5 tested):

  <cputune>
  <shares>3072</shares>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='0'/>
  <vcpupin vcpu='2' cpuset='25'/>
  <emulatorpin cpuset='0-1,25'/>
  </cputune>

  This isn't correct. We would expect something closer to this:

  <cputune>
  <shares>3072</shares>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='20'/>
  <vcpupin vcpu='2' cpuset='1'/>
  <emulatorpin cpuset='0-1,20'/>
  </cputune>

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467927/+subscriptions



[Yahoo-eng-team] [Bug 1470047] [NEW] CLI fails to report an error after creating a snapshot from instance

2015-06-30 Thread Liron Kuchlani
Public bug reported:

Description of problem:
The CLI fails to report an error and gets stuck at "Server snapshotting... 0"
when a user tries to save a snapshot of an instance while their quota is too
small.

Version-Release number of selected component (if applicable):
python-glanceclient-0.17.0-2.el7ost.noarch
python-glance-2015.1.0-6.el7ost.noarch
python-glance-store-0.4.0-1.el7ost.noarch
openstack-glance-2015.1.0-6.el7ost.noarch
openstack-nova-api-2015.1.0-13.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Edit /etc/glance/glance-api.conf set user_storage_quota with low space for 
creating snapshot from instance 
2. openstack-service restart glance
3. Create a snapshot from instance via command line: 'nova image-create 
instanceName snapName --poll'

Actual results:
The CLI fails to report an error and gets stuck at "Server snapshotting... 0"

Expected results:
An ERROR should appear indicating that the quota is too small


Additional info:
log
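The expected behaviour amounts to polling with a deadline and surfacing the error state instead of spinning forever. A sketch (the names here are illustrative, not novaclient's real internals):

```python
import time

def poll(fetch_status, timeout=300, interval=1):
    """Poll a status callable until it leaves 'SAVING', with a deadline.

    Instead of hanging forever at "Server snapshotting... 0", give up
    after `timeout` seconds and surface an ERROR status immediately.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status == "ERROR":
            raise RuntimeError("snapshot failed (check storage quota)")
        if status == "ACTIVE":
            return status
        time.sleep(interval)
    raise RuntimeError("timed out waiting for snapshot")

statuses = iter(["SAVING", "ACTIVE"])
print(poll(lambda: next(statuses), timeout=5, interval=0))  # ACTIVE
```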

** Affects: glance
 Importance: Undecided
 Status: New

** Attachment added: Glance log
   
https://bugs.launchpad.net/bugs/1470047/+attachment/4422326/+files/api.log.tar.gz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1470047

Title:
  CLI fails to report an error after creating a snapshot from instance

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Description of problem:
  The CLI fails to report an error and gets stuck at "Server snapshotting... 0"
  when a user tries to save a snapshot of an instance while their quota is too
  small.

  Version-Release number of selected component (if applicable):
  python-glanceclient-0.17.0-2.el7ost.noarch
  python-glance-2015.1.0-6.el7ost.noarch
  python-glance-store-0.4.0-1.el7ost.noarch
  openstack-glance-2015.1.0-6.el7ost.noarch
  openstack-nova-api-2015.1.0-13.el7ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. Edit /etc/glance/glance-api.conf set user_storage_quota with low space for 
creating snapshot from instance 
  2. openstack-service restart glance
  3. Create a snapshot from instance via command line: 'nova image-create 
instanceName snapName --poll'

  Actual results:
  The CLI fails to report an error and gets stuck at "Server snapshotting... 0"

  Expected results:
  An ERROR should appear indicating that the quota is too small

  
  Additional info:
  log

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1470047/+subscriptions



[Yahoo-eng-team] [Bug 1470052] [NEW] vm failed to start when image adapter type is scsi

2015-06-30 Thread Dongcan Ye
Public bug reported:

I followed the OpenStack Configuration Reference[1], downloaded the cloud image[2]
and converted it to vmdk:

$ qemu-img convert -f qcow2 trusty-server-cloudimg-amd64-disk1.img -O
vmdk ubuntu.vmdk

and then use glance image-create command to upload image:

$ glance image-create --name ubuntu-thick-scsi --is-public True --disk-format 
vmdk --container-format bare --property vmware_adaptertype=lsiLogic  \  
   --property vmware_disktype=preallocated --property 
vmware_ostype=ubuntu64Guest < ubuntu.vmdk
+---+--+  
| Property  | Value|  
+---+--+  
| Property 'vmware_adaptertype' | lsiLogic |  
| Property 'vmware_disktype'| preallocated |  
| Property 'vmware_ostype'  | ubuntu64Guest|  
| checksum  | 676e7fc58d2314db6a264c11804b2d4c |  
| container_format  | bare |  
| created_at| 2015-06-26T23:55:36  |  
| deleted   | False|  
| deleted_at| None |  
| disk_format   | vmdk |  
| id| e79d4815-932b-4be6-b90c-0515f826c615 |  
| is_public | True |  
| min_disk  | 0|  
| min_ram   | 0|  
| name  | ubuntu-thick-scsi|  
| owner | 93a022fd03d94b649d0127498e6149cf |  
| protected | False|  
| size  | 852230144|  
| status| active   |  
| updated_at| 2015-06-26T23:56:39  |  
| virtual_size  | None |  
+---+--+ 

I created an instance in the dashboard successfully, but it failed to boot
into the guest system.
I suspect the instance does not have a controller that supports the scsi disk;
when using ide, the instance runs well.


[1]http://docs.openstack.org/kilo/config-reference/content/vmware.html
[2]http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470052

Title:
  vm failed to start when image adapter type is scsi

Status in OpenStack Compute (Nova):
  New

Bug description:
  I followed the OpenStack Configuration Reference[1], downloaded the cloud
  image[2] and converted it to vmdk:

  $ qemu-img convert -f qcow2 trusty-server-cloudimg-amd64-disk1.img -O
  vmdk ubuntu.vmdk

  and then use glance image-create command to upload image:

  $ glance image-create --name ubuntu-thick-scsi --is-public True 
--disk-format vmdk --container-format bare --property 
vmware_adaptertype=lsiLogic  \  
 --property vmware_disktype=preallocated --property 
vmware_ostype=ubuntu64Guest < ubuntu.vmdk
  +---+--+  
  | Property  | Value|  
  +---+--+  
  | Property 'vmware_adaptertype' | lsiLogic |  
  | Property 'vmware_disktype'| preallocated |  
  | Property 'vmware_ostype'  | ubuntu64Guest|  
  | checksum  | 676e7fc58d2314db6a264c11804b2d4c |  
  | container_format  | bare |  
  | created_at| 2015-06-26T23:55:36  |  
  | deleted   | False|  
  | deleted_at| None |  
  | disk_format   | vmdk |  
  | id| e79d4815-932b-4be6-b90c-0515f826c615 |  
  | is_public | True |  
  | min_disk  | 0|  
  | min_ram   | 0|  
  | name  | ubuntu-thick-scsi|  
  | owner | 93a022fd03d94b649d0127498e6149cf |  
  | protected   

[Yahoo-eng-team] [Bug 1470076] [NEW] Security Group Attributes that are documented as UUIDs require validation

2015-06-30 Thread Sean M. Collins
Public bug reported:

In the API, security_group_id and remote_group_id are documented
as requiring UUIDs.
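The requested validation is straightforward with the standard library. A sketch (the real check would live in neutron's attribute validators, not here):

```python
import uuid

def is_uuid_like(value):
    """Return True if `value` parses as a canonically formatted UUID."""
    try:
        return str(uuid.UUID(value)).replace('-', '') == \
            value.replace('-', '').lower()
    except (TypeError, ValueError, AttributeError):
        return False

print(is_uuid_like("a8098c1a-f86e-11da-bd1a-00112444be1e"))  # True
print(is_uuid_like("not-a-uuid"))  # False
```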

** Affects: neutron
 Importance: Undecided
 Assignee: Sean M. Collins (scollins)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470076

Title:
  Security Group Attributes that are documented as UUIDs require
  validation

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  In the API, security_group_id and remote_group_id are documented
  as requiring UUIDs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1470076/+subscriptions



[Yahoo-eng-team] [Bug 1469738] Re: The operation of create image has failed with not enough free quota storage although sufficient space was available

2015-06-30 Thread Flavio Percoco
Actually, this is well described in the config option's documentation.
As of today:

cfg.StrOpt('user_storage_quota', default='0',
           help=_("Set a system wide quota for every user. This value is "
                  "the total capacity that a user can use across "
                  "all storage systems. A value of 0 means unlimited. "
                  "Optional unit can be specified for the value. Accepted "
                  "units are B, KB, MB, GB and TB representing "
                  "Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes "
                  "respectively. If no unit is specified then Bytes is "
                  "assumed. Note that there should not be any space "
                  "between value and unit and units are case sensitive.")),
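The pitfall behind the report is that a bare number means bytes, not megabytes. A sketch of a parser matching the documented behaviour (glance's real implementation may differ in detail):

```python
def parse_quota(value):
    """Convert a quota string like '1536000' or '15GB' to bytes.

    Matches the documented contract: optional case-sensitive unit suffix
    with no space before it; a bare number means Bytes.
    """
    units = {'B': 1, 'KB': 1024, 'MB': 1024 ** 2,
             'GB': 1024 ** 3, 'TB': 1024 ** 4}
    # Check two-letter suffixes before the bare 'B'.
    for suffix in sorted(units, key=len, reverse=True):
        if value.endswith(suffix):
            return int(value[:-len(suffix)]) * units[suffix]
    return int(value)

print(parse_quota('1536000'))  # 1536000 bytes (~1.5 MB), not 1536000 MB
print(parse_quota('1536000MB') // (1024 ** 2))  # 1536000
```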

** Changed in: glance
   Importance: Undecided => Low

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1469738

Title:
  The operation of create image has failed with not enough free quota
  storage although sufficient space was available

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  Description of problem:
  Creating an image failed with the error message "Denying attempt to upload 
image because it exceeds the quota", although there was enough space to upload 
the image

  Version-Release number of selected component (if applicable):
  python-glanceclient-0.17.0-2.el7ost.noarch
  python-glance-2015.1.0-6.el7ost.noarch
  python-glance-store-0.4.0-1.el7ost.noarch
  openstack-glance-2015.1.0-6.el7ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  With only cirros image(size=12.6MB) run those steps
  1. Edit /etc/glance/glance-api.conf set user_storage_quota =1536000
  2. openstack-service restart glance
  3. Try to upload an image less than 1536000MB

  Actual results:
  Failed with 'Denying attempt to upload image because it exceeds the quota'

  Expected results:
  Creating an image should succeed 

  Additional info:
  It is similar to bug id=1043929
  Glance logs

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1469738/+subscriptions



[Yahoo-eng-team] [Bug 1470108] Re: Gate upgrade test juno-kilo fails on oslo.serialization dependency mismatch for keystonemiddleware

2015-06-30 Thread Luigi Toscano
Thanks to Matthew Treinish and Morgan Fainberg, the issue was analyzed
and properly fixed.

The version of oslo.serialization of python-keystoneclient from the stable/kilo 
branch and the one from keystone were not compatible. This was fixed by a new 
version of python-keystoneclient for kilo (1.3.2):
http://lists.openstack.org/pipermail/openstack-announce/2015-June/000406.html

The usage of the new client for kilo fixes the issue; the review which was 
failing is now working (and was merged):
https://review.openstack.org/#/c/195657/
http://logs.openstack.org/57/195657/1/gate/gate-grenade-dsvm/32036b3/logs/grenade.sh.txt.gz


For more details, see the channel logs:
http://eavesdrop.openstack.org/irclogs/%23openstack-qa/%23openstack-qa.2015-06-30.log.html#t2015-06-30T14:25:43

This bug has been properly assigned to python-keystoneclient (and
removed from horizon) for archiving purposes.

** No longer affects: horizon

** Also affects: python-keystoneclient
   Importance: Undecided
   Status: New

** No longer affects: grenade

** Changed in: python-keystoneclient
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1470108

Title:
  Gate upgrade test juno-kilo fails on oslo.serialization dependency
  mismatch for keystonemiddleware

Status in Python client library for Keystone:
  Fix Released

Bug description:
  The juno-kilo upgrade tests fails during the installation of
  keystonemiddleware with a version conflict exception:

  2015-06-29 17:18:03.056 | + /usr/local/bin/keystone-manage db_sync
  2015-06-29 17:18:03.285 | Traceback (most recent call last):
  2015-06-29 17:18:03.285 |   File "/usr/local/bin/keystone-manage", line 4, in <module>
  2015-06-29 17:18:03.285 |     __import__('pkg_resources').require('keystone==2015.1.1.dev13')
  2015-06-29 17:18:03.285 |   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3084, in <module>
  2015-06-29 17:18:03.286 |     @_call_aside
  2015-06-29 17:18:03.286 |   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3070, in _call_aside
  2015-06-29 17:18:03.286 |     f(*args, **kwargs)
  2015-06-29 17:18:03.286 |   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3097, in _initialize_master_working_set
  2015-06-29 17:18:03.287 |     working_set = WorkingSet._build_master()
  2015-06-29 17:18:03.287 |   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 653, in _build_master
  2015-06-29 17:18:03.287 |     return cls._build_from_requirements(__requires__)
  2015-06-29 17:18:03.287 |   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 666, in _build_from_requirements
  2015-06-29 17:18:03.287 |     dists = ws.resolve(reqs, Environment())
  2015-06-29 17:18:03.287 |   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 844, in resolve
  2015-06-29 17:18:03.287 |     raise VersionConflict(dist, req).with_context(dependent_req)
  2015-06-29 17:18:03.287 | pkg_resources.ContextualVersionConflict: (oslo.serialization 1.4.0 (/usr/local/lib/python2.7/dist-packages), Requirement.parse('oslo.serialization<=1.2.0,>=1.0.0'), set(['python-keystoneclient']))
  2015-06-29 17:18:03.297 | + die 61 'DB sync error'
  2015-06-29 17:18:03.297 | + local exitcode=1

  See for example: https://review.openstack.org/#/c/195657/
  
http://logs.openstack.org/57/195657/1/gate/gate-grenade-dsvm/0443321/logs/grenade.sh.txt.gz

  Maybe it's a version issue in some keystone module, or maybe grenade
  does not upgrade the dependencies in the proper order.
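  The conflict in the log can be reproduced directly with pkg_resources,
  the same module that raises in the traceback: resolution aborts when an
  installed distribution falls outside a dependent package's requirement
  range. The requirement string below is illustrative, not the exact pin
  from python-keystoneclient's juno requirements:

```python
import pkg_resources

# Sketch of the check that fails above: pkg_resources refuses to build a
# working set when an installed version does not satisfy a dependent
# package's requirement. Version bounds here are illustrative only.
req = pkg_resources.Requirement.parse("oslo.serialization<=1.2.0,>=1.0.0")

assert "1.4.0" not in req  # the installed 1.4.0 violates the upper bound
assert "1.1.0" in req      # a version inside the range would satisfy it
```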

  This error (currently masked by another set of failures which involve
  log storing on the gates) blocks the backport of patches to kilo at
  least for horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-keystoneclient/+bug/1470108/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470279] [NEW] ofagent unit tests fail for multiple mock patches

2015-06-30 Thread fumihiko kakuma
Public bug reported:

The gate-networking-ofagent-python27 job fails with the following error.
The error is caused by starting multiple mock patches on the same
target, which was forbidden by https://review.openstack.org/#/c/195881/.

2015-06-30 06:35:00.771 | {0} networking_ofagent.tests.unit.ofagent.test_arp_lib.TestArpLib.test_packet_in_handler_corrupted [0.030123s] ... FAILED
2015-06-30 06:35:00.772 | 
2015-06-30 06:35:00.772 | Captured traceback:
2015-06-30 06:35:00.772 | ~~~
2015-06-30 06:35:00.772 | Traceback (most recent call last):
2015-06-30 06:35:00.772 |   File "networking_ofagent/tests/unit/ofagent/test_arp_lib.py", line 311, in test_packet_in_handler_corrupted
2015-06-30 06:35:00.772 |     side_effect=ValueError).start()
2015-06-30 06:35:00.772 |   File "/home/jenkins/workspace/gate-networking-ofagent-python27/.tox/py27/src/neutron/neutron/tests/base.py", line 191, in new_start
2015-06-30 06:35:00.772 |     ''.join(self.first_traceback.get(key, []
2015-06-30 06:35:00.772 |   File "/home/jenkins/workspace/gate-networking-ofagent-python27/.tox/py27/local/lib/python2.7/site-packages/unittest2/case.py", line 666, in fail
2015-06-30 06:35:00.772 |     raise self.failureException(msg)
2015-06-30 06:35:00.772 | AssertionError: mock.patch was setup on an already patched target Mod(ryu.lib.packet.packet).Packet. Stop the original patch before starting a new one. Traceback of 1st patch:   File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
2015-06-30 06:35:00.773 |     "__main__", fname, loader, pkg_name)
2015-06-30 06:35:00.773 |   File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
2015-06-30 06:35:00.773 |     exec code in run_globals
2015-06-30 06:35:00.773 |   File "/home/jenkins/workspace/gate-networking-ofagent-python27/.tox/py27/local/lib/python2.7/site-packages/subunit/run.py", line 149, in <module>
2015-06-30 06:35:00.773 |     main()


The whole log is available at the following address:

http://logs.openstack.org/00/184900/3/check/gate-networking-ofagent-python27/18ee891/console.html
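A minimal sketch of the pattern the neutron test base now rejects, and of
the fix (stopping the original patch before starting a new one). Here
`json.dumps` stands in for the real `ryu.lib.packet.packet.Packet` target;
this is an illustration, not the ofagent test itself:

```python
import json
from unittest import mock

# Anti-pattern: a second patch started on a target that is already
# patched. Neutron's wrapped new_start() (base.py in the traceback above)
# fails the test as soon as it sees this.
first = mock.patch("json.dumps", return_value="first")
first.start()
second = mock.patch("json.dumps", return_value="second")
second.start()            # overlapping patch on the same target
assert json.dumps({}) == "second"
mock.patch.stopall()      # teardown order is fragile with overlaps

# Fix: stop the original patch before starting a new one.
first = mock.patch("json.dumps", side_effect=ValueError)
first.start()
first.stop()
second = mock.patch("json.dumps", return_value="second")
second.start()
assert json.dumps({}) == "second"
second.stop()
assert json.dumps({}) == "{}"  # original function restored
```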

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470279

Title:
  ofagent unit tests fail for multiple mock patches

Status in OpenStack Neutron (virtual network service):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1470279/+subscriptions
