[Yahoo-eng-team] [Bug 1473598] Re: misleading error when creating a group without specifying a domain

2015-07-17 Thread jiaxi
** Changed in: keystone
 Assignee: jiaxi (tjxiter) => (unassigned)

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1473598

Title:
  misleading error when creating a group without specifying a domain

Status in Keystone:
  Invalid

Bug description:
  If you try to create a group, but do not include a domain name or id
  in the request, then keystone responds with a 401, making it look like
  you have an authentication problem.

  The correct answer in this case would be a 400 (bad request).
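
  For reference, a minimal reproduction sketch of the request involved,
  using python-requests (the endpoint and token are placeholders, not
  taken from the report):

    import json
    import requests

    # Create a group with no domain_id in the body (and no domain-scoped
    # token); endpoint and token values below are placeholders.
    resp = requests.post(
        'http://keystone:5000/v3/groups',
        headers={'X-Auth-Token': 'ADMIN_TOKEN',
                 'Content-Type': 'application/json'},
        data=json.dumps({'group': {'name': 'testgroup'}}))
    print(resp.status_code)  # reported: 401; expected: 400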

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1473598/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475638] [NEW] There is no option to specify external_gateway_info in neutron router-create command

2015-07-17 Thread Kiran
Public bug reported:

We can specify external_gateway_info in the body of the router-create API,
but there is no such option in the client.

API command

$ curl -v -i -X POST -H "X-Auth-Token: $auth_token" -H 
"Content-type: application/json" -d '{"router": {"external_gateway_info": 
{"network_id": "e9fc4a13-31a1-419e-9be7-53b8eb99e2f9"}, "name": 
"router--8574156", "admin_state_up": false}}' 
http://10.101.1.40:9696/v2.0/routers
* About to connect() to 10.101.1.40 port 9696 (#0)
*   Trying 10.101.1.40...
* Connected to 10.101.1.40 (10.101.1.40) port 9696 (#0)
> POST /v2.0/routers HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.101.1.40:9696
> Accept: */*
> X-Auth-Token: 4474c1b7718d4cbd8c138520b308b14f
> Content-type: application/json
> Content-Length: 145
> 
* upload completely sent off: 145 out of 145 bytes
< HTTP/1.1 201 Created
HTTP/1.1 201 Created
< Content-Type: application/json; charset=UTF-8
Content-Type: application/json; charset=UTF-8
< Content-Length: 383
Content-Length: 383
< X-Openstack-Request-Id: req-d052a68a-2cb9-481d-b17f-70d050b56b31
X-Openstack-Request-Id: req-d052a68a-2cb9-481d-b17f-70d050b56b31
< Date: Fri, 17 Jul 2015 12:31:35 GMT
Date: Fri, 17 Jul 2015 12:31:35 GMT

< 
* Connection #0 to host 10.101.1.40 left intact
{"router": {"status": "ACTIVE", "external_gateway_info": {"network_id": 
"e9fc4a13-31a1-419e-9be7-53b8eb99e2f9", "external_fixed_ips": [{"subnet_id": 
"1dde980c-94c6-4b35-a9a1-5e1e1dc28efc", "ip_address": "10.101.1.108"}]}, 
"name": "router--8574156", "admin_state_up": false, "tenant_id": 
"90a112852ba84606976607176c1340dd", "routes": [], "id": 
"efc9f0ae-fd87-4fc6-8431-01cf76962bac"}}
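
For comparison, the same request is easy to make from Python; a minimal
sketch with python-neutronclient (credentials and auth endpoint are
placeholders) showing what the CLI would need to expose as an option:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://10.101.1.40:5000/v2.0')
    neutron.create_router({'router': {
        'name': 'router--8574156',
        'admin_state_up': False,
        'external_gateway_info': {
            'network_id': 'e9fc4a13-31a1-419e-9be7-53b8eb99e2f9'}}})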

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475638

Title:
  There is no option to specify external_gateway_info in neutron router-
  create command

Status in neutron:
  New

Bug description:
  We can specify external_gateway_info in the body of the router-create API,
  but there is no such option in the client.

  API command

  $ curl -v -i -X POST -H "X-Auth-Token: $auth_token" -H 
"Content-type: application/json" -d '{"router": {"external_gateway_info": 
{"network_id": "e9fc4a13-31a1-419e-9be7-53b8eb99e2f9"}, "name": 
"router--8574156", "admin_state_up": false}}' 
http://10.101.1.40:9696/v2.0/routers
  * About to connect() to 10.101.1.40 port 9696 (#0)
  *   Trying 10.101.1.40...
  * Connected to 10.101.1.40 (10.101.1.40) port 9696 (#0)
  > POST /v2.0/routers HTTP/1.1
  > User-Agent: curl/7.29.0
  > Host: 10.101.1.40:9696
  > Accept: */*
  > X-Auth-Token: 4474c1b7718d4cbd8c138520b308b14f
  > Content-type: application/json
  > Content-Length: 145
  > 
  * upload completely sent off: 145 out of 145 bytes
  < HTTP/1.1 201 Created
  HTTP/1.1 201 Created
  < Content-Type: application/json; charset=UTF-8
  Content-Type: application/json; charset=UTF-8
  < Content-Length: 383
  Content-Length: 383
  < X-Openstack-Request-Id: req-d052a68a-2cb9-481d-b17f-70d050b56b31
  X-Openstack-Request-Id: req-d052a68a-2cb9-481d-b17f-70d050b56b31
  < Date: Fri, 17 Jul 2015 12:31:35 GMT
  Date: Fri, 17 Jul 2015 12:31:35 GMT

  < 
  * Connection #0 to host 10.101.1.40 left intact
  {"router": {"status": "ACTIVE", "external_gateway_info": {"network_id": 
"e9fc4a13-31a1-419e-9be7-53b8eb99e2f9", "external_fixed_ips": [{"subnet_id": 
"1dde980c-94c6-4b35-a9a1-5e1e1dc28efc", "ip_address": "10.101.1.108"}]}, 
"name": "router--8574156", "admin_state_up": false, "tenant_id": 
"90a112852ba84606976607176c1340dd", "routes": [], "id": 
"efc9f0ae-fd87-4fc6-8431-01cf76962bac"}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475638/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475636] [NEW] Add DHCP and DNS log into dhcp agent

2015-07-17 Thread changzhi
Public bug reported:

There is no dnsmasq log info in the DHCP agent. I think dnsmasq log info
(including the DNS log and the DHCP log) is very useful to developers, so
I am adding a new configuration option to the DHCP agent. Patch:
https://review.openstack.org/#/c/202855
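
For context, the knobs themselves are dnsmasq's; a hedged sketch of what can
already be wired through the agent via the existing dnsmasq_config_file
option (the log path is an example, and the patch above may expose this
differently):

    # dhcp_agent.ini
    [DEFAULT]
    dnsmasq_config_file = /etc/neutron/dnsmasq.conf

    # /etc/neutron/dnsmasq.conf
    log-dhcp
    log-queries
    log-facility=/var/log/neutron/dnsmasq.log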

** Affects: neutron
 Importance: Undecided
 Assignee: changzhi (changzhi)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475636

Title:
  Add DHCP and DNS log into dhcp agent

Status in neutron:
  In Progress

Bug description:
  There is no dnsmasq log info in the DHCP agent. I think dnsmasq log info
  (including the DNS log and the DHCP log) is very useful to developers, so
  I am adding a new configuration option to the DHCP agent. Patch:
  https://review.openstack.org/#/c/202855

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475636/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475652] [NEW] libvirt, rbd imagebackend, disk.rescue not deleted when unrescued

2015-07-17 Thread raphael.glon
Public bug reported:

Reproduced on Juno (actually tested on a fork of 2014.2.3; sorry in
advance if this is invalid, but I think the legacy version is also
affected by it).

Not tested on newer versions, but looking at the code they seem
impacted too.

For the Rbd imagebackend only, when unrescuing an instance the disk.rescue
file is not actually deleted on remote storage (only the rbd session is
destroyed).

Consequence: when rescuing the instance once again, it simply ignores the
new rescue image and takes the old _disk.rescue image.

Reproduce:

1. nova rescue instance

(take care that you are actually booted into the vda rescue disk: when
rescuing an instance from the same image it was spawned from (the default
case), the filesystem UUID is the same, so depending on your image's fstab
(a UUID= entry) you can actually boot from the image you are trying to
rescue, but that is another matter that concerns template building)

edit rescue image disk

2. nova unrescue instance

3. nova rescue instance - you get back the disk.rescue spawned in 1

If confirmed, a fix is coming soon.

Concerning the fix, there are several possibilities:
- nova.virt.libvirt.driver:LibvirtDriver - the unrescue method is not 
deleting the correct files
or
- nova.virt.libvirt.imagebackend:Rbd - erase disk.rescue in the create-image 
method if it already exists

Rebuild is not affected by the issue; deleting an instance correctly deletes 
the files on remote storage.
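
Whichever way it lands, the missing step amounts to removing the stale image
from the pool; a minimal sketch with the python rados/rbd bindings (pool,
image name and conf path are examples; this is not nova's actual code):

    import rados
    import rbd

    def remove_rescue_disk(pool, image_name, conf='/etc/ceph/ceph.conf'):
        # Delete e.g. '<instance_uuid>_disk.rescue' left behind on unrescue.
        cluster = rados.Rados(conffile=conf)
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx(pool)
            try:
                rbd.RBD().remove(ioctx, image_name)
            except rbd.ImageNotFound:
                pass  # already gone, nothing to do
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()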

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

- Reproduced on juno version (actually tested on a fork from 2014.2.3,  sorry 
in advance if invalid but i think the legacy version is also concerned by it)
-  
- not tested on younger versions, but looking at the code they seem impacted too
+ Reproduced on juno version (actually tested on a fork from 2014.2.3,
+ sorry in advance if invalid but i think the legacy version is also
+ concerned by it)
+ 
+ not tested on younger versions, but looking at the code they seem
+ impacted too
  
  For Rbd imagebackend only, when unrescuing an instance the disk.rescue
  file is actually not deleted on remote storage (only the rbd session is
  destroyed)
  
  Consequence: when rescuing instance once again, it simply ignores the
  new rescue image and takes the old _disk.rescue image
  
- 
  Reproduce:
  
  1. nova rescue instance
  
- (take care that you are booted to the vda rescue disk - when rescuing an 
instance from the same image it was spawned from (case by default), since fs 
uuid is the same according to fsstab of the template (entry UUID=) you can 
actually boot from the image you are actually trying to rescue, but this is 
another matter that concerns template building)
-  
+ (take care that you are booted to the vda rescue disk - when rescuing
+ an instance from the same image it was spawned from (case by default),
+ since fs uuid is the same, according to your image fstab (if entry
+ UUID=) you can actually boot from the image you are actually trying to
+ rescue, but this is another matter that concerns template building)
+ 
  edit rescue image disk
  
  2. nova unrescue instance
  
  3. nova rescue instance - you get back the disk.rescue spawned in 1
  
- 
  if confirmed, fix coming soon
  
- Concerning fix several possibilities: 
- - nova.virt.libvirt.driver :LibvirtDriver- unrescue method, not deleting the 
correct files 
+ Concerning fix several possibilities:
+ - nova.virt.libvirt.driver :LibvirtDriver- unrescue method, not deleting the 
correct files
  or
  - nova.virt.libvirt.imagebackend:Rbd - erase disk.rescue in create image 
method if already existing
  
  Rebuild not concerned by issue, delete instance correctly deletes files
  on remote storage

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475652

Title:
  libvirt, rbd imagebackend, disk.rescue not deleted when unrescued

Status in OpenStack Compute (nova):
  New

Bug description:
  Reproduced on juno version (actually tested on a fork from 2014.2.3,
  sorry in advance if invalid but i think the legacy version is also
  concerned by it)

  not tested on younger versions, but looking at the code they seem
  impacted too

  For Rbd imagebackend only, when unrescuing an instance the disk.rescue
  file is actually not deleted on remote storage (only the rbd session
  is destroyed)

  Consequence: when rescuing instance once again, it simply ignores the
  new rescue image and takes the old _disk.rescue image

  Reproduce:

  1. nova rescue instance

  (take care that you are booted to the vda rescue disk - when rescuing
  an instance from the same image it was spawned from (case by default),
  since fs uuid is the same, according to your image fstab (if entry
  UUID=) you can actually boot from the image you are actually trying to
  rescue, but this is another matter that concerns template building)

  edit rescue image disk

  2. nova unrescue instance

  3. 

[Yahoo-eng-team] [Bug 1475661] [NEW] py27 jobs failing due to mock_open after mock 1.1.4 released

2015-07-17 Thread Sylvain Bauza
Public bug reported:

2015-07-17 12:11:00.987 | ==
2015-07-17 12:11:00.987 | Failed 2 tests - output below:
2015-07-17 12:11:00.987 | ==
2015-07-17 12:11:00.987 | 
2015-07-17 12:11:00.988 | 
nova.tests.unit.virt.hyperv.test_vhdutils.VHDUtilsTestCase.test_get_vhd_format_zero_length_file
2015-07-17 12:11:00.988 | 
---
2015-07-17 12:11:00.988 | 
2015-07-17 12:11:00.988 | Captured traceback:
2015-07-17 12:11:00.988 | ~~~
2015-07-17 12:11:00.988 | Traceback (most recent call last):
2015-07-17 12:11:00.988 |   File 
"nova/tests/unit/virt/hyperv/test_vhdutils.py", line 279, in 
test_get_vhd_format_zero_length_file
2015-07-17 12:11:00.988 | f.seek.assert_called_once_with(0, 2)
2015-07-17 12:11:00.988 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 947, in assert_called_once_with
2015-07-17 12:11:00.988 | raise AssertionError(msg)
2015-07-17 12:11:00.988 | AssertionError: Expected 'seek' to be called 
once. Called 0 times.
2015-07-17 12:11:00.989 | 
2015-07-17 12:11:00.989 | 
2015-07-17 12:11:00.989 | 
nova.tests.unit.virt.hyperv.test_vmops.VMOpsTestCase.test_get_console_output
2015-07-17 12:11:00.989 | 

2015-07-17 12:11:00.989 | 
2015-07-17 12:11:00.989 | Captured traceback:
2015-07-17 12:11:00.989 | ~~~
2015-07-17 12:11:00.989 | Traceback (most recent call last):
2015-07-17 12:11:00.989 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
2015-07-17 12:11:00.989 | return func(*args, **keywargs)
2015-07-17 12:11:00.989 |   File 
"nova/tests/unit/virt/hyperv/test_vmops.py", line 981, in 
test_get_console_output
2015-07-17 12:11:00.989 | self.assertEqual(self.FAKE_LOG * 2, 
instance_log)
2015-07-17 12:11:00.990 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 350, in assertEqual
2015-07-17 12:11:00.990 | self.assertThat(observed, matcher, message)
2015-07-17 12:11:00.990 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
2015-07-17 12:11:00.990 | raise mismatch_error
2015-07-17 12:11:00.990 | testtools.matchers._impl.MismatchError: 
'fake_logfake_log' != 'fake_logfake_logfake_logfake_log'

http://logs.openstack.org/57/186757/13/check/gate-nova-
python27/c75a463/console.html#_2015-07-17_12_11_00_988


Logstash currently has a 12-hour backlog, so I am unable to get a failure
pattern.
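
Both failures involve tests built on mock.mock_open, whose file-handle
read()/seek() semantics changed across mock releases; a minimal sketch of
the pattern in question (not the exact test code):

    import mock

    m = mock.mock_open(read_data='fake_log')
    with mock.patch('six.moves.builtins.open', m):
        with open('console.log') as f:
            print(f.read())
    # Repeated read() calls and seek() handling on the mocked handle are
    # exactly where mock 1.1.x behaves differently from older releases,
    # which is what trips the two Hyper-V tests above.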

** Affects: nova
 Importance: Critical
 Status: Confirmed


** Tags: testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475661

Title:
  py27 jobs failing due to mock_open after mock 1.1.4 released

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  2015-07-17 12:11:00.987 | ==
  2015-07-17 12:11:00.987 | Failed 2 tests - output below:
  2015-07-17 12:11:00.987 | ==
  2015-07-17 12:11:00.987 | 
  2015-07-17 12:11:00.988 | 
nova.tests.unit.virt.hyperv.test_vhdutils.VHDUtilsTestCase.test_get_vhd_format_zero_length_file
  2015-07-17 12:11:00.988 | 
---
  2015-07-17 12:11:00.988 | 
  2015-07-17 12:11:00.988 | Captured traceback:
  2015-07-17 12:11:00.988 | ~~~
  2015-07-17 12:11:00.988 | Traceback (most recent call last):
  2015-07-17 12:11:00.988 |   File 
"nova/tests/unit/virt/hyperv/test_vhdutils.py", line 279, in 
test_get_vhd_format_zero_length_file
  2015-07-17 12:11:00.988 | f.seek.assert_called_once_with(0, 2)
  2015-07-17 12:11:00.988 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 947, in assert_called_once_with
  2015-07-17 12:11:00.988 | raise AssertionError(msg)
  2015-07-17 12:11:00.988 | AssertionError: Expected 'seek' to be called 
once. Called 0 times.
  2015-07-17 12:11:00.989 | 
  2015-07-17 12:11:00.989 | 
  2015-07-17 12:11:00.989 | 
nova.tests.unit.virt.hyperv.test_vmops.VMOpsTestCase.test_get_console_output
  2015-07-17 12:11:00.989 | 

  2015-07-17 12:11:00.989 | 
  2015-07-17 12:11:00.989 | Captured traceback:
  2015-07-17 12:11:00.989 | ~~~
  2015-07-17 12:11:00.989 | Traceback (most recent call last):
  2015-07-17 12:11:00.989 |   File 

[Yahoo-eng-team] [Bug 1475663] [NEW] Incorrect behaviour of method _check_instance_exists

2015-07-17 Thread Sergey Nikitin
Public bug reported:

This method must check instance existence in the CURRENT token's scope. But
now it checks instance existence in ANY scope. This happens because the
token_only parameter was missing from the sqlalchemy query.
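
A generic SQLAlchemy sketch of the difference (a toy model assuming
SQLAlchemy 1.4+, not nova's actual schema or query helpers):

    from sqlalchemy import Column, String, create_engine
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class Instance(Base):
        __tablename__ = 'instances'
        uuid = Column(String, primary_key=True)
        project_id = Column(String)

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add(Instance(uuid='u1', project_id='tenant-a'))
        session.commit()
        # Unscoped existence check: matches rows owned by ANYONE (the bug).
        print(session.query(Instance).filter_by(uuid='u1').count())  # 1
        # Scoped check: also filter on the caller's scope (the fix).
        print(session.query(Instance)
              .filter_by(uuid='u1', project_id='tenant-b').count())  # 0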

** Affects: nova
 Importance: Undecided
 Assignee: Sergey Nikitin (snikitin)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Sergey Nikitin (snikitin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475663

Title:
  Incorrect behaviour of method _check_instance_exists

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  This method must check instance existence in the CURRENT token's scope. But
  now it checks instance existence in ANY scope. This happens because the
  token_only parameter was missing from the sqlalchemy query.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475663/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475647] [NEW] Get image API returns 500 if a body is included

2015-07-17 Thread Niall Bunting
Public bug reported:

Overview:
If a user attaches a body to an HTTP GET to fetch the image data, a 500 
response is returned, as the API did not expect a body.

Steps to reproduce:
(1) The curl command, which includes a body:
curl -v -X GET 
http://16.49.137.85:9292/v2/images/5619cf1f-5c43-4a1d-a90b-aa7354e453e7 -H 
"X-Auth-Token: 3ee83196ecbb4a559b945ac849c1520e" -d '[]'

(2) Sends the following:
GET http://16.49.137.85:9292/v2/images/5619cf1f-5c43-4a1d-a90b-aa7354e453e7 
HTTP/1.1.
User-Agent: curl/7.35.0.
Host: 16.49.137.85:9292.
Accept: */*.
Proxy-Connection: Keep-Alive.
X-Auth-Token: 3ee83196ecbb4a559b945ac849c1520e.
Content-Length: 2.
Content-Type: application/x-www-form-urlencoded.
.
[]

Actual:
The response is a 500 error.

Expected:
It may be nice if it caught the fact that a body was not expected and 
returned an error early, to stop the whole body from being uploaded. I would 
like some input on what people think the expected behavior should be.
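
One possible shape for such an early check, sketched as WSGI middleware with
webob (illustrative only, not glance's actual code):

    import webob.dec
    import webob.exc

    class RejectGetBody(object):
        """Refuse GET requests that carry a body, before any handler runs."""

        def __init__(self, app):
            self.app = app

        @webob.dec.wsgify
        def __call__(self, req):
            if req.method == 'GET' and (req.content_length or 0) > 0:
                return webob.exc.HTTPBadRequest(
                    explanation='GET requests must not include a body')
            return req.get_response(self.app)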

** Affects: glance
 Importance: Undecided
 Assignee: Niall Bunting (niall-bunting)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Niall Bunting (niall-bunting)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1475647

Title:
  Get image API returns 500 if a body is included

Status in Glance:
  New

Bug description:
  Overview:
  If a user attaches a body to an HTTP GET to fetch the image data, a 500 
response is returned, as the API did not expect a body.

  Steps to reproduce:
  (1) The curl command, which includes a body:
  curl -v -X GET 
http://16.49.137.85:9292/v2/images/5619cf1f-5c43-4a1d-a90b-aa7354e453e7 -H 
"X-Auth-Token: 3ee83196ecbb4a559b945ac849c1520e" -d '[]'

  (2) Sends the following:
  GET http://16.49.137.85:9292/v2/images/5619cf1f-5c43-4a1d-a90b-aa7354e453e7 
HTTP/1.1.
  User-Agent: curl/7.35.0.
  Host: 16.49.137.85:9292.
  Accept: */*.
  Proxy-Connection: Keep-Alive.
  X-Auth-Token: 3ee83196ecbb4a559b945ac849c1520e.
  Content-Length: 2.
  Content-Type: application/x-www-form-urlencoded.
  .
  []

  Actual:
  The response is a 500 error.

  Expected:
  It may be nice if it caught the fact that a body was not expected and 
returned an error early, to stop the whole body from being uploaded. I would 
like some input on what people think the expected behavior should be.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1475647/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475655] [NEW] Unit_add call fails for fcp volumes when target port has not been configured

2015-07-17 Thread Stefan Amann
Public bug reported:

Linux on System z can be configured for automated port and LUN scanning. If 
both features are turned off, ports and LUNs need to be added using explicit 
calls.
While os-brick currently uses explicit calls to add LUNs, the calls for adding 
ports are missing. If an administrator does not manually issue the port_rescan 
call to add fibre-channel target ports, OpenStack will fail to add any 
fibre-channel LUN on System z.
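
For reference, the port scan can be triggered through sysfs on Linux on
System z; a hedged sketch (the zfcp device bus ID is an example):

    def zfcp_port_rescan(device_bus_id):
        # Ask the zfcp driver to (re)discover remote target ports on the HBA.
        path = '/sys/bus/ccw/drivers/zfcp/%s/port_rescan' % device_bus_id
        with open(path, 'w') as f:
            f.write('1')

    zfcp_port_rescan('0.0.1901')  # example device bus ID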

** Affects: nova
 Importance: Undecided
 Assignee: Stefan Amann (stefan-amann)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Stefan Amann (stefan-amann)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475655

Title:
  Unit_add call fails for fcp volumes when target port has not been
  configured

Status in OpenStack Compute (nova):
  New

Bug description:
  Linux on System z can be configured for automated port and LUN scanning. If 
both features are turned off, ports and LUNs need to be added using explicit 
calls.
  While os-brick currently uses explicit calls to add LUNs, the calls for 
adding ports are missing. If an administrator does not manually issue the 
port_rescan call to add fibre-channel target ports, OpenStack will fail to add 
any fibre-channel LUN on System z.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475655/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1190149] Re: Token auth fails when token is larger than 8k

2015-07-17 Thread Serg Melikyan
** Also affects: murano
   Importance: Undecided
   Status: New

** Changed in: murano
   Importance: Undecided => High

** Changed in: murano
   Status: New => Confirmed

** Changed in: murano
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1190149

Title:
  Token auth fails when token is larger than 8k

Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Committed
Status in Glance:
  Fix Released
Status in Glance havana series:
  Fix Committed
Status in heat:
  Fix Released
Status in heat havana series:
  Fix Committed
Status in Keystone:
  Fix Released
Status in murano:
  Confirmed
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Object Storage (swift):
  Fix Released
Status in Trove:
  Fix Released

Bug description:
  The following tests fail when there are 8 or more endpoints registered with 
keystone:
  tempest.api.compute.test_auth_token.AuthTokenTestJSON.test_v3_token
  tempest.api.compute.test_auth_token.AuthTokenTestXML.test_v3_token

  Steps to reproduce:
  - run devstack with the following services (the heat h-* apis push the 
endpoint count over the threshold):

ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-sch,horizon,mysql,rabbit,sysstat,tempest,s-proxy,s-account,s-container,s-object,cinder,c-api,c-vol,c-sch,n-cond,heat,h-api,h-api-cfn,h-api-cw,h-eng,n-net
  - run the failing tempest tests, e.g.
testr run test_v3_token
  - results in the following errors:
  ERROR: tempest.api.compute.test_auth_token.AuthTokenTestJSON.test_v3_token
  tags: worker-0
  --
  Traceback (most recent call last):
File "tempest/api/compute/test_auth_token.py", line 48, in test_v3_token
  self.servers_v3.list_servers()
File "tempest/services/compute/json/servers_client.py", line 138, in 
list_servers
  resp, body = self.get(url)
File "tempest/common/rest_client.py", line 269, in get
  return self.request('GET', url, headers)
File "tempest/common/rest_client.py", line 394, in request
  resp, resp_body)
File "tempest/common/rest_client.py", line 443, in _error_checker
  resp_body = self._parse_resp(resp_body)
File "tempest/common/rest_client.py", line 327, in _parse_resp
  return json.loads(body)
File "/usr/lib64/python2.7/json/__init__.py", line 326, in loads
  return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
  obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
  raise ValueError("No JSON object could be decoded")
  ValueError: No JSON object could be decoded
  ==
  ERROR: tempest.api.compute.test_auth_token.AuthTokenTestXML.test_v3_token
  tags: worker-0
  --
  Traceback (most recent call last):
File "tempest/api/compute/test_auth_token.py", line 48, in test_v3_token
  self.servers_v3.list_servers()
File "tempest/services/compute/xml/servers_client.py", line 181, in 
list_servers
  resp, body = self.get(url, self.headers)
File "tempest/common/rest_client.py", line 269, in get
  return self.request('GET', url, headers)
File "tempest/common/rest_client.py", line 394, in request
  resp, resp_body)
File "tempest/common/rest_client.py", line 443, in _error_checker
  resp_body = self._parse_resp(resp_body)
File "tempest/common/rest_client.py", line 519, in _parse_resp
  return xml_to_json(etree.fromstring(body))
File "lxml.etree.pyx", line 2993, in lxml.etree.fromstring 
(src/lxml/lxml.etree.c:63285)
File "parser.pxi", line 1617, in lxml.etree._parseMemoryDocument 
(src/lxml/lxml.etree.c:93571)
File "parser.pxi", line 1495, in lxml.etree._parseDoc 
(src/lxml/lxml.etree.c:92370)
File "parser.pxi", line 1011, in lxml.etree._BaseParser._parseDoc 
(src/lxml/lxml.etree.c:89010)
File "parser.pxi", line 577, in 
lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:84711)
File "parser.pxi", line 676, in lxml.etree._handleParseResult 
(src/lxml/lxml.etree.c:85816)
File "parser.pxi", line 627, in lxml.etree._raiseParseError 
(src/lxml/lxml.etree.c:85308)
  XMLSyntaxError: None
  Ran 2 tests in 2.497s (+0.278s)
  FAILED (id=214, failures=2)

  - run keystone endpoint-delete on endpoints until there are 7 endpoints
  - failing tests should now pass
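
  A common workaround while tokens stay this large is to raise the 8k header
  limit of the eventlet WSGI servers; a hedged sketch (the option name and
  its default vary by service and release):

    [DEFAULT]
    # allow headers larger than the 8192-byte default
    max_header_line = 16384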

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1190149/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1318544] Re: XenServer - Nova-Compute StorageError Waiting for device

2015-07-17 Thread Bob Ball
** Description changed:

  Hi All,
  
  I started building a Openstack cloud based on the new LTS version of
  Ubuntu (14.04). I installed both Control and Compute nodes as VM's on a
  XenServer. I selected the option 'Other Operating System' so this is
  running as HVM.
  
  The cluster is up and running: i can start instances, create storage and
  allocate IP addresses. But the error i got is popping up when the
  instance is in spawning state. The error i attached is from the nova-
  compute node.
  
- I use a IBM storwize backend with Cinder connection using iSCSI. I can
- see the instances being created, the storage being connected but the
- following error is keep coming back.
+ ...
  
- Locally on the XenServer SDB and SDC existed, which leaded to a earlier
- error in which HDB was HDC. I removed SDC locally from the XenServer
- using: echo 1  /sys/block/sdc/device/delete. This changed the HDC
- error message to HDB. The same thing i did for SDB. Both where virtual
- drives from DRAC. But still this message is keep popping up with HDB.
- 
- I read an other article about a similar error, this guy was pointed in
- the direction of a HVM vs PV solution because the VM is communicating
- with the DOM0 via it's kernel. I checked and after installing the
- XenServer tools the UUID is there and is readable.
- 
- While trying the solution above i also focused on the HDB reference,
- this was strange because with all the rewrite options in nova i would
- expect a reference to xvda: xenapi_remap_vbd_dev=false
- 
- Could you guys point me in the right direction or help me solve this
- problem? I think this isn't a mis-configuration because i tried all
- possible configurations and this message keeps popping up.
- 
-  This is the last step towards a working openstack cloud with Ubuntu
- 14.04 and XenServer 6.2 all patches applied.
- 
- 2014-05-12 09:00:30.918 6660 ERROR nova.compute.manager 
[req-595bc50b-e8b2-4de4-8c38-09887c8d4c82 f8609ec1f5254fcfb1fdf6df76876805 
41971b840c7a405d9da5052a2506a1c9] [instance: 
00e2c60e-372b-43d2-a2fd-bcc3628c24a9] Error: Timeout waiting for device hdb to 
be created
- 2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 
00e2c60e-372b-43d2-a2fd-bcc3628c24a9] Traceback (most recent call last):
- 2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 
00e2c60e-372b-43d2-a2fd-bcc3628c24a9]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1311, in 
_build_instance
- 2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 
00e2c60e-372b-43d2-a2fd-bcc3628c24a9] set_access_ip=set_access_ip)
- 2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 
00e2c60e-372b-43d2-a2fd-bcc3628c24a9]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 399, in 
decorated_function
- 2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 
00e2c60e-372b-43d2-a2fd-bcc3628c24a9] return function(self, context, *args, 
**kwargs)
- 2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 
00e2c60e-372b-43d2-a2fd-bcc3628c24a9]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1723, in _spawn
- 2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 
00e2c60e-372b-43d2-a2fd-bcc3628c24a9] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
- 2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 
00e2c60e-372b-43d2-a2fd-bcc3628c24a9]   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__
- 2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 
00e2c60e-372b-43d2-a2fd-bcc3628c24a9] six.reraise(self.type_, self.value, 
self.tb)
- 2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 
00e2c60e-372b-43d2-a2fd-bcc3628c24a9]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1720, in _spawn
- 2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 
00e2c60e-372b-43d2-a2fd-bcc3628c24a9] block_device_info)
- 2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 
00e2c60e-372b-43d2-a2fd-bcc3628c24a9]   File 
/usr/lib/python2.7/dist-packages/nova/virt/xenapi/driver.py, line 230, in 
spawn
- 2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 
00e2c60e-372b-43d2-a2fd-bcc3628c24a9] admin_password, network_info, 
block_device_info)
- 2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 
00e2c60e-372b-43d2-a2fd-bcc3628c24a9]   File 
/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py, line 357, in spawn
- 2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 
00e2c60e-372b-43d2-a2fd-bcc3628c24a9] network_info, block_device_info, 
name_label, rescue)
- 2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 
00e2c60e-372b-43d2-a2fd-bcc3628c24a9]   File 
/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py, line 526, in 
_spawn
- 2014-05-12 09:00:30.918 6660 

[Yahoo-eng-team] [Bug 1475717] [NEW] Security Rules should support VRRP protocol

2015-07-17 Thread German Eichberger
Public bug reported:

We are following http://blog.aaronorosen.com/implementing-high-
availability-instances-with-neutron-using-vrrp/ to set up two service
VMs as an active-standby pair using VRRP for the Octavia project. While
doing so we noticed that all VRRP packets were blocked and the protocol
is not supported by the current security groups. Since this will gain
more momentum with the NFV story, we propose to add this additional
protocol to security groups.
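
As a point of reference, a rule for IP protocol 112 (VRRP) would look like
the following via python-neutronclient, assuming the deployment accepts
numeric protocol values (credentials, endpoint and the group ID are
placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')
    neutron.create_security_group_rule({'security_group_rule': {
        'security_group_id': 'SECURITY-GROUP-UUID',  # placeholder
        'direction': 'ingress',
        'ethertype': 'IPv4',
        'protocol': '112'}})   # VRRP's IP protocol number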

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475717

Title:
  Security Rules should support VRRP protocol

Status in neutron:
  New

Bug description:
  We are following http://blog.aaronorosen.com/implementing-high-
  availability-instances-with-neutron-using-vrrp/ to set up two service
  VMs as an active-standby pair using VRRP for the Octavia project.
  While doing so we noticed that all VRRP packets were blocked and the
  protocol is not supported by the current security groups. Since this
  will gain more momentum with the NFV story, we propose to add this
  additional protocol to security groups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475717/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460222] Re: Add 'labels', a list of opaque strings, to the neutron 'port' object

2015-07-17 Thread Kyle Mestery
Per the RFE decoder ring [1], marking this as Won't Fix since we've
rejected it.

[1] https://review.openstack.org/#/c/202797/

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460222

Title:
  Add 'labels', a list of opaque strings, to the neutron 'port' object

Status in neutron:
  Won't Fix

Bug description:
  *What is being requested*

  This request is to add an attribute 'labels' to the neutron 'port'
  object.  The type of 'labels' is a list of strings.

  *Why is it being requested*

  There are many use cases in which you would like to allow the user to
  provide hints-by-reference to downstream providers of neutron services
  (such as controllers) about the nature of the port and the service it
  should receive.

  In the parlance of the Neutron API Documentation:

  Object: ports
  Parameter: labels
  Style: plain
  Type: xsd:list
  Description: A list of opaque strings to be interpreted as 
   hints-by-reference by the neutron provider

  
  *Examples*

  1)  Indicate the Network Policy that should be applied to the port.  Most 
network policy systems resolve policy based upon the 'Endpoint Group(s)' 
(abbreviated EPGs) an Endpoint (abbreviated EP, think port) is placed in.  
'labels' could be used to indicate EPG membership for a port.  For example:

  {
    "port": {
    "labels": [ "epg:blue", "epg:green" ],
     ...
     }
  }

  2)  Indicate the type of Virtual Network Function (VNF) of the VM
  being deployed on the port.  This would be important for a controller
  rendering Service Function Chains (SFCs) to know so that it knew it
  had additional VNFs of that type to use in the SFCs it was
  constructing.  For example:

  {
    "port": {
    "labels": [ "vnf-type:firewall-3" ],
     ...
     }
  }

  *Who needs it*

  This is needed to assist the OpenDaylight Controller in being able to
  better provide the Policy and SFC services.  However, this mechanism
  is agnostic as to the underlying controller, and also can be used for
  a variety of other useful purposes.  Anywhere that it would be useful
  to pass on hints-by-reference to downstream providers could take
  advantage of it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460222/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1287757] Re: Optimization: Don't prune events on every get

2015-07-17 Thread Morgan Fainberg
** Also affects: keystone/kilo
   Importance: Undecided
   Status: New

** Also affects: keystone/liberty
   Importance: High
 Assignee: Morgan Fainberg (mdrnstm)
   Status: In Progress

** Changed in: keystone/kilo
   Status: New => Triaged

** Changed in: keystone/kilo
   Importance: Undecided => High

** Changed in: keystone/kilo
 Assignee: (unassigned) => Morgan Fainberg (mdrnstm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1287757

Title:
  Optimization:  Don't prune events on every get

Status in Keystone:
  In Progress
Status in Keystone kilo series:
  Triaged
Status in Keystone liberty series:
  In Progress

Bug description:
  _prune_expired_events_and_get always locks the backend. Store the time
  of the oldest event so that the prune process can be skipped if none
  of the events have timed out.

  (decided at keystone midcycle - 2015/07/17) -- MorganFainberg
  The easiest solution is to do the prune on issuance of new revocation event 
instead on the get.
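
  A toy sketch of that idea (illustrative only, not keystone's actual
  revocation backend):

    import time

    class EventStore(object):
        def __init__(self):
            self._events = []          # list of (expires_at, event)
            self._next_expiry = None   # soonest expiry seen, or None

        def add(self, event, expires_at):
            self._events.append((expires_at, event))
            if self._next_expiry is None or expires_at < self._next_expiry:
                self._next_expiry = expires_at

        def get(self):
            # Skip the (locking) prune entirely unless something expired.
            now = time.time()
            if self._next_expiry is not None and self._next_expiry <= now:
                self._events = [(t, e) for t, e in self._events if t > now]
                expiries = [t for t, _ in self._events]
                self._next_expiry = min(expiries) if expiries else None
            return [e for _, e in self._events]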

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1287757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475570] [NEW] Horizon fails getting container lists

2015-07-17 Thread Giuseppe Paterno'
Public bug reported:

Env: Centos7, Kilo install

When a user logs in to horizon, he is able to see the object store.
He can do standard operations, but after a while he gets the error message 
"Error getting container lists" when accessing the object store via horizon. 
Other access (such as glance) is not affected.
The error in the log file somehow refers to authentication:



[Thu Jul 16 13:51:25.567348 2015] [:error] [pid 8912] Deleted Object: 
CentOS-7-x86_64-Minimal-1503-01.iso
[Thu Jul 16 13:53:26.124737 2015] [:error] [pid 8912] REQ: curl -i 
http://swift:8080/v1/AUTH_845391df95974ed7ac02755b493afb05?format=json&limit=1001
 -X GET -H "X-Auth-Token: 00ef836291544eff88ecc32af3d96a28"
[Thu Jul 16 13:53:26.124811 2015] [:error] [pid 8912] RESP STATUS: 401 
Unauthorized
[Thu Jul 16 13:53:26.124882 2015] [:error] [pid 8912] RESP HEADERS: [('date', 
'Thu, 16 Jul 2015 14:51:25 GMT'), ('content-length', '131'), ('content-type', 
'text/html; charset=UTF-8'), ('www-authenticate', 'Swift 
realm="AUTH_845391df95974ed7ac02755b493afb05", Keystone 
uri=\\'http://keystone:5000/v2.0\\''), ('x-trans-id', 
'tx6cd52d254a0c43a8b6cda-0055a7c4ed')]
[Thu Jul 16 13:53:26.124929 2015] [:error] [pid 8912] RESP BODY: 
<html><h1>Unauthorized</h1><p>This server could not verify that you are 
authorized to access the document you requested.</p></html>
[Thu Jul 16 13:53:26.125459 2015] [:error] [pid 8912] Account GET failed: 
http://swift:8080/v1/AUTH_845391df95974ed7ac02755b493afb05?format=json&limit=1001
 401 Unauthorized  [first 60 chars of response] 
<html><h1>Unauthorized</h1><p>This server could not verify t
[Thu Jul 16 13:53:26.125472 2015] [:error] [pid 8912] Traceback (most recent 
call last):
[Thu Jul 16 13:53:26.125475 2015] [:error] [pid 8912]   File 
"/usr/lib/python2.7/site-packages/swiftclient/client.py", line 1261, in _retry
[Thu Jul 16 13:53:26.125478 2015] [:error] [pid 8912] rv = func(self.url, 
self.token, *args, **kwargs)
[Thu Jul 16 13:53:26.125481 2015] [:error] [pid 8912]   File 
"/usr/lib/python2.7/site-packages/swiftclient/client.py", line 474, in 
get_account
[Thu Jul 16 13:53:26.125483 2015] [:error] [pid 8912] end_marker, http_conn)
[Thu Jul 16 13:53:26.125485 2015] [:error] [pid 8912]   File 
"/usr/lib/python2.7/site-packages/swiftclient/client.py", line 509, in 
get_account
[Thu Jul 16 13:53:26.125501 2015] [:error] [pid 8912] 
http_response_content=body)
[Thu Jul 16 13:53:26.125504 2015] [:error] [pid 8912] ClientException: Account 
GET failed: 
http://swift:8080/v1/AUTH_845391df95974ed7ac02755b493afb05?format=json&limit=1001
 401 Unauthorized  [first 60 chars of response] 
<html><h1>Unauthorized</h1><p>This server could not verify t
[Thu Jul 16 13:53:26.125740 2015] [:error] [pid 8912] Recoverable error: 
Account GET failed: 
http://swift:8080/v1/AUTH_845391df95974ed7ac02755b493afb05?format=json&limit=1001
 401 Unauthorized  [first 60 chars of response] 
<html><h1>Unauthorized</h1><p>This server could not verify t
[Thu Jul 16 13:53:26.125971 2015] [:error] [pid 8912] No tenant specified
[Thu Jul 16 13:53:26.125979 2015] [:error] [pid 8912] Traceback (most recent 
call last):
[Thu Jul 16 13:53:26.125982 2015] [:error] [pid 8912]   File 
"/usr/lib/python2.7/site-packages/swiftclient/client.py", line 1253, in _retry
[Thu Jul 16 13:53:26.125985 2015] [:error] [pid 8912] self.url, self.token 
= self.get_auth()
[Thu Jul 16 13:53:26.125987 2015] [:error] [pid 8912]   File 
"/usr/lib/python2.7/site-packages/swiftclient/client.py", line 1227, in get_auth
[Thu Jul 16 13:53:26.125989 2015] [:error] [pid 8912] 
insecure=self.insecure)
[Thu Jul 16 13:53:26.125992 2015] [:error] [pid 8912]   File 
"/usr/lib/python2.7/site-packages/swiftclient/client.py", line 413, in get_auth
[Thu Jul 16 13:53:26.125994 2015] [:error] [pid 8912] raise 
ClientException('No tenant specified')
[Thu Jul 16 13:53:26.125996 2015] [:error] [pid 8912] ClientException: No 
tenant specified
[Thu Jul 16 13:53:26.126167 2015] [:error] [pid 8912] Recoverable error: No 
tenant specified




Glance, which talks to swift, is working fine.
Also, swift from the command line is working fine, though I need to force V2 
in the cmdline (swift -V 2 .).

Any hint?
Thanks,
  Giuseppe

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: horizon-core kilo swift

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1475570

Title:
  Horizon fails getting container lists

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Env: Centos7, Kilo install

  When a user logs in to horizon, he is able to see the object store.
  He can do standard operations, but after a while he gets the error message 
"Error getting container lists" when accessing the object store 

[Yahoo-eng-team] [Bug 1474074] Re: PciDeviceList is not versioned properly in liberty and kilo

2015-07-17 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1474074

Title:
  PciDeviceList is not versioned properly in liberty and kilo

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) kilo series:
  In Progress

Bug description:
  The following commit:

  https://review.openstack.org/#/c/140289/4/nova/objects/pci_device.py

  missed to bump the PciDeviceList version.

  We should do it now (master @ 4bfb094) and backport this to stable
  Kilo as well
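
  For illustration, the kind of bump involved, sketched directly with
  oslo.versionedobjects (version numbers here are illustrative; see the
  review above for the real change):

    from oslo_versionedobjects import base, fields

    @base.VersionedObjectRegistry.register
    class PciDevice(base.VersionedObject):
        VERSION = '1.3'
        fields = {'address': fields.StringField()}

    @base.VersionedObjectRegistry.register
    class PciDeviceList(base.ObjectListBase, base.VersionedObject):
        # When the contained object's version changes, the list's VERSION
        # must be bumped too so RPC peers can negotiate -- the missed step.
        VERSION = '1.2'
        fields = {'objects': fields.ListOfObjectsField('PciDevice')}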

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1474074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456684] Re: [SRU] cloud-init does not know central (eu-central-1) is a direction for ec2 zones

2015-07-17 Thread Ben Howard
** Summary changed:

- does not know central (eu-central-1) is a direction for ec2 zones
+ [SRU] cloud-init does not know central (eu-central-1) is a direction for ec2 
zones

** Also affects: cloud-init (Ubuntu Precise)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Vivid)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1456684

Title:
  [SRU] cloud-init does not know central (eu-central-1) is a direction
  for ec2 zones

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Precise:
  New
Status in cloud-init source package in Trusty:
  New
Status in cloud-init source package in Vivid:
  New

Bug description:
  cloud-init's code that tries to determine if it is in a ec2 region is
  simply unaware of the 'central' direction.

  [Impact]
  Ubuntu instances launched in the eu-central-1 EC2 region will not discover 
region-local mirrors. The user will see a degraded experience as they have to 
interact with mirrors that are further away and under heavier load. They will 
also potentially have to pay for the bandwidth used this way (as will the 
default mirror admins).

  [Test Case]
  Launch an instance in eu-central-1 and examine the apt sources used.  They 
should be of the form eu-central-1.ec2.archive.ubuntu.com instead of 
eu-central-1a.clouds.archive.ubuntu.com.

  [Regression Potential]
  Limited; the changes are limited to the mirror discovery, and if mirror 
discovery fails, we have default mirrors that we will use.
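
  An illustrative sketch of the kind of check involved (not cloud-init's
  exact code):

    import re

    DIRECTIONS = ('central', 'north', 'northeast', 'east', 'southeast',
                  'south', 'southwest', 'west', 'northwest')
    AZ_RE = re.compile(r'^\w\w-(%s)-\d+\w?$' % '|'.join(DIRECTIONS))

    for az in ('eu-central-1a', 'us-east-1b', 'nova'):
        print(az, bool(AZ_RE.match(az)))
    # eu-central-1a True / us-east-1b True / nova False -- without
    # 'central' in the list, eu-central-1 zones fail the match and
    # mirror discovery falls back to the default mirrors.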

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1456684/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475737] [NEW] requirements.txt includes unnecessary oslo.vmware

2015-07-17 Thread Matthew Edmonds
Public bug reported:

oslo_vmware is not referenced in glance python code, yet
requirements.txt includes it. This should either be removed from
requirements entirely, or moved to test-requirements.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1475737

Title:
  requirements.txt includes unnecessary oslo.vmware

Status in Glance:
  New

Bug description:
  oslo_vmware is not referenced in glance python code, yet
  requirements.txt includes it. This should either be removed from
  requirements entirely, or moved to test-requirements.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1475737/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475740] [NEW] For instances spawned by Nova on ESX compute, there is no link in /dev/disk/by-id for SCSI (sdx) devices

2015-07-17 Thread Sunitha K
Public bug reported:

By default VMware doesn't provide the information needed by udev to generate 
/dev/disk/by-id. Hence, for instances spawned by Nova on ESX compute, there is 
no link in /dev/disk/by-id for SCSI (sdx) devices.
The property disk.EnableUUID needs to be enabled by default for instances 
spawned by Nova.
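
Concretely, that is a single guest setting; a one-line sketch of what would
need to land in the instance's .vmx (or be set via extraConfig at spawn
time):

    disk.EnableUUID = "TRUE"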

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475740

Title:
  For instances spawned by Nova on ESX compute, there is no link in
  /dev/disk/by-id for SCSI (sdx) devices

Status in OpenStack Compute (nova):
  New

Bug description:
  By default VMware doesn't provide the information needed by udev to generate 
/dev/disk/by-id. Hence, for instances spawned by Nova on ESX compute, there is 
no link in /dev/disk/by-id for SCSI (sdx) devices.
  The property disk.EnableUUID needs to be enabled by default for instances 
spawned by Nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475740/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475811] [NEW] gate-tempest-dsvm-full-ceph fails to upload image with TypeError: an integer is required

2015-07-17 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/81/173681/5/gate/gate-tempest-dsvm-full-
ceph/c9e0bee/logs/screen-g-api.txt.gz?level=TRACE#_2015-07-17_22_11_21_776

2015-07-17 22:11:21.220 4347 ERROR glance.registry.client.v1.client 
[req-e89cafe3-53ad-4c20-aead-f649429e190c cecb48ef19614b36a4486bd1ff37878a 
8f02d993930b440da69ebd54bb2ff023 - - -] Registry client request GET 
/images/cirros-0.3.4-x86_64-uec-kernel raised NotFound
2015-07-17 22:11:21.220 4347 ERROR glance.registry.client.v1.client Traceback 
(most recent call last):
2015-07-17 22:11:21.220 4347 ERROR glance.registry.client.v1.client   File 
"/opt/stack/new/glance/glance/registry/client/v1/client.py", line 121, in 
do_request
2015-07-17 22:11:21.220 4347 ERROR glance.registry.client.v1.client 
**kwargs)
2015-07-17 22:11:21.220 4347 ERROR glance.registry.client.v1.client   File 
"/opt/stack/new/glance/glance/common/client.py", line 71, in wrapped
2015-07-17 22:11:21.220 4347 ERROR glance.registry.client.v1.client return 
func(self, *args, **kwargs)
2015-07-17 22:11:21.220 4347 ERROR glance.registry.client.v1.client   File 
"/opt/stack/new/glance/glance/common/client.py", line 377, in do_request
2015-07-17 22:11:21.220 4347 ERROR glance.registry.client.v1.client 
headers=copy.deepcopy(headers))
2015-07-17 22:11:21.220 4347 ERROR glance.registry.client.v1.client   File 
"/opt/stack/new/glance/glance/common/client.py", line 88, in wrapped
2015-07-17 22:11:21.220 4347 ERROR glance.registry.client.v1.client return 
func(self, method, url, body, headers)
2015-07-17 22:11:21.220 4347 ERROR glance.registry.client.v1.client   File 
"/opt/stack/new/glance/glance/common/client.py", line 523, in _do_request
2015-07-17 22:11:21.220 4347 ERROR glance.registry.client.v1.client raise 
exception.NotFound(res.read())
2015-07-17 22:11:21.220 4347 ERROR glance.registry.client.v1.client NotFound: 
404 Not Found
2015-07-17 22:11:21.220 4347 ERROR glance.registry.client.v1.client 
2015-07-17 22:11:21.220 4347 ERROR glance.registry.client.v1.client The 
resource could not be found.
2015-07-17 22:11:21.220 4347 ERROR glance.registry.client.v1.client 
2015-07-17 22:11:21.220 4347 ERROR glance.registry.client.v1.client
2015-07-17 22:11:21.220 4347 ERROR glance.registry.client.v1.client 
2015-07-17 22:11:21.776 4347 ERROR glance.api.v1.upload_utils 
[req-2ec74006-5b3c-4ca7-a392-363887c2219c cecb48ef19614b36a4486bd1ff37878a 
8f02d993930b440da69ebd54bb2ff023 - - -] Failed to upload image 
9cdde494-924b-42e3-aaad-1e9f0144fad8
2015-07-17 22:11:21.776 4347 ERROR glance.api.v1.upload_utils Traceback (most 
recent call last):
2015-07-17 22:11:21.776 4347 ERROR glance.api.v1.upload_utils   File 
"/opt/stack/new/glance/glance/api/v1/upload_utils.py", line 112, in 
upload_data_to_store
2015-07-17 22:11:21.776 4347 ERROR glance.api.v1.upload_utils 
context=req.context)
2015-07-17 22:11:21.776 4347 ERROR glance.api.v1.upload_utils   File 
"/usr/local/lib/python2.7/dist-packages/glance_store/backend.py", line 340, in 
store_add_to_backend
2015-07-17 22:11:21.776 4347 ERROR glance.api.v1.upload_utils 
context=context)
2015-07-17 22:11:21.776 4347 ERROR glance.api.v1.upload_utils   File 
"/usr/local/lib/python2.7/dist-packages/glance_store/capabilities.py", line 
226, in op_checker
2015-07-17 22:11:21.776 4347 ERROR glance.api.v1.upload_utils return 
store_op_fun(store, *args, **kwargs)
2015-07-17 22:11:21.776 4347 ERROR glance.api.v1.upload_utils   File 
"/usr/local/lib/python2.7/dist-packages/glance_store/_drivers/rbd.py", line 
352, in add
2015-07-17 22:11:21.776 4347 ERROR glance.api.v1.upload_utils image_size, 
order)
2015-07-17 22:11:21.776 4347 ERROR glance.api.v1.upload_utils   File 
"/usr/local/lib/python2.7/dist-packages/glance_store/_drivers/rbd.py", line 
271, in _create_image
2015-07-17 22:11:21.776 4347 ERROR glance.api.v1.upload_utils 
features=features)
2015-07-17 22:11:21.776 4347 ERROR glance.api.v1.upload_utils   File 
"/usr/lib/python2.7/dist-packages/rbd.py", line 209, in create
2015-07-17 22:11:21.776 4347 ERROR glance.api.v1.upload_utils 
c_uint64(features),
2015-07-17 22:11:21.776 4347 ERROR glance.api.v1.upload_utils TypeError: an 
integer is required
2015-07-17 22:11:21.776 4347 ERROR glance.api.v1.upload_utils 

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVHlwZUVycm9yOiBhbiBpbnRlZ2VyIGlzIHJlcXVpcmVkXCIgQU5EIHRhZ3M6XCJzY3JlZW4tZy1hcGkudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzcxNzUyNjc1Mzd9

Started today, 156 hits, check and gate, all failures.
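
The failing frame is a ctypes cast, which suggests the features value reached
the rbd binding as a string rather than an int; a minimal reproduction of the
error, assuming Python 2 (where "an integer is required" is the TypeError
wording):

    from ctypes import c_uint64

    c_uint64(61)    # fine: an integer feature bitmask
    c_uint64('61')  # TypeError: an integer is required

If that is the cause, casting the configured feature set with int() before it
is handed to rbd's create() would avoid it.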

** Affects: glance
 Importance: Critical
 Assignee: Matt Riedemann (mriedem)
 Status: Confirmed

** Changed in: glance
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1475811

Title:
  gate-tempest-dsvm-full-ceph fails to 

[Yahoo-eng-team] [Bug 1468564] Re: remove unnecessary executable bit of the source files

2015-07-17 Thread Ren Qiaowei
** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468564

Title:
  remove unnecessary executable bit of the source files

Status in Glance:
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Keystone:
  In Progress
Status in neutron:
  In Progress

Bug description:
  A bunch of glance source code files are marked as executable, which is
  not appropriate, e.g. glance/search/api/__init__.py, etc/glance-
  search-paste.ini, etc.

  We need to chmod -x them.
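
  A small generic sketch for locating the offenders (not tied to any one
  project):

    import os
    import stat

    for root, _, files in os.walk('.'):
        for name in files:
            path = os.path.join(root, name)
            if name.endswith('.py') and os.stat(path).st_mode & stat.S_IXUSR:
                print(path)  # candidate for chmod -x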

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1468564/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474933] Re: Nova compute interprets rabbitmq passwords

2015-07-17 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: New => Invalid

** Changed in: oslo.messaging
   Status: New => Invalid

** Changed in: nova
   Status: Invalid => In Progress

** Changed in: oslo.messaging
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1474933

Title:
  Nova compute interprets rabbitmq passwords

Status in OpenStack Compute (nova):
  In Progress
Status in oslo.messaging:
  In Progress

Bug description:
  Using the kilo rpms - openstack-nova-compute-2015.1.0-3.el7.noarch

  If the rabbit_password (set in [Default] section - this is how the
  Ansible role I am using sets it) includes a slash character - / -
  then the service fails to start.

  In the log - /var/log/nova/nova-compute.log
  the following error is seen:-

  CRITICAL nova [req-72c0fe29-f2d6-4164-95de-e9e8f50fa7bc - - - - -]
  ValueError: invalid literal for int() with base 10: 'prefix'

  where prefix is the first part of the password - ie
rabbit_password = 'prefix/suffix'

  Traceback enclosed below.

  If the Rabbit password is changed to not include a / then the service
  starts up OK

  This could have security implications, but I am not currently flagging
  it as a security issue
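
  kombu parses the broker URL, so reserved characters in the password have
  to be percent-encoded; a hedged workaround sketch:

    from six.moves.urllib import parse

    password = 'prefix/suffix'
    print(parse.quote(password, safe=''))  # prefix%2Fsuffix
    # Use the encoded form in rabbit_password / the transport URL so the
    # URL parser does not treat '/' as the start of the virtual host.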

  2015-07-15 16:28:50.824 9670 TRACE nova Traceback (most recent call last):
  2015-07-15 16:28:50.824 9670 TRACE nova   File "/usr/bin/nova-compute", line 
10, in <module>
  2015-07-15 16:28:50.824 9670 TRACE nova sys.exit(main())
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/cmd/compute.py", line 72, in main
  2015-07-15 16:28:50.824 9670 TRACE nova 
db_allowed=CONF.conductor.use_local)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 277, in create
  2015-07-15 16:28:50.824 9670 TRACE nova db_allowed=db_allowed)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 157, in __init__
  2015-07-15 16:28:50.824 9670 TRACE nova 
self.conductor_api.wait_until_ready(context.get_admin_context())
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/conductor/api.py", line 292, in 
wait_until_ready
  2015-07-15 16:28:50.824 9670 TRACE nova timeout=timeout)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/baserpc.py", line 62, in ping
  2015-07-15 16:28:50.824 9670 TRACE nova return cctxt.call(context, 
'ping', arg=arg_p)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 156, in 
call
  2015-07-15 16:28:50.824 9670 TRACE nova retry=self.retry)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in 
_send
  2015-07-15 16:28:50.824 9670 TRACE nova timeout=timeout, retry=retry)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
350, in send
  2015-07-15 16:28:50.824 9670 TRACE nova retry=retry)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
312, in _send
  2015-07-15 16:28:50.824 9670 TRACE nova msg.update({'_reply_q': 
self._get_reply_q()})
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
283, in _get_reply_q
  2015-07-15 16:28:50.824 9670 TRACE nova conn = 
self._get_connection(rpc_amqp.PURPOSE_LISTEN)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
274, in _get_connection
  2015-07-15 16:28:50.824 9670 TRACE nova purpose=purpose)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py", line 121, 
in __init__
  2015-07-15 16:28:50.824 9670 TRACE nova self.connection = 
connection_pool.create(purpose)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py", line 93, in 
create
  2015-07-15 16:28:50.824 9670 TRACE nova return 
self.connection_cls(self.conf, self.url, purpose)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 
664, in __init__
  2015-07-15 16:28:50.824 9670 TRACE nova 
heartbeat=self.driver_conf.heartbeat_timeout_threshold)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 180, in __init__
  2015-07-15 16:28:50.824 9670 TRACE nova params.update(parse_url(hostname))
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
"/usr/lib/python2.7/site-packages/kombu/utils/url.py", line 34, in parse_url
  2015-07-15 

[Yahoo-eng-team] [Bug 1468564] Re: remove unnecessary executable bit of the source files

2015-07-17 Thread Ren Qiaowei
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New = In Progress

** Changed in: neutron
 Assignee: (unassigned) = Ren Qiaowei (qiaowei-ren)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468564

Title:
  remove unnecessary executable bit of the source files

Status in Glance:
  Fix Committed
Status in Keystone:
  In Progress
Status in neutron:
  In Progress

Bug description:
  A bunch of glance source code files are marked as executable, which
  is not appropriate, e.g. glance/search/api/__init__.py and
  etc/glance-search-paste.ini.

  We need to chmod -x them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1468564/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474933] Re: Nova compute interprets rabbitmq passwords

2015-07-17 Thread Davanum Srinivas (DIMS)
This should have been marked Invalid only in Nova, as the correct fix
is in oslo.messaging.

** Changed in: nova
   Status: In Progress = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1474933

Title:
  Nova compute interprets rabbitmq passwords

Status in OpenStack Compute (nova):
  Invalid
Status in oslo.messaging:
  In Progress

Bug description:
  Using the kilo rpms - openstack-nova-compute-2015.1.0-3.el7.noarch

  If the rabbit_password (set in [Default] section - this is how the
  Ansible role I am using sets it) includes a slash character - / -
  then the service fails to start.

  In the log (/var/log/nova/nova-compute.log)
  the following error is seen:

  CRITICAL nova [req-72c0fe29-f2d6-4164-95de-e9e8f50fa7bc - - - - -]
  ValueError: invalid literal for int() with base 10: 'prefix'

  where prefix is the first part of the password, i.e.
rabbit_password = 'prefix/suffix'

  Traceback enclosed below.

  If the Rabbit password is changed so that it does not include a /,
  the service starts up OK.

  This could have security implications, but I am not currently flagging
  it as a security issue

  2015-07-15 16:28:50.824 9670 TRACE nova Traceback (most recent call last):
  2015-07-15 16:28:50.824 9670 TRACE nova   File /usr/bin/nova-compute, line 
10, in module
  2015-07-15 16:28:50.824 9670 TRACE nova sys.exit(main())
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/nova/cmd/compute.py, line 72, in main
  2015-07-15 16:28:50.824 9670 TRACE nova 
db_allowed=CONF.conductor.use_local)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/nova/service.py, line 277, in create
  2015-07-15 16:28:50.824 9670 TRACE nova db_allowed=db_allowed)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/nova/service.py, line 157, in __init__
  2015-07-15 16:28:50.824 9670 TRACE nova 
self.conductor_api.wait_until_ready(context.get_admin_context())
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/nova/conductor/api.py, line 292, in 
wait_until_ready
  2015-07-15 16:28:50.824 9670 TRACE nova timeout=timeout)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/nova/baserpc.py, line 62, in ping
  2015-07-15 16:28:50.824 9670 TRACE nova return cctxt.call(context, 
'ping', arg=arg_p)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py, line 156, in 
call
  2015-07-15 16:28:50.824 9670 TRACE nova retry=self.retry)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/oslo_messaging/transport.py, line 90, in 
_send
  2015-07-15 16:28:50.824 9670 TRACE nova timeout=timeout, retry=retry)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
350, in send
  2015-07-15 16:28:50.824 9670 TRACE nova retry=retry)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
312, in _send
  2015-07-15 16:28:50.824 9670 TRACE nova msg.update({'_reply_q': 
self._get_reply_q()})
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
283, in _get_reply_q
  2015-07-15 16:28:50.824 9670 TRACE nova conn = 
self._get_connection(rpc_amqp.PURPOSE_LISTEN)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
274, in _get_connection
  2015-07-15 16:28:50.824 9670 TRACE nova purpose=purpose)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py, line 121, 
in __init__
  2015-07-15 16:28:50.824 9670 TRACE nova self.connection = 
connection_pool.create(purpose)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py, line 93, in 
create
  2015-07-15 16:28:50.824 9670 TRACE nova return 
self.connection_cls(self.conf, self.url, purpose)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py, line 
664, in __init__
  2015-07-15 16:28:50.824 9670 TRACE nova 
heartbeat=self.driver_conf.heartbeat_timeout_threshold)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/kombu/connection.py, line 180, in __init__
  2015-07-15 16:28:50.824 9670 TRACE nova params.update(parse_url(hostname))
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/kombu/utils/url.py, line 34, in parse_url
  2015-07-15 16:28:50.824 9670 TRACE nova scheme, host, port, user, 
password, path, query = _parse_url(url)
  

[Yahoo-eng-team] [Bug 1475786] [NEW] Cannot ping to a same subnet VM via floating IP

2015-07-17 Thread Kahou Lei
Public bug reported:

Suppose I have two VMs running on the same subnet, each assigned a
floating IP (see attached image). I am using the nova-network model.

I cannot get ping to work from one VM to another VM via the floating
IP. Pinging a VM that resides on another subnet via its floating IP
works fine.

I did some investigation; it looks like the packet is being dropped
after the PREROUTING rules. Here is the iptables TRACE log:

Jul 17 10:15:40 localhost kernel: [ 1846.629048] TRACE: raw:PREROUTING:rule:2 
IN=br100 OUT= PHYSIN=vlan100 MAC=fa:16:3e:c2:b9:7d:fa:16:3e:dd:e7:c9:08:00 
SRC=10.0.0.3 DST=172.24.4.2 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=3620 DF 
PROTO=ICMP TYPE=8 CODE=0 ID=8705 SEQ=0 
Jul 17 10:15:40 localhost kernel: [ 1846.629055] TRACE: raw:PREROUTING:policy:3 
IN=br100 OUT= PHYSIN=vlan100 MAC=fa:16:3e:c2:b9:7d:fa:16:3e:dd:e7:c9:08:00 
SRC=10.0.0.3 DST=172.24.4.2 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=3620 DF 
PROTO=ICMP TYPE=8 CODE=0 ID=8705 SEQ=0 
Jul 17 10:15:40 localhost kernel: [ 1846.629063] TRACE: 
mangle:PREROUTING:policy:1 IN=br100 OUT= PHYSIN=vlan100 
MAC=fa:16:3e:c2:b9:7d:fa:16:3e:dd:e7:c9:08:00 SRC=10.0.0.3 DST=172.24.4.2 
LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=3620 DF PROTO=ICMP TYPE=8 CODE=0 ID=8705 
SEQ=0 
Jul 17 10:15:40 localhost kernel: [ 1846.629068] TRACE: nat:PREROUTING:rule:1 
IN=br100 OUT= PHYSIN=vlan100 MAC=fa:16:3e:c2:b9:7d:fa:16:3e:dd:e7:c9:08:00 
SRC=10.0.0.3 DST=172.24.4.2 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=3620 DF 
PROTO=ICMP TYPE=8 CODE=0 ID=8705 SEQ=0 
Jul 17 10:15:40 localhost kernel: [ 1846.629074] TRACE: 
nat:nova-network-PREROUTING:rule:3 IN=br100 OUT= PHYSIN=vlan100 
MAC=fa:16:3e:c2:b9:7d:fa:16:3e:dd:e7:c9:08:00 SRC=10.0.0.3 DST=172.24.4.2 
LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=3620 DF PROTO=ICMP TYPE=8 CODE=0 ID=8705 
SEQ=0 

And according to the iptables counters, nothing gets incremented after
the PREROUTING rule:

sudo iptables -t nat -L -v -n 
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source   destination 

184 nova-network-PREROUTING  all  --  *  *   0.0.0.0/0  
  0.0.0.0/0   
0 0 nova-api-PREROUTING  all  --  *  *   0.0.0.0/0
0.0.0.0/0   

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source   destination 


Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source   destination 

0 0 nova-network-OUTPUT  all  --  *  *   0.0.0.0/0
0.0.0.0/0   
0 0 nova-api-OUTPUT  all  --  *  *   0.0.0.0/0
0.0.0.0/0   

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source   destination 

0 0 nova-network-POSTROUTING  all  --  *  *   0.0.0.0/0 
   0.0.0.0/0   
0 0 nova-api-POSTROUTING  all  --  *  *   0.0.0.0/0
0.0.0.0/0   
0 0 nova-postrouting-bottom  all  --  *  *   0.0.0.0/0  
  0.0.0.0/0   

Chain nova-api-OUTPUT (1 references)
 pkts bytes target prot opt in out source   destination 


Chain nova-api-POSTROUTING (1 references)
 pkts bytes target prot opt in out source   destination 


Chain nova-api-PREROUTING (1 references)
 pkts bytes target prot opt in out source   destination 


Chain nova-api-float-snat (1 references)
 pkts bytes target prot opt in out source   destination 


Chain nova-api-snat (1 references)
 pkts bytes target prot opt in out source   destination 

0 0 nova-api-float-snat  all  --  *  *   0.0.0.0/0
0.0.0.0/0   

Chain nova-network-OUTPUT (1 references)
 pkts bytes target prot opt in out source   destination 

0 0 DNAT   all  --  *  *   0.0.0.0/0172.24.4.1  
 to:11.0.0.3
0 0 DNAT   all  --  *  *   0.0.0.0/0172.24.4.2  
 to:10.0.0.4
0 0 DNAT   all  --  *  *   0.0.0.0/0172.24.4.3  
 to:10.0.0.3
0 0 DNAT   all  --  *  *   0.0.0.0/0172.24.4.4  
 to:11.0.0.4

Chain nova-network-POSTROUTING (1 references)
 pkts bytes target prot opt in out source   destination 

0 0 ACCEPT all  --  *  *   10.0.0.0/24  
192.168.62.100  
0 0 ACCEPT all  --  *  *   10.0.0.0/24  10.0.0.0/24 
 ! ctstate DNAT
0 0 ACCEPT all  --  *  *   11.0.0.0/24  
192.168.62.100  
0 0 ACCEPT all  --  *  *   11.0.0.0/24  

[Yahoo-eng-team] [Bug 1475804] [NEW] check_migration is broken for branch-less migration directories

2015-07-17 Thread Ihar Hrachyshka
Public bug reported:

I3823900bc5aaf7757c37edb804027cf4d9c757ab introduced support for multi-
branch migration directories in neutron-db-manage. That broke
check_migration for projects without multiple branches. The tool
should properly handle both types of directories for the foreseeable
future.
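
A rough sketch of the kind of guard check_migration needs, using
alembic's ScriptDirectory (the expand/contract branch labels and the
validation itself are assumptions here, not the final fix):

    from alembic.script import ScriptDirectory

    def validate_heads(config):
        script = ScriptDirectory.from_config(config)
        heads = script.get_heads()
        if len(heads) == 1:
            return  # branch-less migration directory: nothing to check
        labels = set()
        for head in heads:
            labels |= set(script.get_revision(head).branch_labels)
        if labels != {'expand', 'contract'}:
            raise RuntimeError('unexpected migration branches: %s' % labels)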

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475804

Title:
  check_migration is broken for branch-less migration directories

Status in neutron:
  New

Bug description:
  I3823900bc5aaf7757c37edb804027cf4d9c757ab introduced support for
  multi-branch migration directories in neutron-db-manage. That broke
  check_migration for projects without multiple branches. The tool
  should properly handle both types of directories for the foreseeable
  future.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475804/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475792] [NEW] Change Neutron so that it can auto-allocate networks

2015-07-17 Thread Brian Haley
Public bug reported:

As part of the "get me a network" work outlined in
https://review.openstack.org/#/c/196803/ we need to be able to auto-
allocate Neutron networks for tenants.  This is an RFE for those
changes.

The current Neutron v2 api calls that Nova makes to neutron live in
nova/network/neutronv2/api.py - specifically in
_get_available_networks().  In the 'nova boot' case where no network has
been specified, it makes two calls to neutron:

# (1) Retrieve non-public network list owned by the tenant.
search_opts = {'tenant_id': project_id, 'shared': False}
nets = neutron.list_networks(**search_opts).get('networks', [])
# (2) Retrieve public network list.
search_opts = {'shared': True}
nets += neutron.list_networks(**search_opts).get('networks', []) 

The first will get any existing networks with this tenant_id, the second
any shared networks the admin has pre-configured.  If nothing exists,
the boot will error out and fail.

I'm proposing a new API, tentatively called retrieve_networks, that,
while similar to list_networks, can also be given additional
information, such as a flag parameter that signals neutron to auto-
allocate a network and return it to the caller.  This will be created
as an extension, so that a caller can determine whether it is supported
before calling it (or fall back to the old method).

The arguments to it can either be similar to list_networks, or
something like { ids: [], tenant_id: None, flags: [] }, where flags can
be one or more of the following values (a caller-side sketch follows
the list):

SHARED - return any shared networks
ALLOCATE - if no network exists, auto-allocate one based on config settings
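
Purely as illustration, what Nova's side of this might look like
(retrieve_networks and the flag names are this RFE's tentative names,
not an existing neutronclient API):

    SHARED = 'SHARED'
    ALLOCATE = 'ALLOCATE'

    def get_or_allocate_networks(neutron, project_id):
        # Hypothetical call into the proposed extension: return the
        # tenant's networks, auto-allocating one if nothing exists.
        body = {'ids': [], 'tenant_id': project_id,
                'flags': [SHARED, ALLOCATE]}
        return neutron.retrieve_networks(**body).get('networks', [])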

This new API call would only need to be used from the call to
_get_available_networks() in allocate_for_instance(), and not from the
others.

This could either be a single-step process, where a single POST is done,
or a two-step process, such that a GET is used first.  We need to ask
the API working group what the current recommendation would be.  An
alternative approach would be to put this logic in a library that Nova
can call, rather than baking it into Nova.

The neutron configuration will be done in a new database table, such
that updating config files and restarting services are not required.
Initially, this will be just a few variables (a hypothetical model
sketch follows the list):

 - network name to use (private_network)
 - subnet name to use (private_subnet)
 - subnet cidr to use (10.0.0.0/24)
 - router name to use (private_router)
 - external network to attach router to (must be a UUID ?)
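
As a strawman only, such a table could look roughly like this in
SQLAlchemy (all names and types are hypothetical; no schema has been
agreed yet):

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class AutoAllocateDefaults(Base):
        # Hypothetical table and column names, mirroring the list above.
        __tablename__ = 'auto_allocate_defaults'
        id = sa.Column(sa.Integer, primary_key=True)
        network_name = sa.Column(sa.String(255), default='private_network')
        subnet_name = sa.Column(sa.String(255), default='private_subnet')
        subnet_cidr = sa.Column(sa.String(64), default='10.0.0.0/24')
        router_name = sa.Column(sa.String(255), default='private_router')
        external_network_id = sa.Column(sa.String(36))  # UUID of ext net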

This information (names and the CIDR range for the initial subnet) can
be the same for every tenant, since these are private networks and
overlapping IPs are allowed in this case.  The recommendation would be
to use an address range in the RFC 1918 space unless otherwise
specified.

A future enhancement could be to use subnet_pools for this.

In order to eliminate duplicate default networks being created, the
database layer MUST use some sort of distributed locking (based on
tenant_id) such that simultaneous calls to different neutron API servers
for this new resource do not both succeed.  The preferred outcome is for
the second (and subsequent) calls to block, and return the network
allocated by the first.
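
One common way to get that behaviour without an external lock manager
is to lean on a UNIQUE constraint and treat the duplicate-entry error
as losing the race; a hedged sketch (DefaultNetwork is a hypothetical
model with a UNIQUE constraint on tenant_id):

    from sqlalchemy import exc as sa_exc

    def get_or_create_default_network(session, tenant_id):
        try:
            net = DefaultNetwork(tenant_id=tenant_id)
            session.add(net)
            session.flush()  # raises IntegrityError for the loser
            return net
        except sa_exc.IntegrityError:
            session.rollback()
            # Lost the race: return the network the winner created.
            return (session.query(DefaultNetwork)
                    .filter_by(tenant_id=tenant_id).one())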

So that it's all in one place, the details that still need some ironing
out are:

1) Get recommendation from API working group
2) DB schema
3) DB locking
4) Buy-in from Nova, as this affects the nova-neutron API
5) Type of beer desired for first landed patch :)  It's Friday here people!

Please feel free to comment!

** Affects: neutron
 Importance: Undecided
 Assignee: Brian Haley (brian-haley)
 Status: New


** Tags: get-me-a-network rfe

** Changed in: neutron
 Assignee: (unassigned) = Brian Haley (brian-haley)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475792

Title:
  Change Neutron so that it can auto-allocate networks

Status in neutron:
  New

Bug description:
  As part of the "get me a network" work outlined in
  https://review.openstack.org/#/c/196803/ we need to be able to auto-
  allocate Neutron networks for tenants.  This is an RFE for those
  changes.

  The current Neutron v2 api calls that Nova makes to neutron live in
  nova/network/neutronv2/api.py - specifically in
  _get_available_networks().  In the 'nova boot' case where no network
  has been specified, it makes two calls to neutron:

  # (1) Retrieve non-public network list owned by the tenant.
  search_opts = {'tenant_id': project_id, 'shared': False}
  nets = neutron.list_networks(**search_opts).get('networks', [])
  # (2) Retrieve public network list.
  search_opts = {'shared': True}
  nets += neutron.list_networks(**search_opts).get('networks', []) 

  The first will get any existing networks with this tenant_id, the
  second 

[Yahoo-eng-team] [Bug 1475796] Re: using pysaml2 version 3.0.0 breaks keystone in kilo release 2015.1.0

2015-07-17 Thread Dolph Mathews
** Also affects: keystone/kilo
   Importance: Undecided
   Status: New

** Tags removed: kilo-backport-potential

** Changed in: keystone/kilo
   Status: New = Triaged

** Changed in: keystone/kilo
   Importance: Undecided = High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1475796

Title:
  using pysaml2 version 3.0.0 breaks keystone in kilo release 2015.1.0

Status in Keystone:
  New
Status in Keystone kilo series:
  Triaged

Bug description:
  pysaml2 version 3.0.0 is a major change, as specified in [1]:
  2) All parts of the package are now collected in one module. This is a
breaking change compared to earlier releases, hence the major version
change.

  Running keystone release 2015.1.0 with python package pysaml2 version
3.0.0 fails with the following error:
File /usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py, line 22, 
in import_string
  return pkg_resources.EntryPoint.parse(x= + s).load(False)
File /usr/lib/python2.7/site-packages/pkg_resources/__init__.py, line 
2355, in load
  return self.resolve()
File /usr/lib/python2.7/site-packages/pkg_resources/__init__.py, line 
2361, in resolve
  module = __import__(self.module_name, fromlist=['__name__'], level=0)
File 
/usr/lib/python2.7/site-packages/keystone/contrib/federation/routers.py, line 
17, in module
  from keystone.contrib.federation import controllers
File 
/usr/lib/python2.7/site-packages/keystone/contrib/federation/controllers.py, 
line 29, in module
  from keystone.contrib.federation import idp as keystone_idp
File /usr/lib/python2.7/site-packages/keystone/contrib/federation/idp.py, 
line 29, in module
  import xmldsig
  ImportError: No module named xmldsig

  
  This is due to the new location of the xmldsig module:
  xmldsig -> saml2/xmldsig
  done in commit [2].

  Possible fixes are:

  1) require pysaml2 version < 3.0.0
  2) cherry-pick patch from keystone master branch with the proper fix [3]

  
  [1] - https://github.com/rohe/pysaml2/releases/tag/3.0.0
  [2] - 
https://github.com/rohe/pysaml2/commit/9af3252035484f4a8c624eba0f35b68280d43fd2
  [3] - 
https://github.com/openstack/keystone/commit/c90dd3a0f8280e28bbbff691c0ae27aff736658a

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1475796/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440762] Re: Rebuild an instance with attached volume fails

2015-07-17 Thread Matt Riedemann
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Importance: Undecided = High

** Changed in: nova/kilo
   Importance: Undecided = High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1440762

Title:
  Rebuild an instance with attached volume fails

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) juno series:
  New
Status in OpenStack Compute (nova) kilo series:
  New
Status in openstack-ansible:
  New

Bug description:
  When trying to rebuild an instance with an attached volume, it fails
  with the following errors:

  2015-02-04 08:41:27.477 22000 TRACE oslo.messaging.rpc.dispatcher 
libvirtError: Failed to terminate process 22913 with SIGKILL: Device or 
resource busy
  2015-02-04 08:41:27.477 22000 TRACE oslo.messaging.rpc.dispatcher
  <180>Feb 4 08:43:12 node-2 nova-compute Periodic task is updating the host 
stats, it is trying to get disk info for instance-0003, but the backing 
volume block device was removed by concurrent operations such as resize. Error: 
No volume Block Device Mapping at path: 
/dev/disk/by-path/ip-192.168.0.4:3260-iscsi-iqn.2010-10.org.openstack:volume-82ba5653-3e07-4f0f-b44d-a946f4dedde9-lun-1
  <182>Feb 4 08:43:13 node-2 nova-compute VM Stopped (Lifecycle Event)

  The full log of rebuild process is here:
  http://paste.openstack.org/show/166892/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1440762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475787] [NEW] Cells compat support for Juno is broken in Kilo

2015-07-17 Thread Mathieu Gagné
Public bug reported:

All calls related to services and hypervisors management are broken when
running a Kilo API cell with a Juno compute cell.

The Kilo API cell expects responses from the Juno compute cell to be
objects while, in fact, they are still dicts.
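
A hedged sketch of the kind of compat shim the API cell needs (the
helper is illustrative only; it assumes nova's usual _from_db_object
pattern rather than reproducing the actual fix):

    from nova import objects

    def ensure_service_object(context, resp):
        # Accept either a plain dict (Juno compute cell) or a Service
        # object (Kilo cell) and always hand back an object.
        if isinstance(resp, dict):
            return objects.Service._from_db_object(
                context, objects.Service(), resp)
        return resp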

** Affects: nova
 Importance: Undecided
 Assignee: Mathieu Gagné (mgagne)
 Status: In Progress


** Tags: cells kilo-backport-potential

** Description changed:

- ll calls related to services and hypervisors management are broken when
+ All calls related to services and hypervisors management are broken when
  running a Kilo API cell with a Juno compute cell.
  
  The Kilo API cell expects responses from the Juno compute cell to be
  objects while in fact, their are still dict.

** Changed in: nova
 Assignee: (unassigned) = Mathieu Gagné (mgagne)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475787

Title:
  Cells compat support for Juno is broken in Kilo

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  All calls related to services and hypervisors management are broken
  when running a Kilo API cell with a Juno compute cell.

  The Kilo API cell expects responses from the Juno compute cell to be
  objects while, in fact, they are still dicts.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475787/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475796] [NEW] using pysaml2 version 3.0.0 breaks keystone in kilo release 2015.1.0

2015-07-17 Thread Marcos Simental
Public bug reported:

pysaml2 version 3.0.0 is a major change, as specified in [1]:
2) All parts of the package are now collected in one module. This is a
breaking change compared to earlier releases, hence the major version
change.

Running keystone release 2015.1.0 with python package pysaml2 version
3.0.0 fails with the following error:
  File /usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py, line 22, in 
import_string
return pkg_resources.EntryPoint.parse(x= + s).load(False)
  File /usr/lib/python2.7/site-packages/pkg_resources/__init__.py, line 2355, 
in load
return self.resolve()
  File /usr/lib/python2.7/site-packages/pkg_resources/__init__.py, line 2361, 
in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File 
/usr/lib/python2.7/site-packages/keystone/contrib/federation/routers.py, line 
17, in module
from keystone.contrib.federation import controllers
  File 
/usr/lib/python2.7/site-packages/keystone/contrib/federation/controllers.py, 
line 29, in module
from keystone.contrib.federation import idp as keystone_idp
  File /usr/lib/python2.7/site-packages/keystone/contrib/federation/idp.py, 
line 29, in module
import xmldsig
ImportError: No module named xmldsig


This is due to the new location of the xmldsig module:
xmldsig -> saml2/xmldsig
done in commit [2].

Possible fixes are:

1) require pysaml2 version < 3.0.0
2) cherry-pick patch from keystone master branch with the proper fix [3]


[1] - https://github.com/rohe/pysaml2/releases/tag/3.0.0
[2] - 
https://github.com/rohe/pysaml2/commit/9af3252035484f4a8c624eba0f35b68280d43fd2
[3] - 
https://github.com/openstack/keystone/commit/c90dd3a0f8280e28bbbff691c0ae27aff736658a
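
For illustration, a compatibility import that tolerates both layouts
would look roughly like this (a sketch; the actual master fix [3] may
simply switch to the new path):

    try:
        from saml2 import xmldsig  # new location in pysaml2 >= 3.0.0
    except ImportError:
        import xmldsig  # top-level module in pysaml2 < 3.0.0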

** Affects: keystone
 Importance: Undecided
 Assignee: Marcos Simental (mrkzmrkz)
 Status: New


** Tags: kilo-backport-potential

** Changed in: keystone
 Assignee: (unassigned) = Marcos Simental (mrkzmrkz)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1475796

Title:
  using pysaml2 version 3.0.0 breaks keystone in kilo release 2015.1.0

Status in Keystone:
  New

Bug description:
  pysaml2 version 3.0.0 is a major change, as specified in [1]:
  2) All parts of the package are now collected in one module. This is a
breaking change compared to earlier releases, hence the major version
change.

  Running keystone release 2015.1.0 with python package pysaml2 version
3.0.0 fails with the following error:
File /usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py, line 22, 
in import_string
  return pkg_resources.EntryPoint.parse(x= + s).load(False)
File /usr/lib/python2.7/site-packages/pkg_resources/__init__.py, line 
2355, in load
  return self.resolve()
File /usr/lib/python2.7/site-packages/pkg_resources/__init__.py, line 
2361, in resolve
  module = __import__(self.module_name, fromlist=['__name__'], level=0)
File 
/usr/lib/python2.7/site-packages/keystone/contrib/federation/routers.py, line 
17, in module
  from keystone.contrib.federation import controllers
File 
/usr/lib/python2.7/site-packages/keystone/contrib/federation/controllers.py, 
line 29, in module
  from keystone.contrib.federation import idp as keystone_idp
File /usr/lib/python2.7/site-packages/keystone/contrib/federation/idp.py, 
line 29, in module
  import xmldsig
  ImportError: No module named xmldsig

  
  This is due to the new location of the xmldsig module:
  xmldsig -> saml2/xmldsig
  done in commit [2].

  Possible fixes are:

  1) require pysaml2 version < 3.0.0
  2) cherry-pick patch from keystone master branch with the proper fix [3]

  
  [1] - https://github.com/rohe/pysaml2/releases/tag/3.0.0
  [2] - 
https://github.com/rohe/pysaml2/commit/9af3252035484f4a8c624eba0f35b68280d43fd2
  [3] - 
https://github.com/openstack/keystone/commit/c90dd3a0f8280e28bbbff691c0ae27aff736658a

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1475796/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp