[Yahoo-eng-team] [Bug 1464178] [NEW] A hostname change breaks neutron-openvswitch-agent / neutron-server tunneling updates.

2015-06-11 Thread Miguel Angel Ajo
Public bug reported:

When using tunnelling, if one of the hosts changes its hostname and then
tries to sync tunnels with neutron-server, an exception is thrown due to
an unnecessary constraint, breaking the network.

Hostname changes are something neutron-server should be able to survive.
A log warning is probably enough, and the old hostname endpoint should be
deleted.
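
A minimal sketch of the suggested behaviour (not the actual Neutron code;
the driver helper names are hypothetical stand-ins for whatever the tunnel
type driver exposes):

    import logging

    LOG = logging.getLogger(__name__)

    def tolerant_tunnel_sync(driver, tunnel_ip, host):
        """Register (tunnel_ip, host), tolerating a hostname change."""
        existing = driver.get_endpoint_by_ip(tunnel_ip)  # hypothetical helper
        if existing and existing.host != host:
            # Instead of raising InvalidInput, warn and drop the stale
            # endpoint that still references the old hostname.
            LOG.warning("Tunnel IP %s was registered for host %s, "
                        "re-registering for host %s",
                        tunnel_ip, existing.host, host)
            driver.delete_endpoint(tunnel_ip)            # hypothetical helper
        return driver.add_endpoint(tunnel_ip, host)      # hypothetical helper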

This was found in HA deployments with pacemaker, where the hostname is
roamed to the active node, or is set dynamically on the nodes based on the
clone ID provided by pacemaker. That is used to allow architectures like
A/A/A/P/P for neutron, where one of the active nodes can die and a passive
node takes over the resources of the old active node by roaming its
hostname (which is the logical ID that neutron agent resources are tied to).


neutron-server log:

2015-06-10 05:44:48.151 24546 ERROR oslo_messaging._drivers.common 
[req-751f3392-9915-49b9-bb0b-2dec63a6649a ] Returning exception Invalid input 
for operation: (u'Tunnel IP %(ip)s in use with host %(host)s', {'ip': 
u'192.168.16.105', 'host': u'neutron-n-2'}). to caller
2015-06-10 05:44:48.152 24546 ERROR oslo_messaging._drivers.common 
[req-751f3392-9915-49b9-bb0b-2dec63a6649a ] ['Traceback (most recent call 
last):\n', '  File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 142, 
in _dispatch_and_reply\nexecutor_callback))\n', '  File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 186, 
in _dispatch\nexecutor_callback)\n', '  File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 130, 
in _do_dispatch\nresult = func(ctxt, **new_args)\n', '  File 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/type_tunnel.py, 
line 248, in tunnel_sync\nraise exc.InvalidInput(error_message=msg)\n', 
InvalidInput: Invalid input for operation: (u'Tunnel IP %(ip)s in use with 
host %(host)s', {'ip': u'192.168.16.105', 'host': u'neutron-n-2'}).\n]
2015-06-10 05:44:52.152 24546 ERROR oslo_messaging.rpc.dispatcher 
[req-751f3392-9915-49b9-bb0b-2dec63a6649a ] Exception during message handling: 
Invalid input for operation: (u'Tunnel IP %(ip)s in use with host %(host)s', 
{'ip': u'192.168.16.105', 'host': u'neutron-n-2'}).
2015-06-10 05:44:52.152 24546 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-06-10 05:44:52.152 24546 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 142, 
in _dispatch_and_reply
2015-06-10 05:44:52.152 24546 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2015-06-10 05:44:52.152 24546 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 186, 
in _dispatch
2015-06-10 05:44:52.152 24546 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2015-06-10 05:44:52.152 24546 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 130, 
in _do_dispatch
2015-06-10 05:44:52.152 24546 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-06-10 05:44:52.152 24546 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/type_tunnel.py, 
line 248, in tunnel_sync
2015-06-10 05:44:52.152 24546 TRACE oslo_messaging.rpc.dispatcher raise 
exc.InvalidInput(error_message=msg)
2015-06-10 05:44:52.152 24546 TRACE oslo_messaging.rpc.dispatcher InvalidInput: 
Invalid input for operation: (u'Tunnel IP %(ip)s in use with host %(host)s', 
{'ip': u'192.168.16.105', 'host': u'neutron-n-2'}).
2015-06-10 05:44:52.152 24546 TRACE oslo_messaging.rpc.dispatcher 
2015-06-10 05:44:52.152 24546 ERROR oslo_messaging._drivers.common 
[req-751f3392-9915-49b9-bb0b-2dec63a6649a ] Returning exception Invalid input 
for operation: (u'Tunnel IP %(ip)s in use with host %(host)s', {'ip': 
u'192.168.16.105', 'host': u'neutron-n-2'}). to caller
2015-06-10 05:44:52.153 24546 ERROR oslo_messaging._drivers.common 
[req-751f3392-9915-49b9-bb0b-2dec63a6649a ] ['Traceback (most recent call 
last):\n', '  File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 142, 
in _dispatch_and_reply\nexecutor_callback))\n', '  File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 186, 
in _dispatch\nexecutor_callback)\n', '  File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 130, 
in _do_dispatch\nresult = func(ctxt, **new_args)\n', '  File 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/type_tunnel.py, 
line 248, in tunnel_sync\nraise exc.InvalidInput(error_message=msg)\n', 
InvalidInput: Invalid input for operation: (u'Tunnel IP %(ip)s in use with 
host %(host)s', {'ip': u'192.168.16.105', 'host': u'neutron-n-2'}).\n]

How to reproduce:

1) Install a single node AIO with tunnelling for tenant networks. 
2) openstack-config --set /etc/neutron/neutron.conf DEFAULT 

[Yahoo-eng-team] [Bug 1464194] [NEW] RFE - Fujitsu Neutron ML2 Mechanism Driver for C-Fabric

2015-06-11 Thread Yushiro FURUKAWA
Public bug reported:

We'd like to implement an ML2 mechanism driver for the
FUJITSU Converged Fabric Switch (C-Fabric) [1].


Fujitsu C-Fabric makes it possible to create port-profiles and to
associate a port-profile with a MAC address.

  * a port-profile holds a VLAN ID
  * a port-profile association expresses the relation between a
    port-profile and the MAC address of a VM instance

Problem description
===
C-Fabric requires information about OpenStack Neutron ports in order to
create/delete port-profiles and port-profile associations. Currently there
is no way to pass such Neutron information to C-Fabric. Therefore, a new
ML2 mechanism driver is necessary to post the _postcommit data to Fujitsu
C-Fabric.

  * port-create : create port-profile and port-profile association
  * port-delete : delete port-profile association and port-profile

[1]
http://jp.fujitsu.com/platform/server/primergy/peripheral/expand/cf.html
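
For illustration, a rough skeleton of what such a mechanism driver could
look like; create_port_postcommit/delete_port_postcommit are the standard
ML2 postcommit hooks, while the CFabricClient class and its methods are
made-up placeholders for the real C-Fabric API calls:

    from neutron.plugins.ml2 import driver_api as api


    class CFabricClient(object):
        """Hypothetical stand-in for a client talking to the C-Fabric switch."""

        def create_port_profile(self, vlan_id):
            pass  # would create a port-profile carrying this VLAN ID

        def associate_mac(self, vlan_id, mac):
            pass  # would associate the port-profile with the VM's MAC

        def disassociate_mac(self, vlan_id, mac):
            pass

        def delete_port_profile(self, vlan_id):
            pass


    class CFabricMechanismDriver(api.MechanismDriver):
        """Illustrative skeleton only, not the proposed implementation."""

        def initialize(self):
            self.client = CFabricClient()

        def create_port_postcommit(self, context):
            port = context.current
            vlan_id = context.network.network_segments[0].get('segmentation_id')
            self.client.create_port_profile(vlan_id)
            self.client.associate_mac(vlan_id, port['mac_address'])

        def delete_port_postcommit(self, context):
            port = context.current
            vlan_id = context.network.network_segments[0].get('segmentation_id')
            self.client.disassociate_mac(vlan_id, port['mac_address'])
            self.client.delete_port_profile(vlan_id)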

** Affects: neutron
 Importance: Undecided
 Assignee: Yushiro FURUKAWA (y-furukawa-2)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) => Yushiro FURUKAWA (y-furukawa-2)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464194

Title:
  RFE - Fujitsu Neutron ML2 Mechanism Driver for C-Fabric

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We'd like to implement an ML2 mechanism driver for the
  FUJITSU Converged Fabric Switch (C-Fabric) [1].

  
  Fujitsu C-Fabric makes it possible to create port-profiles and to
  associate a port-profile with a MAC address.

    * a port-profile holds a VLAN ID
    * a port-profile association expresses the relation between a
      port-profile and the MAC address of a VM instance

  Problem description
  ===
  C-Fabric requires information about OpenStack Neutron ports in order to
  create/delete port-profiles and port-profile associations. Currently
  there is no way to pass such Neutron information to C-Fabric. Therefore,
  a new ML2 mechanism driver is necessary to post the _postcommit data to
  Fujitsu C-Fabric.

* port-create : create port-profile and port-profile association
* port-delete : delete port-profile association and port-profile

  [1]
  http://jp.fujitsu.com/platform/server/primergy/peripheral/expand/cf.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464194/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464181] [NEW] load balancing method column is missing in LB pools table

2015-06-11 Thread Masco Kaliyamoorthy
Public bug reported:

The load balancing method column is missing in the load balancer pools table.
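
A minimal sketch of what the missing column could look like in a Horizon
DataTable definition; the table class name and the 'lb_method' attribute
name are assumptions for illustration, not the actual Horizon code:

    from django.utils.translation import ugettext_lazy as _

    from horizon import tables


    class PoolsTable(tables.DataTable):
        # Existing columns elided; the addition is a column exposing the
        # pool's load balancing method.
        name = tables.Column("name", verbose_name=_("Name"))
        lb_method = tables.Column("lb_method",
                                  verbose_name=_("Load Balancing Method"))

        class Meta(object):
            name = "poolstable"
            verbose_name = _("Pools")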

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1464181

Title:
  load balancing method column is missing in LB pools table

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The load balancing method column is missing in the load balancer pools table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1464181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464222] [NEW] nova show -VM status incorrect

2015-06-11 Thread Alex Syafeyev
Public bug reported:

I have a multinode setup (AIO (machine 7) + compute (machine 9)).

I rebooted the non-controller host [9].

nova list --fields OS-EXT-SRV-ATTR:host,Networks,status

+--------------------------------------+-----------------------+---------------------+--------+
| ID                                   | OS-EXT-SRV-ATTR: Host | Networks            | Status |
+--------------------------------------+-----------------------+---------------------+--------+
| 768ee44d-45d4-4e09-840b-b6492b7c0526 | 7                     | int_net=192.168.1.8 | ACTIVE |
| c0297bcd-c480-4400-bbb8-66b463eb9d76 | 7                     | int_net=192.168.1.9 | ACTIVE |
| 285fd146-e90f-4f83-be82-d56701dfe17b | 9                     | int_net=192.168.1.6 | ACTIVE |
| afc3ce06-1fda-4354-8524-880e853cecf4 | 9                     | int_net=192.168.1.7 | ACTIVE |
+--------------------------------------+-----------------------+---------------------+--------+

Only when the host comes back up, and VMs 192.168.1.6-7 come up with it,
do we see those VMs in Shutdown state.

The controller should know immediately when a host with VMs goes down.
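
For reference, nova only marks a compute service down after it has missed
heartbeats for longer than service_down_time, and even then the instances'
status is not updated automatically. A small sketch of polling the service
state via the python client (credentials are placeholders):

    from novaclient import client

    # Placeholder credentials; adjust for the environment.
    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://controller:5000/v2.0/')

    for svc in nova.services.list(binary='nova-compute'):
        # 'state' flips to 'down' once heartbeats stop arriving;
        # the VMs hosted there still report their last known status.
        print(svc.host, svc.state, svc.updated_at)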


Reproduction:
1. Configure a multinode setup with VMs on both hosts.
2. Reboot the non-controller node.
3. Execute:
   nova list --fields OS-EXT-SRV-ATTR:host,Networks,status


more files attached in comments

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: nova-manage

** Attachment added: nova-manage.log
   
https://bugs.launchpad.net/bugs/1464222/+attachment/4413183/+files/nova-manage.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1464222

Title:
  nova show -VM status incorrect

Status in OpenStack Compute (Nova):
  New

Bug description:
  I have a multinode setup (AIO (machine 7) + compute (machine 9)).

  I rebooted the non-controller host [9].

  nova list --fields OS-EXT-SRV-ATTR:host,Networks,status

  
  +--------------------------------------+-----------------------+---------------------+--------+
  | ID                                   | OS-EXT-SRV-ATTR: Host | Networks            | Status |
  +--------------------------------------+-----------------------+---------------------+--------+
  | 768ee44d-45d4-4e09-840b-b6492b7c0526 | 7                     | int_net=192.168.1.8 | ACTIVE |
  | c0297bcd-c480-4400-bbb8-66b463eb9d76 | 7                     | int_net=192.168.1.9 | ACTIVE |
  | 285fd146-e90f-4f83-be82-d56701dfe17b | 9                     | int_net=192.168.1.6 | ACTIVE |
  | afc3ce06-1fda-4354-8524-880e853cecf4 | 9                     | int_net=192.168.1.7 | ACTIVE |
  +--------------------------------------+-----------------------+---------------------+--------+

  Only when the host comes back up, and VMs 192.168.1.6-7 come up with it,
  do we see those VMs in Shutdown state.

  The controller should know immediately when a host with VMs goes down.

  
  Reproduction:
  1. Configure a multinode setup with VMs on both hosts.
  2. Reboot the non-controller node.
  3. Execute:
     nova list --fields OS-EXT-SRV-ATTR:host,Networks,status

  
  more files attached in comments

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1464222/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463696] Re: Agent object save method doesn't raise ValueError

2015-06-11 Thread Rajesh Tailor
** Project changed: cinder => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463696

Title:
  Agent object save method doesn't raise ValueError

Status in OpenStack Compute (Nova):
  New

Bug description:
  ValueError exception is unnecessarily caught while saving agent object in 
  nova/api/openstack/compute/contrib/agent.py module.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1463696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464190] [NEW] RFE - Fujitsu Neutron ML2 Mechanism Driver for ISM

2015-06-11 Thread Yushiro FURUKAWA
Public bug reported:

We'd like to implement an ML2 mechanism driver for
FUJITSU Software ServerView Infrastructure Manager (ISM) [1].

It calls a REST API (formatted for ISM) from the Neutron ML2 plugin to ISM.

Fujitsu ISM enables integrated management of the following:
  * VLAN configuration for managed switches
  * Arrangement of racks on the floor
  * Status of devices
  * Performance information
  * Maximum power consumption

Problem description
===
Fujitsu ISM requires information about OpenStack Neutron ports and
networks in order to set up VLANs on the switch ports of managed switches.
Currently there is no way to pass such Neutron information to ISM.
Therefore, a new ML2 mechanism driver is necessary to post the _postcommit
data to Fujitsu ISM.

[1] http://software.fujitsu.com/jp/serverviewism/

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

** Summary changed:

- Fujitsu Neutron ML2 Mechanism Driver for ISM
+ RFE - Fujitsu Neutron ML2 Mechanism Driver for ISM

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464190

Title:
  RFE - Fujitsu Neutron ML2 Mechanism Driver for ISM

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We'd like to implement an ML2 mechanism driver for
  FUJITSU Software ServerView Infrastructure Manager (ISM) [1].

  It calls a REST API (formatted for ISM) from the Neutron ML2 plugin to ISM.

  Fujitsu ISM enables integrated management of the following:
    * VLAN configuration for managed switches
    * Arrangement of racks on the floor
    * Status of devices
    * Performance information
    * Maximum power consumption

  Problem description
  ===
  Fujitsu ISM requires information about OpenStack Neutron ports and
  networks in order to set up VLANs on the switch ports of managed
  switches. Currently there is no way to pass such Neutron information to
  ISM. Therefore, a new ML2 mechanism driver is necessary to post the
  _postcommit data to Fujitsu ISM.

  [1] http://software.fujitsu.com/jp/serverviewism/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464190/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463696] [NEW] Agent object save method doesn't raise ValueError

2015-06-11 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

ValueError exception is unnecessarily caught while saving agent object in 
nova/api/openstack/compute/contrib/agent.py module.

** Affects: nova
 Importance: Undecided
 Assignee: Rajesh Tailor (rajesh-tailor)
 Status: New

-- 
Agent object save method doesn't raise ValueError
https://bugs.launchpad.net/bugs/1463696
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464259] [NEW] EC2VolumesTest fails with shared storage backend

2015-06-11 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/02/173802/5/check/check-tempest-dsvm-full-
ceph/a72aac1/logs/screen-n-api.txt.gz?level=TRACE#_2015-06-11_09_04_19_511

2015-06-11 09:04:19.511 ERROR nova.api.ec2 
[req-0ac81d78-2717-4dd2-80e2-d94363b55ac8 EC2VolumesTest-442487008 
EC2VolumesTest-1066393631] Unexpected InvalidInput raised: Invalid input 
received: Invalid volume: Volume still has 1 dependent snapshots. (HTTP 400) 
(Request-ID: req-4586b5d2-7212-4ddd-af79-43ad8ba7ea58)
2015-06-11 09:04:19.511 ERROR nova.api.ec2 
[req-0ac81d78-2717-4dd2-80e2-d94363b55ac8 EC2VolumesTest-442487008 
EC2VolumesTest-1066393631] Environment: {HTTP_AUTHORIZATION: 
AWS4-HMAC-SHA256 
Credential=a5e9253350ce4a249ddce8b7c1c798c2/20150611/0/127/aws4_request,SignedHeaders=host;x-amz-date,Signature=304830ed947f7fba3143887b08d1e47faa18d4b59782c0992727cb7593f586b4,
 SCRIPT_NAME: , REQUEST_METHOD: POST, HTTP_X_AMZ_DATE: 
20150611T090418Z, PATH_INFO: /, SERVER_PROTOCOL: HTTP/1.0, 
CONTENT_LENGTH: 60, HTTP_USER_AGENT: Boto/2.38.0 Python/2.7.6 
Linux/3.13.0-53-generic, RAW_PATH_INFO: /, REMOTE_ADDR: 127.0.0.1, 
wsgi.url_scheme: http, SERVER_PORT: 8773, CONTENT_TYPE: 
application/x-www-form-urlencoded; charset=UTF-8, HTTP_HOST: 
127.0.0.1:8773, SERVER_NAME: 127.0.0.1, GATEWAY_INTERFACE: CGI/1.1, 
REMOTE_PORT: 45819, HTTP_ACCEPT_ENCODING: identity}

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRUMyVm9sdW1lc1Rlc3RcIiBBTkQgbWVzc2FnZTpcIlVuZXhwZWN0ZWQgSW52YWxpZElucHV0IHJhaXNlZDogSW52YWxpZCBpbnB1dCByZWNlaXZlZDogSW52YWxpZCB2b2x1bWU6IFZvbHVtZSBzdGlsbCBoYXMgMSBkZXBlbmRlbnQgc25hcHNob3RzXCIgQU5EIHRhZ3M6XCJzY3JlZW4tbi1hcGkudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzQwMzAyMTUwODd9

10 hits in 7 days, check and gate, hitting on the ceph and glusterfs
jobs.

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: Confirmed


** Tags: ceph ec2 glusterfs

** Summary changed:

- tempest.thirdparty.boto.test_ec2_volumes.EC2VolumesTest fails with shared 
storage backend
+ EC2VolumesTest fails with shared storage backend

** Changed in: nova
   Status: New => Confirmed

** Tags added: ceph ec2 glusterfs

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1464259

Title:
  EC2VolumesTest fails with shared storage backend

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  http://logs.openstack.org/02/173802/5/check/check-tempest-dsvm-full-
  ceph/a72aac1/logs/screen-n-api.txt.gz?level=TRACE#_2015-06-11_09_04_19_511

  2015-06-11 09:04:19.511 ERROR nova.api.ec2 
[req-0ac81d78-2717-4dd2-80e2-d94363b55ac8 EC2VolumesTest-442487008 
EC2VolumesTest-1066393631] Unexpected InvalidInput raised: Invalid input 
received: Invalid volume: Volume still has 1 dependent snapshots. (HTTP 400) 
(Request-ID: req-4586b5d2-7212-4ddd-af79-43ad8ba7ea58)
  2015-06-11 09:04:19.511 ERROR nova.api.ec2 
[req-0ac81d78-2717-4dd2-80e2-d94363b55ac8 EC2VolumesTest-442487008 
EC2VolumesTest-1066393631] Environment: {HTTP_AUTHORIZATION: 
AWS4-HMAC-SHA256 
Credential=a5e9253350ce4a249ddce8b7c1c798c2/20150611/0/127/aws4_request,SignedHeaders=host;x-amz-date,Signature=304830ed947f7fba3143887b08d1e47faa18d4b59782c0992727cb7593f586b4,
 SCRIPT_NAME: , REQUEST_METHOD: POST, HTTP_X_AMZ_DATE: 
20150611T090418Z, PATH_INFO: /, SERVER_PROTOCOL: HTTP/1.0, 
CONTENT_LENGTH: 60, HTTP_USER_AGENT: Boto/2.38.0 Python/2.7.6 
Linux/3.13.0-53-generic, RAW_PATH_INFO: /, REMOTE_ADDR: 127.0.0.1, 
wsgi.url_scheme: http, SERVER_PORT: 8773, CONTENT_TYPE: 
application/x-www-form-urlencoded; charset=UTF-8, HTTP_HOST: 
127.0.0.1:8773, SERVER_NAME: 127.0.0.1, GATEWAY_INTERFACE: CGI/1.1, 
REMOTE_PORT: 45819, HTTP_ACCEPT_ENCODING: identity}

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRUMyVm9sdW1lc1Rlc3RcIiBBTkQgbWVzc2FnZTpcIlVuZXhwZWN0ZWQgSW52YWxpZElucHV0IHJhaXNlZDogSW52YWxpZCBpbnB1dCByZWNlaXZlZDogSW52YWxpZCB2b2x1bWU6IFZvbHVtZSBzdGlsbCBoYXMgMSBkZXBlbmRlbnQgc25hcHNob3RzXCIgQU5EIHRhZ3M6XCJzY3JlZW4tbi1hcGkudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzQwMzAyMTUwODd9

  10 hits in 7 days, check and gate, hitting on the ceph and glusterfs
  jobs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1464259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464259] Re: EC2VolumesTest fails with shared storage backend

2015-06-11 Thread Matt Riedemann
The glusterfs failure is actually not the issue here, that was a check
queue change where everything failed, so this is just the ceph issue.

** No longer affects: nova

** Summary changed:

- EC2VolumesTest fails with shared storage backend
+ EC2VolumesTest fails with rbd backend

** Changed in: cinder
   Status: New => Triaged

** Changed in: cinder
   Importance: Undecided => Low

** Tags removed: ec2 glusterfs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1464259

Title:
  EC2VolumesTest fails with rbd backend

Status in Cinder:
  Triaged

Bug description:
  http://logs.openstack.org/02/173802/5/check/check-tempest-dsvm-full-
  ceph/a72aac1/logs/screen-n-api.txt.gz?level=TRACE#_2015-06-11_09_04_19_511

  2015-06-11 09:04:19.511 ERROR nova.api.ec2 
[req-0ac81d78-2717-4dd2-80e2-d94363b55ac8 EC2VolumesTest-442487008 
EC2VolumesTest-1066393631] Unexpected InvalidInput raised: Invalid input 
received: Invalid volume: Volume still has 1 dependent snapshots. (HTTP 400) 
(Request-ID: req-4586b5d2-7212-4ddd-af79-43ad8ba7ea58)
  2015-06-11 09:04:19.511 ERROR nova.api.ec2 
[req-0ac81d78-2717-4dd2-80e2-d94363b55ac8 EC2VolumesTest-442487008 
EC2VolumesTest-1066393631] Environment: {HTTP_AUTHORIZATION: 
AWS4-HMAC-SHA256 
Credential=a5e9253350ce4a249ddce8b7c1c798c2/20150611/0/127/aws4_request,SignedHeaders=host;x-amz-date,Signature=304830ed947f7fba3143887b08d1e47faa18d4b59782c0992727cb7593f586b4,
 SCRIPT_NAME: , REQUEST_METHOD: POST, HTTP_X_AMZ_DATE: 
20150611T090418Z, PATH_INFO: /, SERVER_PROTOCOL: HTTP/1.0, 
CONTENT_LENGTH: 60, HTTP_USER_AGENT: Boto/2.38.0 Python/2.7.6 
Linux/3.13.0-53-generic, RAW_PATH_INFO: /, REMOTE_ADDR: 127.0.0.1, 
wsgi.url_scheme: http, SERVER_PORT: 8773, CONTENT_TYPE: 
application/x-www-form-urlencoded; charset=UTF-8, HTTP_HOST: 
127.0.0.1:8773, SERVER_NAME: 127.0.0.1, GATEWAY_INTERFACE: CGI/1.1, 
REMOTE_PORT: 45819, HTTP_ACCEPT_ENCODING: identity}

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRUMyVm9sdW1lc1Rlc3RcIiBBTkQgbWVzc2FnZTpcIlVuZXhwZWN0ZWQgSW52YWxpZElucHV0IHJhaXNlZDogSW52YWxpZCBpbnB1dCByZWNlaXZlZDogSW52YWxpZCB2b2x1bWU6IFZvbHVtZSBzdGlsbCBoYXMgMSBkZXBlbmRlbnQgc25hcHNob3RzXCIgQU5EIHRhZ3M6XCJzY3JlZW4tbi1hcGkudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzQwMzAyMTUwODd9

  10 hits in 7 days, check and gate, hitting on the ceph and glusterfs
  jobs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1464259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464286] [NEW] NumaTopologyFilter not behaving as expected (returns 0 hosts)

2015-06-11 Thread Dave Johnston
Public bug reported:

I have a system with 32 cores (2 sockets, 8 cores each, hyperthreading enabled).
The NUMA topology is as follows:

numactl --hardware

available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
node 0 size: 65501 MB
node 0 free: 38562 MB
node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
node 1 size: 65535 MB
node 1 free: 63846 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10 

I have defined a flavor in OpenStack with 12 vCPUs as follows:
nova flavor-show c4.3xlarge
+----------------------------+------------------------------------------------------+
| Property                   | Value                                                |
+----------------------------+------------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                    |
| disk                       | 40                                                   |
| extra_specs                | {"hw:cpu_policy": "dedicated", "hw:numa_nodes": "1"} |
| id                         | 1d76a225-90c1-4f6f-a59b-000795c33e63                 |
| name                       | c4.3xlarge                                           |
| os-flavor-access:is_public | True                                                 |
| ram                        | 24576                                                |
| rxtx_factor                | 1.0                                                  |
| swap                       | 8192                                                 |
| vcpus                      | 12                                                   |
+----------------------------+------------------------------------------------------+

I expect to be able to launch two instances of this flavor on the 32
core host, one contained within each NUMA node.

When I launch two instances, the first succeeds, but the second fails.
The instance xml is attached, along with the system capabilities.

If I change hw:numa_nodes = 2, then I can launch two copies of the
instance.

N.B. For the purposes of testing I have disabled all vcpu_pin and isolcpu
settings.
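
For reference, a sketch of how the flavor's extra specs can be set
programmatically with the python client (credentials are placeholders; the
sizes mirror the flavor shown above):

    from novaclient import client

    # Placeholder credentials; adjust for the environment.
    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://controller:5000/v2.0/')

    flavor = nova.flavors.create(name='c4.3xlarge', ram=24576, vcpus=12,
                                 disk=40, swap=8192)
    # Pin vCPUs to dedicated pCPUs and confine the guest to one NUMA node;
    # with 16 pCPUs per node, one 12-vCPU guest should fit on each node.
    flavor.set_keys({'hw:cpu_policy': 'dedicated', 'hw:numa_nodes': '1'})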

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: Virsh capabilities and instance xml
   
https://bugs.launchpad.net/bugs/1464286/+attachment/4413249/+files/instance_and_virsh_data.xml

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1464286

Title:
  NumaTopologyFilter not behaving as expected (returns 0 hosts)

Status in OpenStack Compute (Nova):
  New

Bug description:
  I have a system with 32 cores (2 sockets, 8 cores each, hyperthreading enabled).
  The NUMA topology is as follows:

  numactl --hardware

  available: 2 nodes (0-1)
  node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
  node 0 size: 65501 MB
  node 0 free: 38562 MB
  node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
  node 1 size: 65535 MB
  node 1 free: 63846 MB
  node distances:
  node   0   1 
0:  10  20 
1:  20  10 

  I have defined a flavor in OpenStack with 12 vCPUs as follows:
  nova flavor-show c4.3xlarge
  
  +----------------------------+------------------------------------------------------+
  | Property                   | Value                                                |
  +----------------------------+------------------------------------------------------+
  | OS-FLV-DISABLED:disabled   | False                                                |
  | OS-FLV-EXT-DATA:ephemeral  | 0                                                    |
  | disk                       | 40                                                   |
  | extra_specs                | {"hw:cpu_policy": "dedicated", "hw:numa_nodes": "1"} |
  | id                         | 1d76a225-90c1-4f6f-a59b-000795c33e63                 |
  | name                       | c4.3xlarge                                           |
  | os-flavor-access:is_public | True                                                 |
  | ram                        | 24576                                                |
  | rxtx_factor                | 1.0                                                  |
  | swap                       | 8192                                                 |
  | vcpus                      | 12                                                   |
  +----------------------------+------------------------------------------------------+

  I expect to be able to launch two instances of this flavor on the 32
  core host, one contained within each NUMA node.

  When I launch two instances, the first succeeds, but the second fails.
  The instance xml is attached, along with the system capabilities.

  If I change hw:numa_nodes = 2, then I can launch two copies of the
  instance.

  N.B for the 

[Yahoo-eng-team] [Bug 1461055] Re: Can't delete instance stuck in deleting task

2015-06-11 Thread Sylvain Bauza
Sorry, hit tab too fast.

So, I was saying that in general we recommend running 'nova reset-state
<vm>' to set the state to the desired value and then deleting it again.

Could you please try it?
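
The same workaround via the python client, as a minimal sketch with
placeholder credentials (the instance ID is the one from this report):

    from novaclient import client

    # Placeholder credentials; adjust for the environment.
    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://controller:5000/v2.0/')

    server = nova.servers.get('7b6c8ad5-7633-4d53-9f84-93b12a701cd3')
    # Equivalent of 'nova reset-state': force the vm_state, then delete again.
    server.reset_state('error')
    server.delete()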

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461055

Title:
  Can't delete instance stuck in deleting task

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Description of problem:  On a Juno HA deployment, nova over shared nfs
  storage, when I deleted an instance it was deleted:

  2015-06-02 11:57:36.273 3505 INFO nova.virt.libvirt.driver [req-
  4cc54412-a449-4c7a-bbe1-b21d202bcfe7 None] [instance:
  7b6c8ad5-7633-4d53-9f84-93b12a701cd3] Deletion of
  /var/lib/nova/instances/7b6c8ad5-7633-4d53-9f84-93b12a701cd3_del
  complete

  Also, the instance wasn't found with virsh list --all.
  Yet nova list and Horizon both still show this instance as stuck in the
  deleting task; two-plus hours have passed since I deleted it.

  Version-Release number of selected component (if applicable):
  rhel 7.1
  python-nova-2014.2.2-19.el7ost.noarch
  openstack-nova-compute-2014.2.2-19.el7ost.noarch
  python-novaclient-2.20.0-1.el7ost.noarch
  openstack-nova-common-2014.2.2-19.el7ost.noarch

  How reproducible:
  Unsure, it doesn't happen with every instance deletion, but happened more 
than this one time. 

  Steps to Reproduce:
  1. Boot an instance
  2. Delete instance 
  3. Instance is stuck in the deleting task in nova/Horizon.

  Actual results:
  Stuck with a phantom deleting instance, which is basically already dead 
from Virsh's point of view. 

  Expected results:
  Instance should get deleted including from nova list/Horizon. 

  Additional info:

  
  Workaround: doing openstack-service restart for nova on the compute node
  fixed my problem. The instance is totally gone from Nova/Horizon.

  instance virsh id instance-0d4d.log
  instanceID  7b6c8ad5-7633-4d53-9f84-93b12a701cd3

  | OS-EXT-STS:power_state   | 1                       |
  | OS-EXT-STS:task_state    | deleting                |
  | OS-EXT-STS:vm_state      | deleted                 |
  | OS-SRV-USG:launched_at   | 2015-05-28T11:06:33.00  |
  | OS-SRV-USG:terminated_at | 2015-06-02T08:57:37.00  |
  | ..                       | ..                      |
  | status                   | DELETED                 |

  Attached nova log from compute and controller.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1318104] Re: dhcp isolation via iptables does not work

2015-06-11 Thread Alan Pevec
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Status: New => In Progress

** Changed in: nova/icehouse
   Importance: Undecided => Medium

** Changed in: nova/icehouse
 Assignee: (unassigned) => Brent Eagles (beagles)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1318104

Title:
  dhcp isolation via iptables does not work

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  In Progress

Bug description:
  Attempting to block DHCP traffic across the bridge via iptables rules is
  not working. The iptables rules are never hit. Blocking DHCP traffic
  from exiting the node will need to use ebtables instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1318104/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1028684] Re: Missing body parameters in volume create function

2015-06-11 Thread Tom Fifield
** Changed in: python-cinderclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1028684

Title:
  Missing body parameters in volume create function

Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Cinder:
  Fix Released

Bug description:
  When trying to run tempest against cinder, we discovered that the
  provided metadata was not being used in volume creation. Along with
  this, some other parameters should be there as well, including:
  user_id, project_id, availability_zone and type_id.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1028684/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1287542] Re: Error importing module nova.openstack.common.sslutils: duplicate option: ca_file

2015-06-11 Thread Tom Fifield
** Changed in: blazar
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1287542

Title:
  Error importing module nova.openstack.common.sslutils: duplicate
  option: ca_file

Status in Blazar:
  Fix Released
Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  Error importing module nova.openstack.common.sslutils: duplicate
  option: ca_file

  This is seen in the nova gate - for unrelated patches - it might be a
  bad slave I guess, or it might be happening to all  subsequent
  patches, or it might be a WTF.

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcIkVycm9yIGltcG9ydGluZyBtb2R1bGUgbm92YS5vcGVuc3RhY2suY29tbW9uLnNzbHV0aWxzOiBkdXBsaWNhdGUgb3B0aW9uOiBjYV9maWxlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjkwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTM5MTUyNTE4ODl9
  suggests it has only happened once so far.

  commit 5188052937219badaa692f67d9f98623c15d1de2
  Merge: af626d0 88b7380
  Author: Jenkins jenk...@review.openstack.org
  Date:   Tue Mar 4 02:47:02 2014 +

  Merge Sync latest config file generator from oslo-incubator

  Was the latest merge prior to this, but it may be coincidental.

To manage notifications about this bug go to:
https://bugs.launchpad.net/blazar/+bug/1287542/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464348] [NEW] hostname comparison for forced_host should be case-insensitive

2015-06-11 Thread rick jones
Public bug reported:

It would appear that the scheduler matches the hostname provided in the
--availability-zone option against the hostnames of the compute nodes in
a case-sensitive manner.  However, going back to the misty Dawn of
Internet Time, hostnames have been case-insensitive items.  As such, the
scheduler should be making the comparisons on a case-insensitive basis.
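
A trivial sketch of the kind of comparison being requested (illustrative
only, not the scheduler's actual code):

    def hosts_match(forced_host, compute_host):
        # Hostnames are case-insensitive, so normalize both sides
        # before comparing.
        return forced_host.lower() == compute_host.lower()

    assert hosts_match('Compute-01.Example.COM', 'compute-01.example.com')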

$ /opt/stack/venvs/nova/bin/nova-scheduler --version
2014.2.3

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1464348

Title:
  hostname comparison for forced_host should be case-insensitive

Status in OpenStack Compute (Nova):
  New

Bug description:
  It would appear that the scheduler matches the hostname provided in the
  --availability-zone option against the hostnames of the compute nodes
  in a case-sensitive manner.  However, going back to the misty Dawn of
  Internet Time, hostnames have been case-insensitive items.  As such,
  the scheduler should be making the comparisons on a case-insensitive
  basis.

  $ /opt/stack/venvs/nova/bin/nova-scheduler --version
  2014.2.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1464348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462424] Re: VMware: stable icehouse unable to spawn VM

2015-06-11 Thread Alan Pevec
 Unable to boot VM due to patch
https://github.com/openstack/nova/commit/539d632fdea1696dc74fd2fb05921466f804e19e

But this is a Havana commit?
How was this fixed in >= Juno?

** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Status: New => In Progress

** Changed in: nova/icehouse
   Importance: Undecided => High

** Changed in: nova
   Status: Confirmed => Invalid

** Changed in: nova/icehouse
 Assignee: (unassigned) => Gary Kotton (garyk)

** Changed in: nova/icehouse
 Milestone: None => 2014.1.5

** Tags removed: icehouse-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1462424

Title:
  VMware: stable icehouse unable to spawn VM

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) icehouse series:
  In Progress

Bug description:
  Unable to boot VM due to patch
  
https://github.com/openstack/nova/commit/539d632fdea1696dc74fd2fb05921466f804e19e

  This is with VC 6.

  The reason is:
  nova-scheduler.log.1:2015-06-02 16:01:49.280 1174 ERROR 
nova.scheduler.filter_scheduler [req-18c26579-09e7-4287-b401-27ac3505e7c3 
bf28f7d47bf348d6ab6bcf31f0f96c92 04ad461fb68d4b80b2911b3fe0f6b1f9] [instance: 
5b3cca48-a295-4aa0-9176-798c174aeb3f] Error from last host: icehouse (node 
domain-c9(compute)): [u'Traceback (most recent call last):\n', u'  File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1379, in 
_build_instance\nset_access_ip=set_access_ip)\n', u'  File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 410, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', u'  
File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1797, in 
_spawn\nLOG.exception(_(\'Instance failed to spawn\'), 
instance=instance)\n', u'  File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'  File 
/usr/lib/python2.7/dist-pa
 ckages/nova/compute/manager.py, line 1794, in _spawn\n
block_device_info)\n', u'  File 
/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py, line 629, in 
spawn\nadmin_password, network_info, block_device_info)\n', u'  File 
/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py, line 689, in 
spawn\n_power_on_vm()\n', u'  File 
/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py, line 685, in 
_power_on_vm\nself._session._wait_for_task(power_on_task)\n', u'  File 
/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py, line 966, in 
_wait_for_task\nret_val = done.wait()\n', u'  File 
/usr/lib/python2.7/dist-packages/eventlet/event.py, line 116, in wait\n
return hubs.get_hub().switch()\n', u'  File 
/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in switch\n  
  return self.greenlet.switch()\n', uAttributeError: TaskInfo instance has no 
attribute 'name'\n]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1462424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464354] [NEW] Angular table paging buttons should look clickable

2015-06-11 Thread Thai Tran
Public bug reported:

Angular table paging buttons should look like they are clickable. When
users hover over them, the cursor should be a pointer instead of a caret.

** Affects: horizon
 Importance: Low
 Status: New


** Tags: low-hanging-fruit

** Description changed:

- Angular table paging buttons should look like they are clickable.
- Currently they are not.
+ Angular table paging buttons should look like they are clickable. When
+ users hover over it, it should show a pointer instead of a caret.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1464354

Title:
  Angular table paging buttons should look clickable

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Angular table paging buttons should look like they are clickable. When
  users hover over them, the cursor should be a pointer instead of a caret.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1464354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419823] Re: Nullable image description crashes v2 client

2015-06-11 Thread Jorge Niedbalski
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Changed in: cloud-archive
   Status: New => Confirmed

** Changed in: cloud-archive
 Assignee: (unassigned) => Jorge Niedbalski (niedbalski)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1419823

Title:
  Nullable image description crashes v2 client

Status in Ubuntu Cloud Archive:
  Confirmed
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance kilo series:
  Fix Committed
Status in glance package in Ubuntu:
  Confirmed

Bug description:
  When you somehow set the image description to None the glanceclient v2
  image-list crashes (as well as image-show, image-update for this
  particular image). The only way to show all images now is to use
  client v1, because it's more stable in this case.

  Steps to reproduce:

  1. Open Horizon and go to the edit page of any image.
  2. Set description to anything eg. 123 and save.
  3. Open image edit page again, remove description and save it.
  4. List all images using glanceclient v2: glance --os-image-api-version 2 
image-list
  5. Be sad, because of raised exception:

  None is not of type u'string'

  Failed validating u'type' in schema[u'additionalProperties']:
  {u'type': u'string'}

  On instance[u'description']:
  None

  While investigating the issue I've found that the
  additionalProperties schema is set to accept only string values, so it
  should be expanded to allow null values as well.
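
  A small illustration of the suggested relaxation, using the jsonschema
  library directly (this is not the actual glance schema definition):

      import jsonschema

      # Current behaviour: additionalProperties only allows strings,
      # so a None description fails validation.
      strict = {'additionalProperties': {'type': 'string'}}

      # Suggested relaxation: allow null as well as string values.
      relaxed = {'additionalProperties': {'type': ['null', 'string']}}

      image = {'description': None}

      jsonschema.validate(image, relaxed)   # passes
      try:
          jsonschema.validate(image, strict)
      except jsonschema.ValidationError as e:
          print(e.message)                  # None is not of type u'string'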

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1419823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464361] [NEW] Support for multiple gateways in Neutron subnets in provider networks

2015-06-11 Thread Shraddha Pandhe
Public bug reported:

Currently, subnets in Neutron only support one gateway. For provider
networks in large data centers, quite often the architecture is such that
multiple gateways are configured for the subnets. These gateways are
typically spread across backplanes so that production traffic can be
load-balanced between backplanes.

This is just my use case for supporting multiple gateways, but other
folks might have more use cases as well.

I want to open up a discussion on this topic and figure out the best way
to handle this. Should this be done in the same way as dns-nameserver,
with a separate table with two columns: gateway_ip and subnet_id?
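
A rough sketch of what such a table could look like, modelled on the
dns-nameserver approach; the model and table names here are made up for
illustration:

    import sqlalchemy as sa
    from sqlalchemy.ext import declarative

    Base = declarative.declarative_base()


    class SubnetGateway(Base):
        """One row per (subnet, gateway) pair, analogous to dnsnameservers."""
        __tablename__ = 'subnetgateways'

        gateway_ip = sa.Column(sa.String(64), nullable=False,
                               primary_key=True)
        subnet_id = sa.Column(sa.String(36),
                              sa.ForeignKey('subnets.id', ondelete='CASCADE'),
                              nullable=False, primary_key=True)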

** Affects: neutron
 Importance: Undecided
 Assignee: Shraddha Pandhe (shraddha-pandhe)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Shraddha Pandhe (shraddha-pandhe)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464361

Title:
  Support for multiple gateways in Neutron subnets in provider networks

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently, subnets in Neutron only support one gateway. For provider
  networks in large data centers, quite often the architecture is such
  that multiple gateways are configured for the subnets. These gateways
  are typically spread across backplanes so that production traffic can
  be load-balanced between backplanes.

  This is just my use case for supporting multiple gateways, but other
  folks might have more use cases as well.

  I want to open up a discussion on this topic and figure out the best
  way to handle this. Should this be done in the same way as
  dns-nameserver, with a separate table with two columns: gateway_ip and
  subnet_id?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464366] [NEW] unit tests fail based on wall clock time

2015-06-11 Thread Brant Knudson
Public bug reported:


We've got a lot of tests that depend on how long the test takes to run. Tests 
can take a long time just because you have a slow or overloaded system, or 
maybe you're trying to step through it with the debugger.

The tests that fail generally don't care about the time and aren't
attempting to verify performance, but still require that the test run
quickly enough.

Tests shouldn't depend on the wall clock time, just like they shouldn't
depend on any external factors.

Here's an example of a failing test:

keystone.tests.unit.test_auth.AuthWithRemoteUser.test_scoped_remote_authn
-

Captured traceback:
~~~
Traceback (most recent call last):
  File keystone/tests/unit/test_auth.py, line 741, in 
test_scoped_remote_authn
enforce_audit_ids=False)
  File keystone/tests/unit/test_auth.py, line 104, in assertEqualTokens
timeutils.parse_isotime(b['access']['token']['expires']))
  File keystone/tests/unit/core.py, line 521, in 
assertCloseEnoughForGovernmentWork
self.assertTrue(abs(a - b).seconds = delta, msg)
  File 
/home/jenkins/workspace/keystone/.tox/py27/local/lib/python2.7/site-packages/unittest2/case.py,
 line 678, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true : 2015-06-11 13:34:46+00:00 != 2015-06-11 
13:34:50+00:00 within 3 delta

It took 4 seconds rather than 3.
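
One common way to take wall-clock time out of the equation is to freeze
the clock for the duration of the test; a minimal sketch using the
freezegun library (assuming it is available; this is not keystone's actual
fix):

    import datetime
    import unittest

    from freezegun import freeze_time


    class TokenExpiryTest(unittest.TestCase):

        @freeze_time('2015-06-11 13:34:46')
        def test_expiry_not_affected_by_slow_test(self):
            # However long the test body takes to run, "now" stays fixed,
            # so timestamps derived from it compare exactly.
            a = datetime.datetime.utcnow()
            b = datetime.datetime.utcnow()
            self.assertEqual(a, b)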

** Affects: keystone
 Importance: Undecided
 Assignee: Brant Knudson (blk-u)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1464366

Title:
  unit tests fail based on wall clock time

Status in OpenStack Identity (Keystone):
  New

Bug description:
  
  We've got a lot of tests that depend on how long the test takes to run. Tests 
can take a long time just because you have a slow or overloaded system, or 
maybe you're trying to step through it with the debugger.

  The tests that fail generally don't care about the time and aren't
  attempting to verify performance, but still require that the test run
  quickly enough.

  Tests shouldn't depend on the wall clock time, just like they
  shouldn't depend on any external factors.

  Here's an example of a failing test:

  keystone.tests.unit.test_auth.AuthWithRemoteUser.test_scoped_remote_authn
  -

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File keystone/tests/unit/test_auth.py, line 741, in 
test_scoped_remote_authn
  enforce_audit_ids=False)
File keystone/tests/unit/test_auth.py, line 104, in assertEqualTokens
  timeutils.parse_isotime(b['access']['token']['expires']))
File keystone/tests/unit/core.py, line 521, in 
assertCloseEnoughForGovernmentWork
  self.assertTrue(abs(a - b).seconds = delta, msg)
File 
/home/jenkins/workspace/keystone/.tox/py27/local/lib/python2.7/site-packages/unittest2/case.py,
 line 678, in assertTrue
  raise self.failureException(msg)
  AssertionError: False is not true : 2015-06-11 13:34:46+00:00 != 
2015-06-11 13:34:50+00:00 within 3 delta

  It took 4 seconds rather than 3.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1464366/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464381] [NEW] can't get instances from different tenants even if policy.json is set properly for that

2015-06-11 Thread Slawek Kaplonski
Public bug reported:

As was said in http://lists.openstack.org/pipermail/openstack-
operators/2015-June/007354.html, even if policy.json is set to allow a
user with a special role to see instances from a different tenant, these
settings are ignored and an admin context is required.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1464381

Title:
  can't get instances from different tenants even if policy.json is set
  properly for that

Status in OpenStack Compute (Nova):
  New

Bug description:
  As was said in http://lists.openstack.org/pipermail/openstack-
  operators/2015-June/007354.html, even if policy.json is set to allow a
  user with a special role to see instances from a different tenant,
  these settings are ignored and an admin context is required.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1464381/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462992] Re: Fix all Bad indentation pylint issues

2015-06-11 Thread Rajesh Tailor
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Rajesh Tailor (rajesh-tailor)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1462992

Title:
  Fix all Bad indentation pylint issues

Status in Cinder:
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Look for W0311 pylint issues.

  steps to list W0311 pylint issues:
  (1) Run below command:
  pylint --rcfile=.pylintrc -f parseable -i yes cinder/ | grep '\[W0311'

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1462992/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464238] [NEW] L3 agent: cleanup is not performed for unknown routers

2015-06-11 Thread Oleg Bondarev
Public bug reported:

On router deletion the l3 agent first looks for the router in its local
dynamic storage, and cleans up the router's namespace and metadata proxy
process only for known routers. In some cases that might be a problem.
One such case is when the l3 agent is restarted:
 - fullsync was initiated, agent adds update events for all routers returned by 
server to the processing queue
 - user deletes some router(s)
 - router_deleted event with higher priority is processed by the agent - 
nothing is cleaned up because router is yet unknown to the agent
 - no more events are processed for the router because router_deleted event had 
latest timestamp
 - namespace and metadata proxy process of deleted router(s) stay on the 
network node till next agent resync/restart

The proposal would be to perform cleanup for unknown routers as well.
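
A rough sketch of the proposed behaviour, with hypothetical helper names
for the namespace and metadata-proxy cleanup (not the actual l3-agent
code):

    def router_deleted_handler(agent, router_id):
        router_info = agent.router_info.pop(router_id, None)
        if router_info is None:
            # Router is unknown (e.g. deleted while a post-restart fullsync
            # is still queued): clean up anyway so the namespace and
            # metadata proxy do not linger until the next resync.
            agent.destroy_router_namespace(router_id)    # hypothetical helper
            agent.stop_metadata_proxy(router_id)         # hypothetical helper
            return
        # Normal path for routers the agent already knows about.
        agent.process_router_delete(router_info)         # hypothetical helper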

** Affects: neutron
 Importance: Undecided
 Assignee: Oleg Bondarev (obondarev)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464238

Title:
  L3 agent: cleanup is not performed for unknown routers

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  On router deletion the l3 agent first looks for the router in its local
  dynamic storage, and cleans up the router's namespace and metadata proxy
  process only for known routers. In some cases that might be a problem.
  One such case is when the l3 agent is restarted:
   - fullsync was initiated, agent adds update events for all routers returned 
by server to the processing queue
   - user deletes some router(s)
   - router_deleted event with higher priority is processed by the agent - 
nothing is cleaned up because router is yet unknown to the agent
   - no more events are processed for the router because router_deleted event 
had latest timestamp
   - namespace and metadata proxy process of deleted router(s) stay on the 
network node till next agent resync/restart

  The proposal would be to perform cleanup for unknown routers as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464239] [NEW] mount: special device /dev/sdb does not exist

2015-06-11 Thread Dan Prince
Public bug reported:

As of today it looks like all jobs fail due to a missing Ephemeral
partition:

mount: special device /dev/sdb does not exist




This Nova commit looks suspicious: 7f8128f87f5a2fa93c857295fb7e4163986eda25
Add the swap and ephemeral BDMs if needed

** Affects: nova
 Importance: Undecided
 Assignee: Dan Prince (dan-prince)
 Status: In Progress

** Affects: tripleo
 Importance: Critical
 Assignee: Dan Prince (dan-prince)
 Status: In Progress

** Changed in: tripleo
 Assignee: (unassigned) => Dan Prince (dan-prince)

** Changed in: tripleo
   Status: New => In Progress

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Dan Prince (dan-prince)

** Changed in: tripleo
   Importance: Undecided => Critical

** Changed in: nova
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1464239

Title:
  mount: special device /dev/sdb does not exist

Status in OpenStack Compute (Nova):
  In Progress
Status in tripleo - openstack on openstack:
  In Progress

Bug description:
  As of today it looks like all jobs fail due to a missing Ephemeral
  partition:

  mount: special device /dev/sdb does not exist

  
  

  This Nova commit looks suspicious: 7f8128f87f5a2fa93c857295fb7e4163986eda25
  Add the swap and ephemeral BDMs if needed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1464239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464229] [NEW] LbaasV2 Health monitor status

2015-06-11 Thread Alex Syafeyev
Public bug reported:

lbaasv2 health monitor:

We have no way to see if an LbaasV2 health monitor is successful or failed.
Additionally, we have no way to see if a VM in an lbaasv2 pool is up or down
(from an LbaasV2 point of view).

neutron lbaas-pool-show should show the HealthMonitor status for VMs.

kilo
rhel7.1
python-neutron-2015.1.0-1.el7ost.noarch
openstack-neutron-openvswitch-2015.1.0-1.el7ost.noarch
python-neutronclient-2.4.0-1.el7ost.noarch
openstack-neutron-2015.1.0-1.el7ost.noarch
openstack-neutron-ml2-2015.1.0-1.el7ost.noarch
openstack-neutron-lbaas-2015.1.0-3.el7ost.noarch
openstack-neutron-fwaas-2015.1.0-3.el7ost.noarch
openstack-neutron-common-2015.1.0-1.el7ost.noarch
python-neutron-lbaas-2015.1.0-3.el7ost.noarch
python-neutron-fwaas-2015.1.0-3.el7ost.noarch

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464229

Title:
  LbaasV2 Health monitor status

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  lbaasv2 health monitor:

  We have no way to see whether an LbaasV2 health monitor succeeded or failed.
  Additionally, we have no way to see whether a VM in an lbaasv2 pool is up or
  down (from an LbaasV2 point of view).

  neutron lbaas-pool-show should show the health monitor status for VMs.

  kilo
  rhel7.1
  python-neutron-2015.1.0-1.el7ost.noarch
  openstack-neutron-openvswitch-2015.1.0-1.el7ost.noarch
  python-neutronclient-2.4.0-1.el7ost.noarch
  openstack-neutron-2015.1.0-1.el7ost.noarch
  openstack-neutron-ml2-2015.1.0-1.el7ost.noarch
  openstack-neutron-lbaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-fwaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-common-2015.1.0-1.el7ost.noarch
  python-neutron-lbaas-2015.1.0-3.el7ost.noarch
  python-neutron-fwaas-2015.1.0-3.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464229/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464241] [NEW] Lbaasv2 command logs not seen

2015-06-11 Thread Alex Syafeyev
Public bug reported:

I am testing both correct and incorrect lbaasv2 deletion.
Even when a command fails, nothing about it shows up in
/var/log/neutron/lbaasv2-agent.log.

However, the lbaas (not lbaasv2) agent log is being updated with information
and contains errors.

2015-06-11 03:03:34.352 21274 WARNING neutron.openstack.common.loopingcall [-] 
task <bound method LbaasAgentManager.run_periodic_tasks of 
<neutron_lbaas.services.loadbalancer.agent.agent_manager.LbaasAgentManager 
object at 0x274dfd0>> run outlasted interval by 50.10 sec
2015-06-11 03:04:34.366 21274 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager [-] Unable to retrieve 
ready devices
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent 
call last):
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_manager.py,
 line 152, in sync_state
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager ready_instances = 
set(self.plugin_rpc.get_ready_devices())
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_api.py,
 line 36, in get_ready_devices
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager return 
cctxt.call(self.context, 'get_ready_devices', host=self.host)
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py, line 156, in 
call
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager retry=self.retry)
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/oslo_messaging/transport.py, line 90, in 
_send
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager timeout=timeout, 
retry=retry)
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
350, in send
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager retry=retry)
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
339, in _send
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager result = 
self._waiter.wait(msg_id, timeout)
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
243, in wait
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager message = 
self.waiters.get(msg_id, timeout=timeout)
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
149, in get
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager 'to message ID %s' 
% msg_id)
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager MessagingTimeout: Timed 
out waiting for a reply to message ID 73130a6bb5444f259dbf810cfb1003b3
2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager


Configure an lbaasv2 setup: loadbalancer, listener, member, pool, healthmonitor.

Then check the lbaasv2 and lbaas logs:
 /var/log/neutron/lbaasv2-agent.log
 /var/log/neutron/lbaas-agent.log


lbaasv2
kilo
rhel7.1 
openstack-neutron-lbaas-2015.1.0-3.el7ost.noarch
python-neutron-lbaas-2015.1.0-3.el7ost.noarch

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464241

Title:
  Lbaasv2 command logs not seen

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I am testing both correct and incorrect lbaasv2 deletion.
  Even when a command fails, nothing about it shows up in
  /var/log/neutron/lbaasv2-agent.log.

  However, the lbaas (not lbaasv2) agent log is being updated with information
  and contains errors.

  2015-06-11 03:03:34.352 21274 WARNING neutron.openstack.common.loopingcall 
[-] task <bound method LbaasAgentManager.run_periodic_tasks of 
<neutron_lbaas.services.loadbalancer.agent.agent_manager.LbaasAgentManager 
object at 0x274dfd0>> run outlasted interval by 50.10 sec
  2015-06-11 03:04:34.366 21274 ERROR 

[Yahoo-eng-team] [Bug 1464230] [NEW] lbaasv2 Health monitor limitation

2015-06-11 Thread Alex Syafeyev
Public bug reported:

We cannot configure a second health monitor on an OpenStack deployment.
 
We cannot create an lbaasv2 health monitor without attaching it to a pool:
 [ neutron lbaas-healthmonitor-create --delay 3 --max-retries 3 --type HTTPS 
--timeout 9 
   neutron lbaas-healthmonitor-create: error: argument --pool is required]

We cannot assign a health monitor to a pool that already has another health monitor configured:
[ neutron lbaas-healthmonitor-create --delay 3 --max-retries 3 --type HTTPS 
--timeout 9 --pool 10240065-efc0-4390-abd8-28266ccbaa37
Only one health monitor per pool allowed.  Pool 
10240065-efc0-4390-abd8-28266ccbaa37 is already using Health Monitor 
972914b1-c670-4d4b-aaef-ec2c41568ecb]

We cannot change the health monitor of a pool because we cannot create one
first.

We cannot create an additional health monitor without creating an unneeded
pool.


Kilo 
rhel7.1
python-neutron-2015.1.0-1.el7ost.noarch
openstack-neutron-openvswitch-2015.1.0-1.el7ost.noarch
python-neutronclient-2.4.0-1.el7ost.noarch
openstack-neutron-2015.1.0-1.el7ost.noarch
openstack-neutron-ml2-2015.1.0-1.el7ost.noarch
openstack-neutron-lbaas-2015.1.0-3.el7ost.noarch
openstack-neutron-fwaas-2015.1.0-3.el7ost.noarch
openstack-neutron-common-2015.1.0-1.el7ost.noarch
python-neutron-lbaas-2015.1.0-3.el7ost.noarch
python-neutron-fwaas-2015.1.0-3.el7ost.noarch

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464230

Title:
  lbaasv2 Health monitor limitation

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We cannot configure a second health monitor on an OpenStack deployment.
   
  We cannot create an lbaasv2 health monitor without attaching it to a pool:
   [ neutron lbaas-healthmonitor-create --delay 3 --max-retries 3 --type HTTPS 
--timeout 9 
 neutron lbaas-healthmonitor-create: error: argument --pool is required]

  We cannot assign a health monitor to a pool that already has another health monitor configured:
  [ neutron lbaas-healthmonitor-create --delay 3 --max-retries 3 --type HTTPS 
--timeout 9 --pool 10240065-efc0-4390-abd8-28266ccbaa37
  Only one health monitor per pool allowed.  Pool 
10240065-efc0-4390-abd8-28266ccbaa37 is already using Health Monitor 
972914b1-c670-4d4b-aaef-ec2c41568ecb]

  We cannot change the health monitor of a pool because we cannot create
  one first.

  We cannot create an additional health monitor without creating an
  unneeded pool.

  
  Kilo 
  rhel7.1
  python-neutron-2015.1.0-1.el7ost.noarch
  openstack-neutron-openvswitch-2015.1.0-1.el7ost.noarch
  python-neutronclient-2.4.0-1.el7ost.noarch
  openstack-neutron-2015.1.0-1.el7ost.noarch
  openstack-neutron-ml2-2015.1.0-1.el7ost.noarch
  openstack-neutron-lbaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-fwaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-common-2015.1.0-1.el7ost.noarch
  python-neutron-lbaas-2015.1.0-3.el7ost.noarch
  python-neutron-fwaas-2015.1.0-3.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464230/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461433] Re: Automatically generated admin password is not complex enough

2015-06-11 Thread Tristan Cacqueray
This is a class D type of bug ( https://security.openstack.org/vmt-
process.html#incident-report-taxonomy ).

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461433

Title:
  Automatically generated admin password is not complex enough

Status in OpenStack Compute (Nova):
  New
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  When performing actions such as creating, evacuating, rebuilding or
  rescuing instances, or updating an instance's admin password, and the
  user does not provide an admin password, generate_password() in
  utils.py is used to generate one. generate_password() currently uses
  two password symbol groups, default and easier: the default symbol
  group contains numbers, upper case letters and lower case letters,
  while the easier symbol group contains only numbers and upper case
  letters. The generated password is not complex enough and can cause
  security problems.

  One possible solution is to add a new symbol group,
  STRONGER_PASSWORD_SYMBOLS, which contains numbers, upper case letters,
  lower case letters and also special characters such as
  `~!@#$%^*()-_=+ and space, and then add a new configuration option,
  generate_strong_password = True. When this option is set, nova will
  generate passwords using the STRONGER_PASSWORD_SYMBOLS symbol group
  and with a longer password length. If the option is not set, the
  password will be generated using the default symbol group and default
  length.
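
  A minimal sketch of the proposed generation path (the symbol group and
  helper below follow the proposal above and are illustrative, not nova's
  actual implementation):

    import random

    # Proposed symbol group (illustrative); ambiguous characters such as
    # 0/O and 1/l are left out, as in nova's existing groups.
    STRONGER_PASSWORD_SYMBOLS = ('23456789'
                                 'ABCDEFGHJKLMNPQRSTUVWXYZ'
                                 'abcdefghijkmnopqrstuvwxyz'
                                 '`~!@#$%^*()-_=+ ')

    def generate_strong_password(length=24):
        # random.SystemRandom() is backed by os.urandom(), so the result
        # is suitable for secrets.
        rng = random.SystemRandom()
        return ''.join(rng.choice(STRONGER_PASSWORD_SYMBOLS)
                       for _ in range(length))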

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461433/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464394] [NEW] Improve OVS agent init method

2015-06-11 Thread Gal Sagie
Public bug reported:

Currently the agent init method takes the configuration object cfg.CONF,
but there is also a method which creates a specific configuration
dictionary from it and calls the agent with that structure.

One approach should be picked and the code aligned to it, as the current
situation causes confusion and makes the code hard to track.

** Affects: neutron
 Importance: Undecided
 Assignee: Gal Sagie (gal-sagie)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Gal Sagie (gal-sagie)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464394

Title:
  Improve OVS agent init method

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently the agent init method takes the configuration object
  cfg.CONF, but there is also a method which creates a specific
  configuration dictionary from it and calls the agent with that
  structure.

  One approach should be picked and the code aligned to it, as the
  current situation causes confusion and makes the code hard to track.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464394/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464387] [NEW] VPNaaS: Provide local tunnel IP for service

2015-06-11 Thread Paul Michali
Public bug reported:

To support the case where the VPN service is provided outside of the
Neutron router (appliance, VM, separate S/W, H/W, etc), the following
changes should be made:

- Add a field to VPN service table for the local tunnel IP
- When the VPN service GET REST API is called, return the IP address
- When creating a VPN service, populate the database field with the GW IP of 
the Neutron router (for current implementations).
- Update UTs for this functionality.

This IP information could be used by orchestration tools when setting up
two ends of a VPN IPSec connection.

** Affects: neutron
 Importance: Undecided
 Assignee: Paul Michali (pcm)
 Status: New


** Tags: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Paul Michali (pcm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464387

Title:
  VPNaaS: Provide local tunnel IP for service

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  To support the case where the VPN service is provided outside of the
  Neutron router (appliance, VM, separate S/W, H/W, etc), the following
  changes should be made:

  - Add a field to VPN service table for the local tunnel IP
  - When the VPN service GET REST API is called, return the IP address
  - When creating a VPN service, populate the database field with the GW IP of 
the Neutron router (for current implementations).
  - Update UTs for this functionality.

  This IP information could be used by orchestration tools when setting
  up two ends of a VPN IPSec connection.
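
  A minimal sketch of the first change above, assuming an alembic
  migration; the column name, width and revision ids are hypothetical
  placeholders rather than the final design:

    from alembic import op
    import sqlalchemy as sa

    # placeholder revision identifiers for this sketch
    revision = 'add_vpn_local_tunnel_ip'
    down_revision = None

    def upgrade():
        # add the local tunnel IP to the VPN service table
        op.add_column('vpnservices',
                      sa.Column('external_ip', sa.String(64), nullable=True))

    def downgrade():
        op.drop_column('vpnservices', 'external_ip')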

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464387/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461431] Re: Enable admin password complexity verification

2015-06-11 Thread Tristan Cacqueray
Agreed on class D type of bug.

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461431

Title:
  Enable admin password complexity verification

Status in OpenStack Compute (Nova):
  New
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  When performing actions such as creating, evacuating, rebuilding or
  rescuing instances, or updating an instance's admin password, the
  complexity of the user-provided admin password is not verified. This
  can cause security problems.

  One solution would be to add a configuration option,
  using_complex_admin_password = True. If this option is set in the
  configuration file by the administrator, Nova will perform password
  complexity checks, with the check standards following general IT
  industry practice; if the provided admin password is not complex
  enough, an exception will be thrown. If the option is not set in the
  configuration file, the complexity check will be skipped.
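
  A minimal sketch of such a check, assuming a simple character-class
  policy (illustrative only; the real rules would be configurable):

    import re

    def is_complex_enough(password, min_length=8):
        # Require the minimum length plus at least one lower case letter,
        # one upper case letter, one digit and one special character.
        classes = [r'[a-z]', r'[A-Z]', r'[0-9]', r'[^a-zA-Z0-9]']
        return (len(password) >= min_length and
                all(re.search(c, password) for c in classes))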

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464424] [NEW] Separate openstack_dashboard jasmine tests

2015-06-11 Thread Tyr Johanson
Public bug reported:

We are working on making horizon/horizon usable and testable separately
from horizon/openstack_dashboard (and vice versa).

Currently, code that is brought in using the
horizon/openstack_dashboard/enabled/* files can be tested separately
from horizon, however once the API files move from horizon to
openstack_dashboard, we will have code common to multiple dashboards,
but not included by any of the enabled/* files.

This will likely require that openstack_dashboard has its own version of
jasmine_tests.py (similar to the horizon/horizon version)

There are two patches related to this bug:
https://review.openstack.org/#/c/184543 - which isolates the API files into 
/horizon/horizon/static/openstack_dashboard_apis
https://review.openstack.org/#/c/186295/ - which moves the django templates 
used for openstack_dashboard

Finally, the API files themselves will need to move into
/horizon/openstack_dashboard, but will require this bug to be fixed to
enable that move. Otherwise, we have nowhere to include the API test
files (unlike other tests that can be brought in by a specific dashboard
using an /enabled/* file).

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1464424

Title:
  Separate openstack_dashboard jasmine tests

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We are working on making horizon/horizon usable and testable
  separately from horizon/openstack_dashboard (and vice versa).

  Currently, code that is brought in using the
  horizon/openstack_dashboard/enabled/* files can be tested separately
  from horizon, however once the API files move from horizon to
  openstack_dashboard, we will have code common to multiple dashboards,
  but not included by any of the enabled/* files.

  This will likely require that openstack_dashboard has its own version
  of jasmine_tests.py (similar to the horizon/horizon version)

  There are two patches related to this bug:
  https://review.openstack.org/#/c/184543 - which isolates the API files into 
/horizon/horizon/static/openstack_dashboard_apis
  https://review.openstack.org/#/c/186295/ - which moves the django templates 
used for openstack_dashboard

  Finally, the API files themselves will need to move into
  /horizon/openstack_dashboard, but will require this bug to be fixed to
  enable that move. Otherwise, we have nowhere to include the API test
  files (unlike other tests that can be brought in by a specific
  dashboard using an /enabled/* file).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1464424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441062] Re: sort the attribute in __all__ as the import sequence in the api/__init__.py

2015-06-11 Thread tinytmy
** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1441062

Title:
  sort the attribute in __all__ as the import sequence in the
  api/__init__.py

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  In api/__init__.py the imports are ordered alphabetically
  
(https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/__init__.py#L34-L48).
  I think it would be better if the attributes in __all__ were sorted in
  the same order as the imports
  
(https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/__init__.py#L51-L67).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1441062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1455102] Re: some test jobs broken by tox 2.0 not passing env variables

2015-06-11 Thread Thierry Carrez
** Changed in: python-glanceclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1455102

Title:
  some test jobs broken by tox 2.0 not passing env variables

Status in OpenStack Magnum:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in OpenStack Gate Infrastructure:
  Confirmed
Status in Python client library for Cinder:
  Fix Committed
Status in Python client library for Glance:
  Fix Released
Status in python-manilaclient:
  Fix Committed
Status in Python client library for Neutron:
  Fix Committed
Status in Python client library for Nova:
  In Progress
Status in Python client library for Swift:
  In Progress
Status in OpenStack Object Storage (Swift):
  In Progress

Bug description:
  Tox 2.0 brings environment isolation, which is good. Except a lot of
  test jobs assume passing critical variables via environment (like
  credentials).

  There are multiple ways to fix this:

  1. stop using environment to pass things, instead use a config file of
  some sort

  2. allow explicit pass through via -
  http://tox.readthedocs.org/en/latest/config.html#confval-passenv=SPACE-SEPARATED-GLOBNAMES

  This bug mostly exists for tracking patches, and ensuring that people
  realize there is a larger change here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/magnum/+bug/1455102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464116] [NEW] 'network_id' and 'cidr' should be unique int table 'Subnet'

2015-06-11 Thread shihanzhang
Public bug reported:

'network_id' and 'cidr' should be unique in table 'Subnet', so a unique
constraint should be added!

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464116

Title:
  'network_id' and 'cidr' should be unique int table 'Subnet'

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  'network_id' and 'cidr' should be unique in table 'Subnet', so a unique
  constraint should be added!
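
  A minimal sketch of such a constraint, assuming an alembic migration
  (the constraint name is an illustrative placeholder):

    from alembic import op

    def upgrade():
        # a network must not contain two subnets with the same CIDR
        op.create_unique_constraint(
            'uniq_subnets0network_id0cidr',   # assumed constraint name
            'subnets',
            ['network_id', 'cidr'])

    def downgrade():
        op.drop_constraint('uniq_subnets0network_id0cidr', 'subnets',
                           type_='unique')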

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464116/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 986980] Re: No documentation about token backends

2015-06-11 Thread Lana
** Changed in: openstack-manuals
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/986980

Title:
  No documentation about token backends

Status in OpenStack Identity (Keystone):
  Confirmed
Status in OpenStack Manuals:
  Fix Released

Bug description:
  Documentation lacks information about token backends: the backends
  available, their options, memcached configuration for the memcached backend,...

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/986980/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464034] Re: Kilo glance image create stuck in saving and queued state

2015-06-11 Thread Alfred Shen
Further investigation found that the upload failure was due to
http_proxy. It prevented glanceclient from accessing the API port, which
happened to be on a public IP. Using private IPs might mask this issue.

** Changed in: glance
 Assignee: (unassigned) => Alfred Shen (alfredcs)

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1464034

Title:
  Kilo glance image create stuck in saving and queued state

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  In the Kilo release, glance image-create with or without --os-image-api-
  version leaves the image in queued status, with the following debug
  output. There is no error in keystone-api/registry.log. Other glance
  CLIs work OK.

  Similar symptom was reported in
  https://bugs.launchpad.net/bugs/1146830 but seemed to be with diff
  cause.

  # dpkg -l | grep glance
  ii  glance  1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Image Registry and Delivery Service - Daemons
  ii  glance-api  1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Image Registry and Delivery Service - API
  ii  glance-common   1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Image Registry and Delivery Service - Common
  ii  glance-registry 1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Image Registry and Delivery Service - Registry
  ii  python-glance   1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Image Registry and Delivery Service - Python library
  ii  python-glance-store 0.4.0-0ubuntu1~cloud0 
all  OpenStack Image Service store library - Python 2.x
  ii  python-glanceclient 1:0.15.0-0ubuntu1~cloud0  
all  Client library for Openstack glance server.


  $  glance --debug --os-image-api-version 2 image-create --file 
/tmp/cirros-0.3.4-x86_64-disk.img  --disk-format qcow2 --container-format bare  
--progress 
  curl -i -X GET -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 
'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: 
{SHA1}252d465682ed3a3d092b0ce05954601afb56c7df' -H 'Content-Type: 
application/octet-stream' http://3.39.89.230:9292/v2/schemas/image

  HTTP/1.0 200 OK
  content-length: 3867
  via: 1.0 sjc1intproxy01 (squid/3.1.10)
  x-cache: MISS from sjc1intproxy01
  x-cache-lookup: MISS from sjc1intproxy01:8080
  connection: keep-alive
  date: Wed, 10 Jun 2015 21:41:24 GMT
  content-type: application/json; charset=UTF-8
  x-openstack-request-id: req-req-d0430e7a-5e36-466d-8d79-afefe9737695

  {additionalProperties: {type: string}, name: image, links:
  [{href: {self}, rel: self}, {href: {file}, rel:
  enclosure}, {href: {schema}, rel: describedby}],
  properties: {status: {enum: [queued, saving, active,
  killed, deleted, pending_delete], type: string,
  description: Status of the image (READ-ONLY)}, tags: {items:
  {type: string, maxLength: 255}, type: array, description:
  List of strings related to the image}, kernel_id: {pattern:
  ^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-
  fA-F]){4}-([0-9a-fA-F]){12}$, type: string, description: ID of
  image stored in Glance that should be used as the kernel when booting
  an AMI-style image., is_base: false}, container_format: {enum:
  [null, ami, ari, aki, bare, ovf, ova], type: [null,
  string], description: Format of the container}, min_ram:
  {type: integer, description: Amount of ram (in MB) required to
  boot image.}, ramdisk_id: {pattern: ^([0-9a-fA-F]){8}-([0-9a-
  fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$,
  type: string, description: ID of image stored in Glance that
  should be used as the ramdisk when booting an AMI-style image.,
  is_base: false}, locations: {items: {required: [url,
  metadata], type: object, properties: {url: {type:
  string, maxLength: 255}, metadata: {type: object}}}, type:
  array, description: A set of URLs to access the image file kept
  in external store}, visibility: {enum: [public, private],
  type: string, description: Scope of image accessibility},
  updated_at: {type: string, description: Date and time of the
  last image modification (READ-ONLY)}, owner: {type: [null,
  string], description: Owner of the image, maxLength: 255},
  file: {type: string, description: (READ-ONLY)}, min_disk:
  {type: integer, description: Amount of disk space (in GB)
  required to boot image.}, virtual_size: {type: [null,
  integer], description: Virtual size of image in bytes (READ-
  ONLY)}, id: {pattern: ^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-
  fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$, type: string,
  description: An identifier for the image}, size: {type:
  [null, integer], description: Size of image 

[Yahoo-eng-team] [Bug 1456871] Re: objects.InstanceList.get_all(context, ['metadata', 'system_metadata']) return error can't locate strategy for %s %s % (cls, key)

2015-06-11 Thread zhaoyim
@Sylvain Bauza Sorry! please ignore the trace. I updated the issue
description and the problem should belong to nova component. Please have
a look. Thanks a lot!

** Changed in: nova
   Status: Invalid => New

** Description changed:

- In our code we invoke the code as following:
+ When invoke
  
  objects.InstanceList.get_all(context, ['metadata','system_metadata'])
  
- It throw the error said:
- 2015-05-19 21:52:31.222 22676 TRACE nova.scheduler.ibm.ego.ego_manager 
strat = self._get_strategy(loader.strategy)
- 2015-05-19 21:52:31.222 22676 TRACE nova.scheduler.ibm.ego.ego_manager   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/orm/interfaces.py, line 452, in 
_get_strategy
- 2015-05-19 21:52:31.222 22676 TRACE nova.scheduler.ibm.ego.ego_manager 
cls = self._strategy_lookup(*key)
- 2015-05-19 21:52:31.222 22676 TRACE nova.scheduler.ibm.ego.ego_manager   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/orm/interfaces.py, line 507, in 
_strategy_lookup
- 2015-05-19 21:52:31.222 22676 TRACE nova.scheduler.ibm.ego.ego_manager 
raise Exception(can't locate strategy for %s %s % (cls, key))
- 2015-05-19 21:52:31.222 22676 TRACE nova.scheduler.ibm.ego.ego_manager 
Exception: can't locate strategy for class 
'sqlalchemy.orm.properties.ColumnProperty' (('lazy', 'joined'),)
- 2015-05-19 21:52:31.222 22676 TRACE nova.scheduler.ibm.ego.ego_manager
  
- The original used: db.instance_get_all(context,
- ['metadata','system_metadata'])  can worked well.
- 
- Did some investigation and found the nova/objects/instance.py  function
- _expected_cols(expected_attrs):
+ Then found the nova/objects/instance.py  function  
_expected_cols(expected_attrs):
  
  will return list ['metadata','system_metadata', 'extra',
  'extra.flavor'], then in the db query it throw the error: can't locate
  strategy for class 'sqlalchemy.orm.properties.ColumnProperty'
  (('lazy', 'joined'),)
  
  Could anyone can help have a look? Thanks!

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1456871

Title:
  objects.InstanceList.get_all(context, ['metadata','system_metadata'])
  return error can't locate strategy for %s %s % (cls, key)

Status in OpenStack Compute (Nova):
  New

Bug description:
  When invoking

  objects.InstanceList.get_all(context, ['metadata','system_metadata'])

  the nova/objects/instance.py function _expected_cols(expected_attrs)
  returns the list ['metadata', 'system_metadata', 'extra',
  'extra.flavor'], and the db query then throws the error: can't locate
  strategy for class 'sqlalchemy.orm.properties.ColumnProperty'
  (('lazy', 'joined'),)

  Could anyone help have a look? Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1456871/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464461] [NEW] delete action always cause error ( in kilo)

2015-06-11 Thread koji
Public bug reported:

When I perform any delete action (delete router, delete network, etc.)
in a Japanese environment, I always get an error page.

horizon error logs:
-
Traceback (most recent call last):
  File /usr/lib/python2.7/site-packages/django/core/handlers/base.py, line 
132, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 36, in dec
return view_func(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 52, in dec
return view_func(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 36, in dec
return view_func(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 84, in dec
return view_func(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/django/views/generic/base.py, line 
71, in view
return self.dispatch(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/django/views/generic/base.py, line 
89, in dispatch
return handler(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/horizon/tables/views.py, line 223, in 
post
return self.get(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/horizon/tables/views.py, line 159, in 
get
handled = self.construct_tables()
  File /usr/lib/python2.7/site-packages/horizon/tables/views.py, line 150, in 
construct_tables
handled = self.handle_table(table)
  File /usr/lib/python2.7/site-packages/horizon/tables/views.py, line 125, in 
handle_table
handled = self._tables[name].maybe_handle()
  File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 1640, in 
maybe_handle
return self.take_action(action_name, obj_id)
  File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 1482, in 
take_action
response = action.multiple(self, self.request, obj_ids)
  File /usr/lib/python2.7/site-packages/horizon/tables/actions.py, line 302, 
in multiple
return self.handle(data_table, request, object_ids)
  File /usr/lib/python2.7/site-packages/horizon/tables/actions.py, line 828, 
in handle
exceptions.handle(request, ignore=ignore)
  File /usr/lib/python2.7/site-packages/horizon/exceptions.py, line 364, in 
handle
six.reraise(exc_type, exc_value, exc_traceback)
  File /usr/lib/python2.7/site-packages/horizon/tables/actions.py, line 817, 
in handle
(self._get_action_name(past=True), datum_display))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe3 in position 0: ordinal 
not in range(128)
-

It occurs in Japanese, Korean, Chinese, French and German, and does not
occur in English and Spanish.
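
A minimal Python 2 sketch of the failure mode (illustrative values, not
Horizon's actual code): a translated, UTF-8 encoded byte string is mixed
with a unicode object, which triggers an implicit ascii decode.

    # -*- coding: utf-8 -*-
    past_action = 'ルーターの削除'   # UTF-8 byte string; first byte is 0xe3
    datum_display = u'router1'        # unicode object
    # Formatting a unicode template with a non-ascii byte string raises:
    # UnicodeDecodeError: 'ascii' codec can't decode byte 0xe3 in position 0
    message = u'%s: %s' % (past_action, datum_display)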

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1464461

Title:
  delete action always cause error ( in kilo)

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I perform any delete action (delete router, delete network, etc.)
  in a Japanese environment, I always get an error page.

  horizon error logs:
  -
  Traceback (most recent call last):
File /usr/lib/python2.7/site-packages/django/core/handlers/base.py, line 
132, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 36, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 52, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 36, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 84, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/django/views/generic/base.py, line 
71, in view
  return self.dispatch(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/django/views/generic/base.py, line 
89, in dispatch
  return handler(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/tables/views.py, line 223, 
in post
  return self.get(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/tables/views.py, line 159, 
in get
  handled = self.construct_tables()
File /usr/lib/python2.7/site-packages/horizon/tables/views.py, line 150, 
in construct_tables
  handled = self.handle_table(table)
File /usr/lib/python2.7/site-packages/horizon/tables/views.py, line 125, 
in handle_table
  handled = self._tables[name].maybe_handle()
File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 1640, 
in maybe_handle
  return 

[Yahoo-eng-team] [Bug 1464465] [NEW] RFE: binding_type improvements

2015-06-11 Thread Ian Wells
Public bug reported:

I'm trying to get a spec approved for Nova that would make it tell us
what binding types are available.  This would seem like a golden
opportunity to clean the interface up.

At the moment Neutron works out a binding_type it will tell Nova (which
Nova must support or else).  Nova is amazingly psychic, and knows that
for many binding types, some specific widget will be created with a
certain specific name.  I suggest that - to simplify the Nova code, and
eventually remove the complexity of its synchronised guesswork - we do
as much work as possible in Neutron, and we ensure that we pass all this
information over in binding_details.

In some cases this can lead to something that might be considered a
security issue, or at least a bit dangerous, like Neutron telling Nova
it should do something stupid with eth0, or create a new socket at
/etc/passwd.  We may want to put a few restrictions that both Nova and
Neutron respect, like device prefixes or file locations, for safety's
sake.

I propose that we don't change the binding code that exists, so that the
next version of Nova remains compatible with the current version of
Neutron, but we do create new binding types that use explicit exchange
instead of spooky action at a distance.  If Neutron is given a
preference list of types, it will use the most preferred one, and Nova
can offer the new types in preference to the old ones.  In the future,
we can deprecate and remove the old binding types.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464465

Title:
  RFE: binding_type improvements

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I'm trying to get a spec approved for Nova that would make it tell us
  what binding types are available.  This would seem like a golden
  opportunity to clean the interface up.

  At the moment Neutron works out a binding_type it will tell Nova
  (which Nova must support or else).  Nova is amazingly psychic, and
  knows that for many binding types, some specific widget will be
  created with a certain specific name.  I suggest that - to simplify
  the Nova code, and eventually remove the complexity of its
  synchronised guesswork - we do as much work as possible in Neutron,
  and we ensure that we pass all this information over in
  binding_details.

  In some cases this can lead to something that might be considered a
  security issue, or at least a bit dangerous, like Neutron telling Nova
  it should do something stupid with eth0, or create a new socket at
  /etc/passwd.  We may want to put a few restrictions that both Nova and
  Neutron respect, like device prefixes or file locations, for safety's
  sake.

  I propose that we don't change the binding code that exists, so that
  the next version of Nova remains compatible with the current version
  of Neutron, but we do create new binding types that use explicit
  exchange instead of spooky action at a distance.  If Neutron is given
  a preference list of types, it will use the most preferred one, and
  Nova can offer the new types in preference to the old ones.  In the
  future, we can deprecate and remove the old binding types.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464465/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462239] Re: libvirt: thrown NotImplementedError when running nova root-password

2015-06-11 Thread Markus Zoeller
@javeme (javaloveme):

Thanks for reporting this issue. There is a blueprint which intends 
to implement that [1]. To avoid redundant work on the same task I'll
close this issue. Feel free to offer the assignee of the blueprint your
assistance.

[1] https://blueprints.launchpad.net/nova/+spec/libvirt-set-admin-
password

** Changed in: nova
   Status: In Progress => Invalid

** Changed in: nova
 Assignee: javeme (javaloveme) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1462239

Title:
  libvirt: thrown NotImplementedError when running nova root-password

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  libvirt: thrown NotImplementedError when running nova root-password

  As we all know, the command “nova root-password” is used to change the
  root password for a server, but the libvirt driver does not support this
  feature at present.

  The following is the description of the error:

  1. run nova root-password on the controller.

  2. a 501 error is returned. nova-api log:

  2015-06-05 14:34:11.588 3993 INFO nova.api.openstack.wsgi 
[req-0dc1e6b2-e700-4dfb-a388-c3ddbc0db7e3 None] HTTP exception thrown: Unable 
to set password on instance
  2015-06-05 14:34:11.589 3993 DEBUG nova.api.openstack.wsgi 
[req-0dc1e6b2-e700-4dfb-a388-c3ddbc0db7e3 None] Returning 501 to user: Unable 
to set password on instance _call_ 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py:1217
  2015-06-05 14:34:11.591 3993 INFO nova.osapi_compute.wsgi.server 
[req-0dc1e6b2-e700-4dfb-a388-c3ddbc0db7e3 None] 172.40.0.2 POST /v
  
2/8e3d0869585d486daf23865ebc85449b/servers/12d57f28-8d00-47d3-876c-ebdba7145ddf/action
 HTTP/1.1 status: 501 len: 282 time: 3.0169549

  3. NotImplementedError is thrown. nova-compute log:

  2015-06-05 14:34:10.654 13446 WARNING nova.compute.manager 
[req-0dc1e6b2-e700-4dfb-a388-c3ddbc0db7e3 None] [instance: 12d57f28-8d00-
  47d3-876c-ebdba7145ddf] set_admin_password is not implemented by this driver 
or guest instance.
  2015-06-05 14:34:11.532 13446 ERROR oslo.messaging.rpc.dispatcher 
[req-0dc1e6b2-e700-4dfb-a388-c3ddbc0db7e3 ] Exception during messa
  ge handling: set_admin_password is not implemented by this driver or guest 
instance.
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispat
  cher.py, line 133, in _dispatch_and_reply
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispat
  cher.py, line 176, in _dispatch
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispat
  cher.py, line 122, in _do_dispatch
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py,
  line 403, in decorated_function
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
/usr/lib/python2.6/site-packages/nova/exception.py, line
  88, in wrapped
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher payload)
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
/usr/lib/python2.6/site-packages/nova/openstack/common/exc
  utils.py, line 68, in _exit_
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
/usr/lib/python2.6/site-packages/nova/exception.py, line
  71, in wrapped
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py,
  line 284, in decorated_function
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
/usr/lib/python2.6/site-packages/nova/openstack/common/exc
  utils.py, line 68, in _exit_
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py,
  line 270, in decorated_function
  2015-06-05 14:34:11.532 13446 TRACE oslo.messaging.rpc.dispatcher File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py,
  line 337, in decorated_function
  

[Yahoo-eng-team] [Bug 1464290] [NEW] UnboundLocalError in neutron/db/l3_db.py (Icehouse)

2015-06-11 Thread Florian Ermisch
Public bug reported:

Hi,

While working on my SaltStack modules (outdated versions: [0] and [1]) for
managing subnets in Icehouse Neutron, I managed to cause this error in
neutron-server on Ubuntu trusty:

2015-06-11 16:49:33.636 10605 DEBUG neutron.openstack.common.rpc.amqp 
[req-f47d6292-09bb-4f03-999b-cd1458c3828b None] UNIQUE_ID is 
5fada601c2ca49c5a777f690b0426a45. _add_unique_id 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/amqp.py:342
2015-06-11 16:49:33.641 10605 ERROR neutron.api.v2.resource 
[req-f47d6292-09bb-4f03-999b-cd1458c3828b None] add_router_interface failed
2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py, line 87, in 
resource
2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py, line 200, in 
_handle_action
2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource return 
getattr(self._plugin, name)(*arg_list, **kwargs)
2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/db/l3_db.py, line 362, in 
add_router_interface
2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource 'tenant_id': 
subnet['tenant_id'],
2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource UnboundLocalError: 
local variable 'subnet' referenced before assignment
2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource 
2015-06-11 16:49:33.650 10605 INFO neutron.wsgi 
[req-f47d6292-09bb-4f03-999b-cd1458c3828b None] 192.168.122.85 - - [11/Jun/2015 
16:49:33] PUT 
/v2.0/routers/8afd9ee7-dd37-47f3-b2e1-42805e984a61/add_router_interface.json 
HTTP/1.1 500 296 0.065534

Installed neutron-packages:

root@controller:~# dpkg -l neutron\* | grep ^ii
ii  neutron-common  1:2014.1.4-0ubuntu2   
all  Neutron is a virtual network service for Openstack - common
ii  neutron-dhcp-agent  1:2014.1.4-0ubuntu2   
all  Neutron is a virtual network service for Openstack - DHCP agent
ii  neutron-l3-agent1:2014.1.4-0ubuntu2   
all  Neutron is a virtual network service for Openstack - l3 agent
ii  neutron-metadata-agent  1:2014.1.4-0ubuntu2   
all  Neutron is a virtual network service for Openstack - metadata agent
ii  neutron-plugin-ml2  1:2014.1.4-0ubuntu2   
all  Neutron is a virtual network service for Openstack - ML2 plugin
ii  neutron-plugin-openvswitch-agent1:2014.1.4-0ubuntu2   
all  Neutron is a virtual network service for Openstack - Open vSwitch 
plug
in agent

 ii  neutron-server  1:2014.1.4-0ubuntu2   
all  Neutron is a virtual network service for Openstack - server

More details tomorrow, when I've added some more debugging to my code.

Regards, Florian

[0] 
https://github.com/fraunhoferfokus/openstack-formula/blob/master/_modules/neutron.py
[1] 
https://github.com/fraunhoferfokus/openstack-formula/blob/master/_states/neutron_subnet.py
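
A minimal sketch of the failure pattern the trace points at (a hypothetical
reduction, not the actual l3_db.py code): 'subnet' is only bound on one
branch but is referenced unconditionally later.

    def add_router_interface(interface_info, lookup_subnet, lookup_port):
        if 'subnet_id' in interface_info:
            subnet = lookup_subnet(interface_info['subnet_id'])  # only binding
        elif 'port_id' in interface_info:
            port = lookup_port(interface_info['port_id'])
            # 'subnet' is never assigned on this path
        # later code references 'subnet' unconditionally; on the port_id-only
        # path this raises UnboundLocalError
        return {'tenant_id': subnet['tenant_id']}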

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: in-stable-icehouse

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464290

Title:
  UnboundLocalError in neutron/db/l3_db.py (Icehouse)

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hi,

  While working on my SaltStack modules (outdated versions: [0] and [1]) for
  managing subnets in Icehouse Neutron, I managed to cause this error in
  neutron-server on Ubuntu trusty:

  2015-06-11 16:49:33.636 10605 DEBUG neutron.openstack.common.rpc.amqp 
[req-f47d6292-09bb-4f03-999b-cd1458c3828b None] UNIQUE_ID is 
5fada601c2ca49c5a777f690b0426a45. _add_unique_id 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/amqp.py:342
  2015-06-11 16:49:33.641 10605 ERROR neutron.api.v2.resource 
[req-f47d6292-09bb-4f03-999b-cd1458c3828b None] add_router_interface failed
  2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py, line 87, in 
resource
  2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-06-11 16:49:33.641 10605 TRACE neutron.api.v2.resource   File 

[Yahoo-eng-team] [Bug 1450438] Re: loopingcall: if a time drift to the future occurs, all timers will be blocked

2015-06-11 Thread Elena Ezhova
** Project changed: oslo-incubator => oslo.service

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1450438

Title:
  loopingcall: if a time drift to the future occurs, all timers will be
  blocked

Status in OpenStack Compute (Nova):
  Triaged
Status in Library for running OpenStack services:
  New

Bug description:
  Due to the fact that loopingcall.py uses time.time for recording wall-
  clock time which is not guaranteed to be monotonic, if a time drift to
  the future occurs, and then gets corrected, all the timers will get
  blocked until the actual time reaches the moment of the original
  drift.

  This can be pretty bad if the interval is not insignificant - in
  Nova's case, all services use FixedIntervalLoopingCall for their
  heartbeat periodic tasks - if a drift is on the order of magnitude of
  several hours, no heartbeats will happen.

  DynamicLoopingCall is affected by this as well, because it relies on
  eventlet, which would also use a non-monotonic time.time function for
  its internal timers.

  Solving this will require looping calls to start using a monotonic
  timer (for python 2.7 there is a monotonic package).

  Also, all services that want to use timers and avoid this issue should
  do something like

import monotonic

hub = eventlet.get_hub()
hub.clock = monotonic.monotonic

  immediately after calling eventlet.monkey_patch()

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1450438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464291] [NEW] migration scripts ignore mysq-engine option

2015-06-11 Thread Vladislav Belogrudov
Public bug reported:

When running neutron-db-manage upgrade --mysql-engine SOME-ENGINE-HERE head,
the migration scripts still create tables with the default engine, InnoDB.

** Affects: neutron
 Importance: Undecided
 Assignee: Vladislav Belogrudov (vlad-belogrudov)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Vladislav Belogrudov (vlad-belogrudov)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464291

Title:
  migration scripts ignore mysq-engine option

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  When running neutron-db-manage upgrade --mysql-engine SOME-ENGINE-HERE
  head, the migration scripts still create tables with the default
  engine, InnoDB.
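
  A minimal sketch of the expected behaviour (illustrative; the table and
  the way the engine value reaches the migration are hypothetical, not
  neutron's actual migration code):

    from alembic import op
    import sqlalchemy as sa

    # e.g. the value supplied via --mysql-engine SOME-ENGINE-HERE
    MYSQL_ENGINE = 'NDBCLUSTER'

    def upgrade():
        op.create_table(
            'example_table',                            # placeholder name
            sa.Column('id', sa.String(36), primary_key=True),
            mysql_engine=MYSQL_ENGINE)                  # not always InnoDB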

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464291/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464298] [NEW] default hash function and hash format changed in OpenSSH 6.8 (ssh-keygen)

2015-06-11 Thread Victor Stinner
Public bug reported:

The following tests fail on Fedora 22 because ssh-keygen output changed
in OpenSSH 6.8:

* nova.tests.unit.api.ec2.test_cloud.CloudTestCase.test_import_key_pair
* nova.tests.unit.compute.test_keypairs.ImportKeypairTestCase.test_success_ssh

Previously, OpenSSH used MD5 and colon-separated hex to display a
fingerprint. It now uses SHA256 encoded as base64:


 * Add FingerprintHash option to ssh(1) and sshd(8), and equivalent
   command-line flags to the other tools to control algorithm used
   for key fingerprints. The default changes from MD5 to SHA256 and
   format from hex to base64.

http://www.openssh.com/txt/release-6.8

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1464298

Title:
  default hash function and hash format changed in OpenSSH 6.8 (ssh-
  keygen)

Status in OpenStack Compute (Nova):
  New

Bug description:
  The following tests fail on Fedora 22 because ssh-keygen output
  changed in OpenSSH 6.8:

  * nova.tests.unit.api.ec2.test_cloud.CloudTestCase.test_import_key_pair
  * nova.tests.unit.compute.test_keypairs.ImportKeypairTestCase.test_success_ssh

  Previously, OpenSSH used MD5 and colon-separated hex to display a
  fingerprint. It now uses SHA256 encoded as base64:

  
   * Add FingerprintHash option to ssh(1) and sshd(8), and equivalent
 command-line flags to the other tools to control algorithm used
 for key fingerprints. The default changes from MD5 to SHA256 and
 format from hex to base64.
  
  http://www.openssh.com/txt/release-6.8
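
  For reference, a hedged sketch of the two formats (an illustrative
  helper, not the actual nova test code; pub_key_b64 is the base64 body
  of an ssh public key line):

    import base64
    import hashlib

    def fingerprints(pub_key_b64):
        blob = base64.b64decode(pub_key_b64)
        md5_hex = hashlib.md5(blob).hexdigest()
        old_style = ':'.join(md5_hex[i:i + 2]
                             for i in range(0, len(md5_hex), 2))
        new_style = 'SHA256:' + base64.b64encode(
            hashlib.sha256(blob).digest()).decode('ascii').rstrip('=')
        return old_style, new_style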

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1464298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp