[Yahoo-eng-team] [Bug 1528091] [NEW] -1 is not a valid value in some Security Group Rules creation forms

2015-12-20 Thread LIU Yulong
Public bug reported:

"-1 is not a valid value in some Security Group Rules creation form"

For instance, Custom ICMP Rule:
-1 is rejected, which is inconsistent with its help text:
"Enter a value for ICMP type in the range (-1: 255)"
"Enter a value for ICMP code in the range (-1: 255)"

Submitting -1 raises the exception "Not a valid port number".
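
A minimal sketch (not the actual Horizon patch) of a Django-style validator
that would match the help text; the helper name clean_icmp_field is
hypothetical:

    from django.core.exceptions import ValidationError

    def clean_icmp_field(value, field_label):
        """Accept -1 (meaning "all") or 0..255 for an ICMP type/code field."""
        if value is None:
            return value
        if not (-1 <= value <= 255):
            raise ValidationError(
                "Enter a value for %s in the range (-1: 255)" % field_label)
        return value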

** Affects: horizon
 Importance: Undecided
 Assignee: LIU Yulong (dragon889)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => LIU Yulong (dragon889)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1528091

Title:
  -1 is not a valid value in some Security Group Rules creation forms

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  "-1 is not a valid value in some Security Group Rules creation form"

  For instance, Custom ICMP Rule:
  -1 is rejected, which is inconsistent with its help text:
  "Enter a value for ICMP type in the range (-1: 255)"
  "Enter a value for ICMP code in the range (-1: 255)"

  Submitting -1 raises the exception "Not a valid port number".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1528091/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280522] Re: Replace assertEqual(None, *) with assertIsNone in tests

2015-12-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/258993
Committed: 
https://git.openstack.org/cgit/openstack/python-ceilometerclient/commit/?id=a63f15272c61d5b3ab54d00da1209e4d6bf6bcad
Submitter: Jenkins
Branch:master

commit a63f15272c61d5b3ab54d00da1209e4d6bf6bcad
Author: Shuquan Huang 
Date:   Thu Dec 17 21:06:01 2015 +0800

Replace assertEqual(None, *) with assertIsNone in tests

Replace assertEqual(None, *) with assertIsNone in tests to have
more clear messages in case of failure.

Change-Id: I36db8bdcb67b8cc0a3bf1f063b4a7b42955b100b
Closes-bug: #1280522


** Changed in: python-ceilometerclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280522

Title:
  Replace assertEqual(None, *) with assertIsNone in tests

Status in Anchor:
  Fix Released
Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in heat:
  Fix Released
Status in heat-cfntools:
  In Progress
Status in Heat Translator:
  In Progress
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in Manila:
  Fix Released
Status in networking-cisco:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in os-client-config:
  Fix Released
Status in python-barbicanclient:
  In Progress
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  In Progress
Status in python-cueclient:
  In Progress
Status in python-designateclient:
  In Progress
Status in python-glanceclient:
  Fix Released
Status in python-heatclient:
  In Progress
Status in python-ironicclient:
  In Progress
Status in python-manilaclient:
  In Progress
Status in python-neutronclient:
  Fix Released
Status in python-openstackclient:
  In Progress
Status in python-troveclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Sahara:
  Fix Released
Status in Solum:
  In Progress
Status in tempest:
  Fix Released
Status in Trove:
  Fix Released
Status in tuskar:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Replace assertEqual(None, *) with assertIsNone in tests to have
  more clear messages in case of failure.
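
  A minimal illustration of the change (the variable name is arbitrary):

      # Before: on failure the message reads "None != <actual>", which
      # obscures the intent of the assertion.
      self.assertEqual(None, result)

      # After: the failure message states explicitly that result was
      # expected to be None.
      self.assertIsNone(result)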

To manage notifications about this bug go to:
https://bugs.launchpad.net/anchor/+bug/1280522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1512564] Re: Create stack failed from Horizon UI due to No content found in the "files" section

2015-12-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/241700
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=d1c3b4787b792fe6e20a3fd6e692015fd576a5f1
Submitter: Jenkins
Branch:master

commit d1c3b4787b792fe6e20a3fd6e692015fd576a5f1
Author: dixiaoli 
Date:   Wed Nov 4 17:32:29 2015 +

Add handle get_file when launch stack from horizon

when get_file is contained in template, the stack create/update/preview
will fail due to No content found in the "files" section.
So added handle get_file code.

Change-Id: I6f125f9e5f3f53f630ab0d4f3f00631e6850e905
Closes-Bug: #1512564


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1512564

Title:
  Create stack failed from Horizon UI due to No content found in the
  "files" section

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Prepare a template. The content of the template file is as follows:

  heat_template_version: 2013-05-23

  description: >
    Test for get_files

  parameters:
    flavor:
      type: string
      description: the flavor that will be used by the server
      default: m1.medium
    image:
      type: string
      description: the image that will be used by the server
      default: fedora

  resources:
    server1:
      type: OS::Nova::Server
      properties:
        flavor: { get_param: flavor }
        image: { get_param: image }
        user_data_format: RAW
        user_data:
          get_file: https://9.5.125.106:8080/my_test.sh

  When creating a stack from the Horizon UI, it fails with the error:

  Error: ERROR: HT-5DEAEB8 Property error: :
  resources.server1.properties.user_data: : HT-EEF1009 No content found
  in the "files" section for get_file path:
  https://9.5.125.106:8080/my_test.sh

  Horizon did not handle the get_file work, so it should be added.
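
  A rough sketch (not the merged Horizon change) of how the "files" section
  could be populated before the create call, assuming python-heatclient's
  template_utils helper; the file name, stack name and heat_client object
  below are hypothetical:

      from heatclient.common import template_utils

      # Resolves get_file references and returns their contents keyed by
      # path/URL, plus the parsed template.
      tpl_files, template = template_utils.get_template_contents(
          template_file='my_template.yaml')

      fields = {
          'stack_name': 'test_stack',
          'template': template,
          # e.g. maps the my_test.sh URL to its downloaded contents
          'files': dict(tpl_files),
      }
      heat_client.stacks.create(**fields)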

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1512564/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483955] Re: Horizon homepage shows internal server error

2015-12-20 Thread Richard Jones
** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1483955

Title:
  Horizon homepage shows internal server error

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I created a devstack today with Sahara installed. When I log in, Horizon
  reports a 500 Internal Server Error, as below:
  Internal Server Error

  The server encountered an internal error or misconfiguration and was
  unable to complete your request.

  Please contact the server administrator at [no address given] to
  inform them of the time this error occurred, and the actions you
  performed just before this error.

  More information about this error may be available in the server error
  log.

  


  Apache/2.4.7 (Ubuntu) Server at 127.0.0.1 Port 80

  I checked the horizon_error.log and it showed:

  2015-08-12 03:25:56.402471 Internal Server Error: /admin/
  2015-08-12 03:25:56.402502 Traceback (most recent call last):
  2015-08-12 03:25:56.402507   File 
"/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 
137, in get_response
  2015-08-12 03:25:56.402511 response = response.render()
  2015-08-12 03:25:56.402515   File 
"/usr/local/lib/python2.7/dist-packages/django/template/response.py", line 103, 
in render
  2015-08-12 03:25:56.402518 self.content = self.rendered_content
  2015-08-12 03:25:56.402522   File 
"/usr/local/lib/python2.7/dist-packages/django/template/response.py", line 80, 
in rendered_content
  2015-08-12 03:25:56.402527 content = template.render(context)
  2015-08-12 03:25:56.402531   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 148, in 
render
  2015-08-12 03:25:56.402535 return self._render(context)
  2015-08-12 03:25:56.402538   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 142, in 
_render
  2015-08-12 03:25:56.402542 return self.nodelist.render(context)
  2015-08-12 03:25:56.402546   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 844, in 
render
  2015-08-12 03:25:56.402549 bit = self.render_node(node, context)
  2015-08-12 03:25:56.402553   File 
"/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 80, in 
render_node
  2015-08-12 03:25:56.402556 return node.render(context)
  2015-08-12 03:25:56.402559   File 
"/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py", line 
126, in render
  2015-08-12 03:25:56.402563 return compiled_parent._render(context)
  2015-08-12 03:25:56.402566   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 142, in 
_render
  2015-08-12 03:25:56.402570 return self.nodelist.render(context)
  2015-08-12 03:25:56.402573   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 844, in 
render
  2015-08-12 03:25:56.402577 bit = self.render_node(node, context)
  2015-08-12 03:25:56.402580   File 
"/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 80, in 
render_node
  2015-08-12 03:25:56.402583 return node.render(context)
  2015-08-12 03:25:56.402587   File 
"/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py", line 
65, in render
  2015-08-12 03:25:56.402590 result = block.nodelist.render(context)
  2015-08-12 03:25:56.402593   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 844, in 
render
  2015-08-12 03:25:56.402597 bit = self.render_node(node, context)
  2015-08-12 03:25:56.402600   File 
"/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 80, in 
render_node
  2015-08-12 03:25:56.402604 return node.render(context)
  2015-08-12 03:25:56.402607   File 
"/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py", line 
65, in render
  2015-08-12 03:25:56.402628 result = block.nodelist.render(context)
  2015-08-12 03:25:56.402632   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 844, in 
render
  2015-08-12 03:25:56.402636 bit = self.render_node(node, context)
  2015-08-12 03:25:56.402639   File 
"/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 80, in 
render_node
  2015-08-12 03:25:56.402643 return node.render(context)
  2015-08-12 03:25:56.402646   File 
"/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py", line 
150, in render
  2015-08-12 03:25:56.402650 return template.render(context)
  2015-08-12 03:25:56.402653   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 148, in 
render
  2015-08-12 03:25:56.402656 return self._render(context)
  2015-08-12 03:25:56.402660   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", 

[Yahoo-eng-team] [Bug 1528081] [NEW] Unexpected API Error

2015-12-20 Thread yangbo
Public bug reported:


2015-12-20 04:22:59.286 DEBUG nova.api.openstack.wsgi 
[req-a2cd2bde-867b-4034-b801-992f587b2c68 admin admin] Calling method '>' _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:798
2015-12-20 04:22:59.288 INFO nova.osapi_compute.wsgi.server 
[req-a2cd2bde-867b-4034-b801-992f587b2c68 admin admin] 10.109.194.142 "GET 
/v2.1/ HTTP/1.1" status: 200 len: 656 time: 0.0067410
2015-12-20 04:22:59.503 DEBUG nova.api.openstack.wsgi 
[req-eadce1c4-d582-4bf5-bb2f-c964404f3abc admin admin] Calling method '>' _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:798
2015-12-20 04:22:59.523 ERROR nova.api.openstack.extensions 
[req-eadce1c4-d582-4bf5-bb2f-c964404f3abc admin admin] Unexpected exception in 
API method
2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/hypervisors.py", line 88, in index
2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions for hyp in 
compute_nodes])
2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 3463, in service_get_by_compute_host
2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions return 
objects.Service.get_by_compute_host(context, host_name)
2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
180, in wrapper
2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions result = 
fn(cls, context, *args, **kwargs)
2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/objects/service.py", line 219, in get_by_compute_host
2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions db_service 
= db.service_get_by_compute_host(context, host)
2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/api.py", line 157, in service_get_by_compute_host
2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions 
use_slave=use_slave)
2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 531, in 
service_get_by_compute_host
2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions raise 
exception.ComputeHostNotFound(host=host)
2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions 
ComputeHostNotFound: Compute host controller1 could not be found.
2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions
2015-12-20 04:22:59.524 INFO nova.api.openstack.wsgi 
[req-eadce1c4-d582-4bf5-bb2f-c964404f3abc admin admin] HTTP exception thrown: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.

2015-12-20 04:22:59.525 DEBUG nova.api.openstack.wsgi 
[req-eadce1c4-d582-4bf5-bb2f-c964404f3abc admin admin] Returning 500 to user: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
 __call__ 
/opt/stack/nova/nova/api/openstack/wsgi.py:1180


Nova log is attached

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1528081

Title:
   Unexpected API Error

Status in OpenStack Compute (nova):
  New

Bug description:
  
  2015-12-20 04:22:59.286 DEBUG nova.api.openstack.wsgi 
[req-a2cd2bde-867b-4034-b801-992f587b2c68 admin admin] Calling method '>' _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:798
  2015-12-20 04:22:59.288 INFO nova.osapi_compute.wsgi.server 
[req-a2cd2bde-867b-4034-b801-992f587b2c68 admin admin] 10.109.194.142 "GET 
/v2.1/ HTTP/1.1" status: 200 len: 656 time: 0.0067410
  2015-12-20 04:22:59.503 DEBUG nova.api.openstack.wsgi 
[req-eadce1c4-d582-4bf5-bb2f-c964404f3abc admin admin] Calling method '>' _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:798
  2015-12-20 04:22:59.523 ERROR nova.api.openstack.extensions 
[req-eadce1c4-d582-4bf5-bb2f-c964404f3abc admin admin] Unexpected exception in 
API method
  2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2015-12-20 04:22:59.523 6543 ERROR nova.api.openstack.extensions   File 

[Yahoo-eng-team] [Bug 1475536] Re: Uploading in ceph using swift CLI giving error

2015-12-20 Thread Richard Jones
Not a Horizon bug, as far as I can tell.

** Project changed: horizon => swift

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1475536

Title:
  Uploading in ceph using swift CLI giving error

Status in OpenStack Object Storage (swift):
  New

Bug description:
  swift -A http://172.18.59.201/auth/v1.0 -U user -K key upload armaan-bucket3 xyz.txt

  When the above command is executed, it gives the error:
  Object HEAD failed: http://172.18.59.201/swift/v1/armaan-bucket2/xyz.txt 401 
Unauthorized

  The stat details are as below:
  swift -A http://172.18.59.201/auth/v1.0 -U user -K key stat armaan-bucket3

  The above command reports:
   Account: v1
  Container: armaan-bucket3
Objects: 0
  Bytes: 0
   Read ACL: .r:*
  Write ACL: Armaan
Sync To:
   Sync Key:
 Keep-Alive: timeout=5, max=100
 Server: Apache/2.4.7 (Ubuntu)
  X-Container-Bytes-Used-Actual: 0
 Connection: Keep-Alive
   Content-Type: text/plain; charset=utf-8

  I have been chasing this issue for the last week, so a quick solution
  would be appreciated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/swift/+bug/1475536/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525132] Re: doc error about inject files to new build

2015-12-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/259566
Committed: 
https://git.openstack.org/cgit/openstack/api-site/commit/?id=7451643f1dbe0cbe37621cb9112ff97a90bd6301
Submitter: Jenkins
Branch:master

commit 7451643f1dbe0cbe37621cb9112ff97a90bd6301
Author: jichenjc 
Date:   Fri Dec 18 11:24:38 2015 +0800

Remove incorrect descriptions for 'injection'

nova allows binary injection, the description is incorrect.

Change-Id: I54a3aeb8d0c4e62e57e26a6d0867f1ca1101d804
Partial-Bug: #1515222
Closes-Bug: #1525132


** Changed in: openstack-api-site
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1525132

Title:
  doc error about inject files to new build

Status in OpenStack Compute (nova):
  Fix Released
Status in openstack-api-site:
  Fix Released

Bug description:
  We tell the user 'You cannot inject binary or zip files into a new build'
  in the following:
  
https://github.com/openstack/nova/blob/master/api-guide/source/server_concepts.rst
  http://developer.openstack.org/api-ref-compute-v2.1.html

  
  jichen@devstack1:/opt/stack/nova$ nova boot --file /abc.tgz=/home/jichen/cert.tgz --image 9eee793a-25e5-4f42-bd9e-b869e60d3dbd --flavor m1.micro t5

  | Property | Value |
  | OS-DCF:diskConfig | MANUAL |
  | OS-EXT-AZ:availability_zone | |
  | OS-EXT-STS:power_state | 0 |
  | OS-EXT-STS:task_state | scheduling |
  | OS-EXT-STS:vm_state | building |
  | OS-SRV-USG:launched_at | - |
  | OS-SRV-USG:terminated_at | - |
  | accessIPv4 | |
  | accessIPv6 | |
  | adminPass | KcGDaWG7SdFZ |
  | config_drive | |
  | created | 2015-12-11T09:04:33Z |
  | flavor | m1.micro (84) |
  | hostId | |
  | id | ec9b463d-0670-4bb0-8e1b-494007fc5cfc |
  | image | cirros-0.3.4-x86_64-uec (9eee793a-25e5-4f42-bd9e-b869e60d3dbd) |
  | key_name | - |
  | metadata | {} |
  | name | t5 |
  | os-extended-volumes:volumes_attached | [] |
  | os-pci:pci_devices | [] |
  | progress | 0 |
  | security_groups | default |
  | status | BUILD |
  | tenant_id | d1c5aa58af6c426492c642eb649017be |
  | updated | 2015-12-11T09:04:33Z |
  | user_id | 53a9e08a52eb4486aa4457f325e62b8a |

  jichen@devstack1:/opt/stack/nova$ nova list
  | ID | Name | Status | Task State | Power State | Networks |

[Yahoo-eng-team] [Bug 1525137] Re: doc wrong about BAK extensions of injection

2015-12-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/259567
Committed: 
https://git.openstack.org/cgit/openstack/api-site/commit/?id=426a455d3d5a517f123162631de038d763a0
Submitter: Jenkins
Branch:master

commit 426a455d3d5a517f123162631de038d763a0
Author: jichenjc 
Date:   Fri Dec 18 11:27:07 2015 +0800

Remove incorrect comments on injection BAK method

The BAK method is not correct based on existing mechanism

Change-Id: I1615379763acdf81ffa356fb53049e8ba790685b
Partial-Bug: #1515222
Closes-Bug: #1525137


** Changed in: openstack-api-site
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1525137

Title:
  doc wrong about BAK extensions of injection

Status in OpenStack Compute (nova):
  In Progress
Status in openstack-api-site:
  Fix Released

Bug description:
  I cat the following contents into a.pwd; note that the '1' in /bin/sh1 on
  the sshd line is there on purpose.
  After the following operations, I did not see the mechanism described in
  the doc:
  "For example, if the /etc/passwd file exists, it is backed up as
  /etc/passwd.bak.1246036261.5785."

  
  
https://github.com/openstack/nova/blob/master/api-guide/source/server_concepts.rst
  http://developer.openstack.org/api-ref-compute-v2.1.html

  
  $ sudo cat passwd
  root:x:0:0:root:/root:/bin/sh
  daemon:x:1:1:daemon:/usr/sbin:/bin/sh
  bin:x:2:2:bin:/bin:/bin/sh
  sys:x:3:3:sys:/dev:/bin/sh
  sync:x:4:100:sync:/bin:/bin/sync
  mail:x:8:8:mail:/var/spool/mail:/bin/sh
  proxy:x:13:13:proxy:/bin:/bin/sh
  www-data:x:33:33:www-data:/var/www:/bin/sh
  backup:x:34:34:backup:/var/backups:/bin/sh
  operator:x:37:37:Operator:/var:/bin/sh
  haldaemon:x:68:68:hald:/:/bin/sh
  dbus:x:81:81:dbus:/var/run/dbus:/bin/sh
  ftp:x:83:83:ftp:/home/ftp:/bin/sh
  nobody:x:99:99:nobody:/home:/bin/sh
  sshd:x:103:99:Operator:/var:/bin/sh1
  cirros:x:1000:1000:non-root user:/home/cirros:/bin/sh

  
  Do the following:

  jichen@devstack1:/opt/stack/nova$ nova boot --file /etc/passwd=/home/jichen/a.pwd --image 9eee793a-25e5-4f42-bd9e-b869e60d3dbd --flavor m1.micro t6

  | Property | Value |
  | OS-DCF:diskConfig | MANUAL |
  | OS-EXT-AZ:availability_zone | |
  | OS-EXT-STS:power_state | 0 |
  | OS-EXT-STS:task_state | scheduling |
  | OS-EXT-STS:vm_state | building |
  | OS-SRV-USG:launched_at | - |
  | OS-SRV-USG:terminated_at | - |
  | accessIPv4 | |
  | accessIPv6 | |
  | adminPass | 9VUVZY53nbFb |
  | config_drive | |
  | created | 2015-12-11T09:19:28Z |
  | flavor | m1.micro (84) |
  | hostId | |
  | id | 80f24559-2bd9-4709-b1a2-36709cfb3b50 |
  | image | cirros-0.3.4-x86_64-uec (9eee793a-25e5-4f42-bd9e-b869e60d3dbd) |

  
  jichen@devstack1:/opt/stack/nova$ ssh cirros@10.0.0.17
  The authenticity of host '10.0.0.17 (10.0.0.17)' can't be established.
  RSA key fingerprint is c2:44:7a:4f:61:cb:1b:95:b4:3a:49:fe:ce:dc:1e:20.
  Are you sure you want to continue connecting (yes/no)? yes
  Warning: Permanently added '10.0.0.17' (RSA) to the list of known hosts.
  cirros@10.0.0.17's password:

  
  $ cd /etc
  $ ls
  TZ cirros fstab  init.d ld.so.conf 
mtab   profileresolv.confshadow
  acpi   cirros-initgroup  inittabld.so.conf.d   
networkprotocols  screenrc   ssl
  blkid.tab  defaulthostname 

[Yahoo-eng-team] [Bug 1508126] Re: With hierarchical port binding, when vm migrating across ToR switch, ovs agent will add wrong network-related flows in ovs bridges on new host.

2015-12-20 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508126

Title:
  With hierarchical port binding, when vm migrating across ToR switch,
  ovs agent will add wrong network-related flows in ovs bridges on new
  host.

Status in neutron:
  Expired

Bug description:
  With hierarchical port binding, when a VM migrates across ToR switches, the
  VM port is first plugged into br-int on the new host. The OVS agent gets the
  port details by calling get_devices_details_list, but the bottom VLAN
  segment it receives is still the old one, because the port's host id has not
  yet been updated and the bottom VLAN segment has not been reallocated. Thus,
  the OVS agent adds wrong network-related flows to the OVS bridges.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1508126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507489] Re: manually reschedule dhcp-agent doesn't update port binding

2015-12-20 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507489

Title:
  manually reschedule dhcp-agent doesn't update port binding

Status in neutron:
  Expired

Bug description:
  We can use dhcp-agent-network-add/remove to manually reschedule a network
  between dhcp-agents, and "neutron dhcp-agent-list-hosting-net" together with
  the ip or ps commands can be used to confirm that the network was
  rescheduled to the new agent.
  But the DHCP port binding does not get updated on the DB side; after the
  network is rescheduled, "neutron port-show" still shows the old binding.

  Pre-conditions:
  2 active dhcp-agents, agent-A and agent-B;
  network net-1 is bound on agent-A; "neutron dhcp-agent-list-hosting-net" can
  be used to verify this;
  port-1 is dhcp port of net-1;
  set dhcp_agents_per_network = 1 in /etc/neutron/neutron.conf;

  steps:
  neutron dhcp-agent-network-remove AGENT-A-ID NET-1-ID ; neutron port-show 
PORT-1-ID
  [1]
  neutron dhcp-agent-network-add AGENT-B-ID NET-1-ID ; neutron port-show 
PORT-1-ID
  [2]

  expected:
  [1]:
  Field  Value
  binding:host_id  EMPTY
  binding:profile   {}
  binding:vif_details  {}
  binding:vif_type unbound
  binding:vnic_type   normal
  device_id   reserved_dhcp_port
  

  [2]:
  Field  Value
  binding:host_id  AGENT-B-HOST-ID
  binding:profile   {}
  binding:vif_details  {"port_filter": true, "ovs_hybrid_plug": true}
  binding:vif_type ovs
  binding:vnic_type   normal
  device_id   dhcpxxx(relate-to-agent-B-host)-NET-1-ID

  Actual output:
  [1]
  Field  Value
  binding:host_id  AGENT-A-HOST-ID
  binding:profile   {}
  binding:vif_details  {"port_filter": true, "ovs_hybrid_plug": true}
  binding:vif_type ovs
  binding:vnic_type   normal
  device_id   dhcpxxx(relate-to-agent-A-host)-NET-1-ID
  [2]

  Field  Value
  binding:host_id  AGENT-A-HOST-ID
  binding:profile   {}
  binding:vif_details  {"port_filter": true, "ovs_hybrid_plug": true}
  binding:vif_type ovs
  binding:vnic_type   normal
  device_id   dhcpxxx(relate-to-agent-A-host)-NET-1-ID

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524114] Re: nova-scheduler also loads deleted instances at startup

2015-12-20 Thread Tardis Xu
*** This bug is a duplicate of bug 1524421 ***
https://bugs.launchpad.net/bugs/1524421

** This bug has been marked a duplicate of bug 1524421
   Host Manager reads deleted instance info on startup

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524114

Title:
  nova-scheduler also loads deleted instances at startup

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  nova-scheduler is loading all instances (including deleted) at
  startup.

  We experienced problems when each node has >6000 deleted instances, even
  when using batches of 10 nodes.
  Each query can take several minutes and transfer several GB of data.
  This prevented nova-scheduler from connecting to rabbitmq.

  
  ###
  When nova-scheduler starts, it calls "_async_init_instance_info()", which
  does an "InstanceList.get_by_filters" in batches of 10 nodes. This uses
  "instance_get_all_by_filters_sort"; however, "Deleted instances will be
  returned by default, unless there's a filter that says otherwise".
  Adding the filter {"deleted": False} fixes the problem.
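
  A minimal sketch of the idea (assuming the _async_init_instance_info() code
  path in nova/scheduler/host_manager.py and nova's InstanceList object API):

      filters = {'host': [node.host for node in compute_nodes],
                 'deleted': False}   # skip soft-deleted instances at startup
      instances = objects.InstanceList.get_by_filters(
          context, filters, expected_attrs=[], use_slave=True)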

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1524114/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461406] Re: libvirt: missing iotune parse for LibvirtConfigGuestDisk

2015-12-20 Thread Tony Breeds
If we can find a valid consumer for this information from the domain
XML then we can add the code as a specless blueprint or similar.

** Changed in: nova
   Status: In Progress => Opinion

** Changed in: nova
 Assignee: ChangBo Guo(gcb) (glongwave) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461406

Title:
  libvirt: missing  iotune parse for  LibvirtConfigGuestDisk

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  We support instance disk IO control with iotune, e.g. an <iotune> element
  carrying a limit value such as 102400.

  We set iotune in class LibvirtConfigGuestDisk in libvirt/config.py. The
  parse_dom method does not parse the iotune options yet; that needs to be
  fixed.
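
  A rough sketch (not the actual nova change) of handling that could be added
  inside LibvirtConfigGuestDisk.parse_dom(), alongside the existing
  child-element parsing; the attribute names mirror the disk_*_bytes_sec /
  disk_*_iops_sec fields the class already emits in format_dom:

      for c in xmldoc.getchildren():
          if c.tag == 'iotune':
              for sub in c.getchildren():
                  if sub.tag == 'read_bytes_sec':
                      self.disk_read_bytes_sec = int(sub.text)
                  elif sub.tag == 'write_bytes_sec':
                      self.disk_write_bytes_sec = int(sub.text)
                  elif sub.tag == 'total_bytes_sec':
                      self.disk_total_bytes_sec = int(sub.text)
                  elif sub.tag == 'read_iops_sec':
                      self.disk_read_iops_sec = int(sub.text)
                  elif sub.tag == 'write_iops_sec':
                      self.disk_write_iops_sec = int(sub.text)
                  elif sub.tag == 'total_iops_sec':
                      self.disk_total_iops_sec = int(sub.text)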

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527993] [NEW] bgp-statistics

2015-12-20 Thread vikram.choudhary
Public bug reported:

[Existing problem]
Current BGP dynamic routing proposal [1]_ doesn't have support for getting BGP 
peer state and statistical information. Such information could be critical for 
debugging.

[Proposal]
- Existing BGP dynamic routing framework will be extended for supporting BGP 
peer state and statistical information.
- Additional display CLIs will be added.

[Benefits]
- Debugging will be strengthened.

[What is the enhancement?]
- Add debugging framework.
- Add interface for retrieving and displaying BGP peer statistics and states. 

[Related information]
[1] Dynamic Advertising Routes for Public Ranges

https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/bgp-dynamic-routing.html

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

** Summary changed:

- bgp-display
+ bgp-statistics

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527993

Title:
  bgp-statistics

Status in neutron:
  New

Bug description:
  [Existing problem]
  Current BGP dynamic routing proposal [1]_ doesn't have support for getting 
BGP peer state and statistical information. Such information could be critical 
for debugging.

  [Proposal]
  - Existing BGP dynamic routing framework will be extended for supporting BGP 
peer state and statistical information.
  - Additional display CLIs will be added.

  [Benefits]
  - Debugging will be strengthened.

  [What is the enhancement?]
  - Add debugging framework.
  - Add interface for retrieving and displaying BGP peer statistics and states. 

  [Related information]
  [1] Dynamic Advertising Routes for Public Ranges
  
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/bgp-dynamic-routing.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1527993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528000] [NEW] bgp-route-aggregation

2015-12-20 Thread vikram.choudhary
Public bug reported:

[Existing problem]
Current BGP dynamic routing proposal [1]_ doesn't have support for route
aggregation. Route aggregation could be extremely useful in reducing the size
of the routing table and improving CPU utilization [2]_.

[Proposal]
- Add route-aggregation support to BGP dynamic routing.

[Benefits]
- Route aggregation is good as it reduces the size, and slows the growth, of 
the Internet routing table.
- The amount of resources (e.g., CPU and memory) required to process routing
information is reduced and route calculation is sped up.
- Route flaps become limited in number.

[What is the enhancement?]
- Additional API, CLI and DB model will be added.

[Related information]
[1] Dynamic Advertising Routes for Public Ranges

https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/bgp-dynamic-routing.html
[2] A Framework for Inter-Domain Route Aggregation
https://tools.ietf.org/html/rfc2519

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528000

Title:
  bgp-route-aggregation

Status in neutron:
  New

Bug description:
  [Existing problem]
  Current BGP dynamic routing proposal [1]_ doesn't have support for route
  aggregation. Route aggregation could be extremely useful in reducing the
  size of the routing table and improving CPU utilization [2]_.

  [Proposal]
  - Add route-aggregation support to BGP dynamic routing.

  [Benefits]
  - Route aggregation is good as it reduces the size, and slows the growth, of 
the Internet routing table.
  - The amount of resources (e.g., CPU and memory) required to process
  routing information is reduced and route calculation is sped up.
  - Route flaps become limited in number.

  [What is the enhancement?]
  - Additional API, CLI and DB model will be added.

  [Related information]
  [1] Dynamic Advertising Routes for Public Ranges
  
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/bgp-dynamic-routing.html
  [2] A Framework for Inter-Domain Route Aggregation
  https://tools.ietf.org/html/rfc2519

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1528000/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528031] [NEW] 'NetworkNotFound' exception during listing ports

2015-12-20 Thread Andrey Pavlov
Public bug reported:

There is a problem: when I run tests in parallel, one or two of them can fail.
As I see in the logs, one thread is deleting a network while a second thread
is listing all ports, and the second thread gets a 'NetworkNotFound' exception.

Part of neutron service logs is:

2015-12-18 06:29:05.151 INFO neutron.wsgi 
[req-4d303e7d-ae31-47b5-a644-552fceeb03ef user-0a50ad96 project-ce45a55a] 
52.90.96.102 - - [18/Dec/2015 06:29:05] "DELETE 
/v2.0/networks/d2d2481a-4c20-452f-8088-6e6815694ac0.json HTTP/1.1" 204 173 
0.426808
2015-12-18 06:29:05.173 ERROR neutron.policy 
[req-a406e696-6791-4345-8b04-215ca313ea67 user-0a50ad96 project-ce45a55a] 
Policy check error while calling >!
2015-12-18 06:29:05.173 22048 ERROR neutron.policy Traceback (most recent call 
last):
2015-12-18 06:29:05.173 22048 ERROR neutron.policy   File 
"/opt/stack/neutron/neutron/policy.py", line 258, in __call__
2015-12-18 06:29:05.173 22048 ERROR neutron.policy fields=[parent_field])
2015-12-18 06:29:05.173 22048 ERROR neutron.policy   File 
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 713, in get_network
2015-12-18 06:29:05.173 22048 ERROR neutron.policy result = 
super(Ml2Plugin, self).get_network(context, id, None)
2015-12-18 06:29:05.173 22048 ERROR neutron.policy   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 385, in get_network
2015-12-18 06:29:05.173 22048 ERROR neutron.policy network = 
self._get_network(context, id)
2015-12-18 06:29:05.173 22048 ERROR neutron.policy   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_common.py", line 188, in 
_get_network
2015-12-18 06:29:05.173 22048 ERROR neutron.policy raise 
n_exc.NetworkNotFound(net_id=id)
2015-12-18 06:29:05.173 22048 ERROR neutron.policy NetworkNotFound: Network 
d2d2481a-4c20-452f-8088-6e6815694ac0 could not be found.
2015-12-18 06:29:05.173 22048 ERROR neutron.policy 
2015-12-18 06:29:05.175 INFO neutron.api.v2.resource 
[req-a406e696-6791-4345-8b04-215ca313ea67 user-0a50ad96 project-ce45a55a] index 
failed (client error): Network d2d2481a-4c20-452f-8088-6e6815694ac0 could not 
be found.
2015-12-18 06:29:05.175 INFO neutron.wsgi 
[req-a406e696-6791-4345-8b04-215ca313ea67 user-0a50ad96 project-ce45a55a] 
52.90.96.102 - - [18/Dec/2015 06:29:05] "GET 
/v2.0/ports.json?tenant_id=63f912ca152048c6a6b375784d90bd37 HTTP/1.1" 404 359 
0.311871


Answer from Kevin Benton (in mailing list):
Ah, I believe what is happening is that the network is being deleted after the 
port has been retrieved from the database during the policy check. The policy 
check retrieves the port's network to be able to enforce the network_owner 
lookup: https://github.com/openstack/neutron/blob/master/etc/policy.json#L6

So order of events seems to be:

port list API call received
ports retrieved from db
network delete request is processed
ports processed by policy engine
policy engine triggers network lookup and hits 404


This appears to be a legitimate bug. Maybe we need to find a way to cache the 
network at port retrieval time for the policy engine.
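
One possible mitigation, purely as an illustrative sketch (not a merged
neutron change), would be to tolerate the race around the network lookup
shown in the traceback above; the helper name below is hypothetical:

    from neutron.common import exceptions as n_exc

    def _lookup_parent_network(plugin, context, parent_id, parent_field):
        """Fetch the parent network for the network_owner policy rule."""
        try:
            return plugin.get_network(context, parent_id,
                                      fields=[parent_field])
        except n_exc.NetworkNotFound:
            # The network was deleted between the port query and the policy
            # evaluation; treat the port as not matching rather than letting
            # the 404 bubble up through the port listing.
            return None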

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528031

Title:
  'NetworkNotFound' exception during listing ports

Status in neutron:
  New

Bug description:
  There is a problem: when I run tests in parallel, one or two of them can
  fail. As I see in the logs, one thread is deleting a network while a second
  thread is listing all ports, and the second thread gets a 'NetworkNotFound'
  exception.

  Part of neutron service logs is:

  2015-12-18 06:29:05.151 INFO neutron.wsgi 
[req-4d303e7d-ae31-47b5-a644-552fceeb03ef user-0a50ad96 project-ce45a55a] 
52.90.96.102 - - [18/Dec/2015 06:29:05] "DELETE 
/v2.0/networks/d2d2481a-4c20-452f-8088-6e6815694ac0.json HTTP/1.1" 204 173 
0.426808
  2015-12-18 06:29:05.173 ERROR neutron.policy 
[req-a406e696-6791-4345-8b04-215ca313ea67 user-0a50ad96 project-ce45a55a] 
Policy check error while calling >!
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy Traceback (most recent 
call last):
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy   File 
"/opt/stack/neutron/neutron/policy.py", line 258, in __call__
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy fields=[parent_field])
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy   File 
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 713, in get_network
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy result = 
super(Ml2Plugin, self).get_network(context, id, None)
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 385, in get_network
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy network = 
self._get_network(context, id)
  2015-12-18 06:29:05.173 22048 ERROR neutron.policy   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_common.py", line 188, in 
_get_network
 

[Yahoo-eng-team] [Bug 1528002] [NEW] bgp-route-policing

2015-12-20 Thread vikram.choudhary
Public bug reported:

[Existing problem]
Current BGP dynamic routing proposal [1]_ doesn't support route filtering. By
default, all routes will be advertised; there is no way for an admin to
filter routes before advertisement.

[Proposal]
- Add route-policy support to BGP dynamic routing.

[Benefits]
- Adds flexibility for filtering routes per BGP peering session.
- Can provide more options for modifying route attributes, if required.

[What is the enhancement?]
- Additional API, CLI and DB model will be added.

[Related information]
[1] Dynamic Advertising Routes for Public Ranges

https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/bgp-dynamic-routing.html

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528002

Title:
  bgp-route-policing

Status in neutron:
  New

Bug description:
  [Existing problem]
  Current BGP dynamic routing proposal [1]_ doesn't support route filtering.
  By default, all routes will be advertised; there is no way for an admin to
  filter routes before advertisement.

  [Proposal]
  - Add route-policy support to BGP dynamic routing.

  [Benefits]
  - Adds flexibility for filtering routes per BGP peering session.
  - Can provide more options for modifying route attributes, if required.

  [What is the enhancement?]
  - Additional API, CLI and DB model will be added.

  [Related information]
  [1] Dynamic Advertising Routes for Public Ranges
  
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/bgp-dynamic-routing.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1528002/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527991] [NEW] SR-IOV port doesn't reach router internal port when they on the same physical server

2015-12-20 Thread Moshe Levi
Public bug reported:

When an instance with an SR-IOV port and the router port reside on the same
physical server, I can't use the floating IP to access the VM.
It works if the SR-IOV port instance is on a different physical server.

** Affects: neutron
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: New


** Tags: sriov-pci-pt

** Tags added: sriov-pci-pt

** Changed in: neutron
 Assignee: (unassigned) => Moshe Levi (moshele)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527991

Title:
  SR-IOV port doesn't reach router internal  port when they on the same
  physical server

Status in neutron:
  New

Bug description:
  When an instance with an SR-IOV port and the router port reside on the same
  physical server, I can't use the floating IP to access the VM.
  It works if the SR-IOV port instance is on a different physical server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1527991/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528114] [NEW] vmware start instance from snapshot error

2015-12-20 Thread linbing
Public bug reported:

1. I take a snapshot of a VMware instance; the snapshot image (which is a
linked clone of the snapshot) is saved on the glance server.
2. Boot from this snapshot image from the glance server; the following error
occurs:

2015-12-15 01:32:05.255 25992 DEBUG oslo_vmware.api [-] Invoking VIM API to 
read info of task: (returnval){
   value = "task-1896"
   _type = "Task"
 }. _poll_task /usr/lib/python2.7/site-packages/oslo_vmware/api.py:397
2015-12-15 01:32:05.255 25992 DEBUG oslo_vmware.api [-] Waiting for function 
_invoke_api to return. func 
/usr/lib/python2.7/site-packages/oslo_vmware/api.py:121

2015-12-15 01:32:05.285 25992 DEBUG oslo_vmware.exceptions [-] Fault 
InvalidArgument not matched. get_fault_class 
/usr/lib/python2.7/site-packages/oslo_vmware/exceptions.py:296
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall [-] in fixed 
duration looping call
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall Traceback 
(most recent call last):
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_vmware/common/loopingcall.py", line 76, 
in _inner
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall 
self.f(*self.args, **self.kw)
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_vmware/api.py", line 428, in _poll_task
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall raise 
task_ex
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall 
VimFaultException: 指定的参数错误。
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall capacity
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall Faults: 
['InvalidArgument']
2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall
2015-12-15 01:32:05.286 25992 ERROR nova.virt.vmwareapi.vmops 
[req-c466c53c-0a9c-45d7-aa78-c8812b4021a2 4b7fde8604c24e919e46b68fdf50b5a5 
b0eab665ecd94e86885e03027ab90528 - - -] [instance: 
bea53465-ac4f-40f4-9937-f99024a8075d] Extending virtual disk failed with error: 
指定的参数错误。
capacity

3. I tracked the error in nova/virt/vmwareapi/vmops.py:

    def spawn():
        self._use_disk_image_as_linked_clone(vm_ref, vi)
            -> self._extend_if_required()
                -> self._extend_virtual_disk()

    def _extend_virtual_disk():
        vmdk_extend_task = self._session._call_method(
            self._session.vim,
            "ExtendVirtualDisk_Task",
            service_content.virtualDiskManager,
            name=name,
            datacenter=dc_ref,
            newCapacityKb=requested_size,
            eagerZero=False)

My vim service WSDL is /opt/stack/vmware/wsdl/5.0/vimService.wsdl,
the vCenter version is 5.1.0 and the OpenStack version is Liberty.

4. When I skip the _extend_if_required call in
_use_disk_image_as_linked_clone, the boot succeeds.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware

** Tags added: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1528114

Title:
  vmware start instance from snapshot error

Status in OpenStack Compute (nova):
  New

Bug description:
  1. I take a snapshot of a VMware instance; the snapshot image (which is a
  linked clone of the snapshot) is saved on the glance server.
  2. Boot from this snapshot image from the glance server; the following
  error occurs:

  2015-12-15 01:32:05.255 25992 DEBUG oslo_vmware.api [-] Invoking VIM API to 
read info of task: (returnval){
 value = "task-1896"
 _type = "Task"
   }. _poll_task /usr/lib/python2.7/site-packages/oslo_vmware/api.py:397
  2015-12-15 01:32:05.255 25992 DEBUG oslo_vmware.api [-] Waiting for function 
_invoke_api to return. func 
/usr/lib/python2.7/site-packages/oslo_vmware/api.py:121

  2015-12-15 01:32:05.285 25992 DEBUG oslo_vmware.exceptions [-] Fault 
InvalidArgument not matched. get_fault_class 
/usr/lib/python2.7/site-packages/oslo_vmware/exceptions.py:296
  2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall [-] in 
fixed duration looping call
  2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall Traceback 
(most recent call last):
  2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_vmware/common/loopingcall.py", line 76, 
in _inner
  2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall 
self.f(*self.args, **self.kw)
  2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_vmware/api.py", line 428, in _poll_task
  2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall raise 
task_ex
  2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall 
VimFaultException: 指定的参数错误。
  2015-12-15 01:32:05.285 25992 ERROR 

[Yahoo-eng-team] [Bug 1528003] [NEW] bgp-dragent-hosting-multiple-speakers

2015-12-20 Thread vikram.choudhary
Public bug reported:

[Existing problem]
The number of BGP speakers a BGP driver can host may vary. For instance, Ryu
can support only 1 BGP speaker while Quagga can host multiple. In the current
BGP dynamic routing implementation [1]_, the BGP DrAgent and DrScheduler
cannot adjust themselves to the driver's capabilities, which might be
required for effective scheduling.

[Proposal]
There could be 2 ways for achieving this:
1. The admin can hard-code the support information in the configuration file,
and it could be read by the BGP DrAgent and DrScheduler during start-up.
2. A new interface can be exposed by the BGP DrAgent to the DrScheduler,
through which the DrScheduler retrieves this information during start-up.

[Benefits]
- Effective scheduling.

[What is the enhancement?]
- Configuration file changes. [Proposal-1]
- New interface between BGP DrAgent and DrScheduler will be designed. 
[Proposal-2]

[Related information]
[1] Dynamic Advertising Routes for Public Ranges

https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/bgp-dynamic-routing.html

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528003

Title:
  bgp-dragent-hosting-multiple-speakers

Status in neutron:
  New

Bug description:
  [Existing problem]
  The number of BGP speakers a BGP driver can host may vary. For instance,
  Ryu can support only 1 BGP speaker while Quagga can host multiple. In the
  current BGP dynamic routing implementation [1]_, the BGP DrAgent and
  DrScheduler cannot adjust themselves to the driver's capabilities, which
  might be required for effective scheduling.

  [Proposal]
  There could be 2 ways for achieving this:
  1. The admin can hard-code the support information in the configuration
  file, and it could be read by the BGP DrAgent and DrScheduler during
  start-up.
  2. A new interface can be exposed by the BGP DrAgent to the DrScheduler,
  through which the DrScheduler retrieves this information during start-up.

  [Benefits]
  - Effective scheduling.

  [What is the enhancement?]
  - Configuration file changes. [Proposal-1]
  - New interface between BGP DrAgent and DrScheduler will be designed. 
[Proposal-2]

  [Related information]
  [1] Dynamic Advertising Routes for Public Ranges
  
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/bgp-dynamic-routing.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1528003/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1268439] Re: range method is not same in py3.x and py2.x

2015-12-20 Thread Steve Martinelli
** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1268439

Title:
  range method is not same in py3.x and py2.x

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Triaged
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in OpenStack Identity (keystone):
  Invalid
Status in neutron:
  Fix Released
Status in python-ceilometerclient:
  Fix Released
Status in python-neutronclient:
  Invalid
Status in python-swiftclient:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress

Bug description:
  In py3.x, range behaves like xrange in py2.x.
  In py3.x, if you want to get a list, you must use:
  list(range(value))

  I reviewed the code and found that many places use range, including in for
  loops; in a py3.x environment this can cause errors, so we must fix this
  issue.
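
  A minimal illustration (variable names are arbitrary):

      # Python 2: range() returns a list; Python 3: range() returns a lazy
      # range object, so materialize it when a real list is required.
      values = list(range(5))   # [0, 1, 2, 3, 4] on both Python 2 and 3

      # Plain iteration needs no change on either version:
      for i in range(5):
          print(i)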

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1268439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276017] Re: move default rule to policy section in keystone.conf

2015-12-20 Thread Steve Martinelli
These are two disjoint entities. One is for oslo_policy (the default
rule), the other is for keystone's policy database (largely unused).

With the release of oslo.policy as its own project, this relationship
has been made clearer, and the two no longer need to be grouped together
as the initial bug report suggests.

** Changed in: keystone
   Status: In Progress => Won't Fix

** Changed in: keystone
 Assignee: Steve Martinelli (stevemar) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1276017

Title:
  move default rule to policy section in keystone.conf

Status in OpenStack Identity (keystone):
  Won't Fix
Status in oslo-incubator:
  Won't Fix

Bug description:
  The following is currently in the keystone.conf file:

  # Rule to check if no matching policy definition is found
  # FIXME(dolph): This should really be defined as [policy] default_rule
  # policy_default_rule = admin_required

  As the comment suggests, we should move the config option to the policy
  section.
  This will also impact oslo, and we should ensure the old option is still
  supported for backwards compatibility.
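
  An illustration of the move as suggested by the FIXME above (section and
  option names as quoted in the report, not necessarily what was ultimately
  shipped):

      # Old, in [DEFAULT]:
      # policy_default_rule = admin_required

      # Suggested:
      [policy]
      default_rule = admin_required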

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1276017/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1226132] Re: Keystone doesn't emit event notifications for domains

2015-12-20 Thread Steve Martinelli
this was fixed a while ago:

Notifications are now emitted upon domain create, update and delete

https://github.com/openstack/keystone/blob/master/keystone/resource/controllers.py#L132-L136

** Changed in: keystone
   Status: In Progress => Fix Committed

** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1226132

Title:
  Keystone doesn't emit event notifications for domains

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  According to the Keystone event notifications blueprint (
  https://blueprints.launchpad.net/keystone/+spec/notifications ),
  Keystone should use oslo.notify to emit notifications for its major
  resources (user/project/domain/role/...). But at the moment, Keystone
  only emits notifications for operations on users and projects. This
  may affect cloud providers that need to be notified of operations on
  other resources, for example creating or deleting a domain.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1226132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528041] [NEW] Inefficient use of db calls to get instance rules in virt/firewall.py

2015-12-20 Thread Hans Lindgren
Public bug reported:

When getting instance rules in virt/firewall.py a for loop is used to
query the db for rules belonging to each individual security group in a
list of security groups that itself comes from a separate query. See:

https://github.com/openstack/nova/blob/47e5199f67949f3cbd73114f4f45591cbc01bdd5/nova/virt/firewall.py#L349

This can be made much more efficient by querying all rules in a single
db query joined by instance.
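
A rough sketch of the idea (assuming nova's objects.SecurityGroupRuleList
offers a per-instance lookup, as its db layer does via
security_group_rule_get_by_instance):

    # Before: one db round trip per security group (see firewall.py above).
    for security_group in security_groups:
        rules = objects.SecurityGroupRuleList.get_by_security_group(
            ctxt, security_group)
        ...  # process the rules for this group

    # After: a single db round trip, joined by instance.
    rules = objects.SecurityGroupRuleList.get_by_instance(ctxt, instance)
    ...  # process all rules at once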

** Affects: nova
 Importance: Medium
 Assignee: Hans Lindgren (hanlind)
 Status: New


** Tags: db security-groups

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1528041

Title:
  Inefficient use of db calls to get instance rules in virt/firewall.py

Status in OpenStack Compute (nova):
  New

Bug description:
  When getting instance rules in virt/firewall.py a for loop is used to
  query the db for rules belonging to each individual security group in
  a list of security groups that itself comes from a separate query.
  See:

  
https://github.com/openstack/nova/blob/47e5199f67949f3cbd73114f4f45591cbc01bdd5/nova/virt/firewall.py#L349

  This can be made much more efficient by querying all rules in a single
  db query joined by instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1528041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp