[Yahoo-eng-team] [Bug 1511231] [NEW] style is inconsistent between last step and other steps of workflow

2015-10-28 Thread lumeihong
Public bug reported:

The template used is 'horizon/common/_workflow.html'; the inconsistencies
are as follows:
1) when required fields are not filled in, the error message is displayed
above the field;
2) the error message is not marked in red.

The root cause is that the style settings are wrong in
horizon.modals.js, for example:

// Add field errors.
$field = $fieldset.find('[name="' + field + '"]');
$field.closest('.form-group').addClass('error');
$.each(errors, function (index, error) {
  $field.before(
    '<span class="help-block">' +
    error + '</span>');
});

It should instead be something like this:
// Add field errors.
$field = $fieldset.find('[name="' + field + '"]');
$field.closest('.form-group').addClass('has-error');
$.each(errors, function (index, error) {
  $field.after(
    '<span class="help-block">' +
    error + '</span>');
});

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1511231

Title:
  style is inconsistent between last step and other steps of workflow

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The template used is 'horizon/common/_workflow.html'; the
  inconsistencies are as follows:
  1) when required fields are not filled in, the error message is
  displayed above the field;
  2) the error message is not marked in red.

  The root cause is that the style settings are wrong in
  horizon.modals.js, for example:

  // Add field errors.
  $field = $fieldset.find('[name="' + field + '"]');
  $field.closest('.form-group').addClass('error');
  $.each(errors, function (index, error) {
    $field.before(
      '<span class="help-block">' +
      error + '</span>');
  });

  It should instead be something like this:
  // Add field errors.
  $field = $fieldset.find('[name="' + field + '"]');
  $field.closest('.form-group').addClass('has-error');
  $.each(errors, function (index, error) {
    $field.after(
      '<span class="help-block">' +
      error + '</span>');
  });

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1511231/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1510579] Re: [Firewall] Different behavior during creating firewall with empty router in CLI and Horizon

2015-10-28 Thread Cedric Brandily
This bug affects horizon and/or python-neutronclient, but not neutron,
as neutron does what you ask it to do!

Considering that python-neutronclient is the reference client for
neutron, the behavior of Horizon should be aligned with the
python-neutronclient one.

** Tags removed: firewall neutron
** Tags added: sg-fw

** Changed in: neutron
   Status: Confirmed => Invalid

** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1510579

Title:
  [Firewall] Different behavior during creating firewall with empty
  router in CLI and Horizon

Status in OpenStack Dashboard (Horizon):
  New
Status in neutron:
  Invalid

Bug description:
  
  Steps to reproduce:
  1) Create allow icmp rule
  2) Create policy with this rule
  3) Create firewall with the policy and empty router in Horizon
  Result: new firewall is in Inactive state
  4) Create a new rule
  5) Create new policy with this rule
  6) Create firewall in cli without router
  Result: new firewall is created with all routers in tenant in Active state

  We need the same behavior in both cases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1510579/+subscriptions



[Yahoo-eng-team] [Bug 1501152] Re: Firewall Rule can be created with IPv4 Source address and IPv6 Destination address

2015-10-28 Thread Reedip
** Changed in: neutron
 Assignee: Reedip (reedip-banerjee) => (unassigned)

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501152

Title:
  Firewall Rule can be created with IPv4 Source address and IPv6
  Destination address

Status in neutron:
  Invalid

Bug description:
  reedip@reedip-VirtualBox:/opt/stack/logs$ neutron firewall-rule-create 
--source-ip-address 1.1.1.1 --destination-ip-address 1::1 --protocol tcp 
--action allow
  Created a new firewall_rule:
  +------------------------+--------------------------------------+
  | Field                  | Value                                |
  +------------------------+--------------------------------------+
  | action                 | allow                                |
  | description            |                                      |
  | destination_ip_address | 1::1                                 |
  | destination_port       |                                      |
  | enabled                | True                                 |
  | firewall_policy_id     |                                      |
  | id                     | 92abcef7-56ac-4730-bf06-cde88a2b84e8 |
  | ip_version             | 4                                    |
  | name                   |                                      |
  | position               |                                      |
  | protocol               | tcp                                  |
  | shared                 | False                                |
  | source_ip_address      | 1.1.1.1                              |
  | source_port            |                                      |
  | tenant_id              | f0e01e9a74684ed68e2f95565873c6fe     |
  +------------------------+--------------------------------------+
  reedip@reedip-VirtualBox:/opt/stack/logs$

  
  Though the firewall rule creation is allowed, it causes the firewall itself 
to go into ERROR state.
  http://paste.openstack.org/show/474796/

  It is better not to allow the creation of the firewall rule itself.
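  A guard along these lines could reject the mismatch up front (a minimal
  sketch only; the function name is hypothetical and the stdlib `ipaddress`
  module stands in for neutron's own validators):

```python
import ipaddress

def validate_rule_ip_versions(source_ip, destination_ip):
    """Raise ValueError if source and destination mix IPv4 and IPv6."""
    versions = set()
    for addr in (source_ip, destination_ip):
        if addr:  # either address may be omitted on the CLI
            # ip_network() accepts plain addresses as well as CIDR prefixes
            versions.add(ipaddress.ip_network(addr, strict=False).version)
    if len(versions) > 1:
        raise ValueError("source_ip_address and destination_ip_address "
                         "must use the same IP version")
    return versions.pop() if versions else None
```

  With such a check, the command above ('1.1.1.1' vs '1::1') would fail at
  rule creation instead of driving the firewall into ERROR state later.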

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501152/+subscriptions



[Yahoo-eng-team] [Bug 1507748] Re: Request to release 'networking-ale-omniswitch' sub-project as part of Liberty main release

2015-10-28 Thread Kyle Mestery
1.0.1 shows up on PyPI now:

https://pypi.python.org/pypi/networking-ale-omniswitch/1.0.1

** Changed in: neutron
   Status: In Progress => Fix Released

** Changed in: neutron
Milestone: None => mitaka-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507748

Title:
  Request to release 'networking-ale-omniswitch' sub-project as part of
  Liberty main release

Status in networking-ale-omniswitch:
  Confirmed
Status in neutron:
  Fix Released

Bug description:
  As per the release process of Neutron sub-project, this bug report is
  to request the Neutron release team to tag and release "networking-
  ale-omniswitch" sub-project along with Liberty main release.

  https://launchpad.net/networking-ale-omniswitch
  https://pypi.python.org/pypi/networking-ale-omniswitch

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ale-omniswitch/+bug/1507748/+subscriptions



[Yahoo-eng-team] [Bug 1510817] Re: stable/liberty branch creation request for networking-midonet

2015-10-28 Thread Kyle Mestery
Done, branch created.

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
 Assignee: (unassigned) => Kyle Mestery (mestery)

** Changed in: neutron
Milestone: None => mitaka-1

** Changed in: neutron
   Status: Confirmed => Fix Committed

** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1510817

Title:
  stable/liberty branch creation request for networking-midonet

Status in networking-midonet:
  New
Status in neutron:
  Fix Released

Bug description:
  Please cut stable/liberty branch for networking-midonet
  on commit 3943328ffa6753a88b82ac163b3c1023eee4a884.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1510817/+subscriptions



[Yahoo-eng-team] [Bug 1508786] Re: Request for Liberty release of networking-ofagent

2015-10-28 Thread Kyle Mestery
I created stable/liberty and released 1.0.2 to PyPI:

https://pypi.python.org/pypi/networking-ofagent/1.0.2

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508786

Title:
  Request for Liberty release of networking-ofagent

Status in networking-ofagent:
  New
Status in neutron:
  Fix Released

Bug description:
  This bug is to request that the neutron-release team tag and release
  networking-ofagent for Liberty, per the Sub-Project Release Process.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ofagent/+bug/1508786/+subscriptions



[Yahoo-eng-team] [Bug 1503088] Re: Deprecate max_fixed_ips_per_port

2015-10-28 Thread Nate Johnston
It was established in the first attempt that this is not a documentation
issue; the documentation is autogenerated from the neutron code.

** Project changed: openstack-manuals => neutron

** Changed in: neutron
 Assignee: Takanori Miyagishi (miyagishi-t) => Nate Johnston (nate-johnston)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503088

Title:
  Deprecate max_fixed_ips_per_port

Status in neutron:
  In Progress

Bug description:
  https://review.openstack.org/230696
  commit 37277cf4168260d5fa97f20e0b64a2efe2d989ad
  Author: Kevin Benton 
  Date:   Wed Sep 30 04:20:02 2015 -0700

  Deprecate max_fixed_ips_per_port
  
  This option does not have a clear use case since we prevent
  users from setting their own IP addresses on shared networks.
  
  DocImpact
  Change-Id: I211e87790c955ba5c3904ac27b177acb2847539d
  Closes-Bug: #1502356

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1503088/+subscriptions



[Yahoo-eng-team] [Bug 1503088] [NEW] Deprecate max_fixed_ips_per_port

2015-10-28 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

https://review.openstack.org/230696
commit 37277cf4168260d5fa97f20e0b64a2efe2d989ad
Author: Kevin Benton 
Date:   Wed Sep 30 04:20:02 2015 -0700

Deprecate max_fixed_ips_per_port

This option does not have a clear use case since we prevent
users from setting their own IP addresses on shared networks.

DocImpact
Change-Id: I211e87790c955ba5c3904ac27b177acb2847539d
Closes-Bug: #1502356

** Affects: neutron
 Importance: Undecided
 Assignee: Takanori Miyagishi (miyagishi-t)
 Status: In Progress


** Tags: autogenerate-config-docs neutron
-- 
Deprecate max_fixed_ips_per_port
https://bugs.launchpad.net/bugs/1503088
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1511198] [NEW] Decompose ovsvapp ml2 driver

2015-10-28 Thread Cedric Brandily
Public bug reported:

It's time to decompose ovsvapp ml2 driver[1] from neutron

[1] neutron.plugins.ml2.drivers.ovsvapp

** Affects: networking-vsphere
 Importance: Undecided
 Assignee: Cedric Brandily (cbrandily)
 Status: New

** Affects: neutron
 Importance: Undecided
 Assignee: Cedric Brandily (cbrandily)
 Status: New

** Changed in: networking-vsphere
 Assignee: (unassigned) => Cedric Brandily (cbrandily)

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Cedric Brandily (cbrandily)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1511198

Title:
  Decompose ovsvapp ml2 driver

Status in networking-vsphere:
  New
Status in neutron:
  New

Bug description:
  It's time to decompose ovsvapp ml2 driver[1] from neutron

  [1] neutron.plugins.ml2.drivers.ovsvapp

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-vsphere/+bug/1511198/+subscriptions



[Yahoo-eng-team] [Bug 1296953] Re: --disable-snat on tenant router raises 404

2015-10-28 Thread Hong Hui Xiao
*** This bug is a duplicate of bug 1352907 ***
https://bugs.launchpad.net/bugs/1352907

More thinking about this bug: the inconsistency might be designed this
way, judging from the tempest tests. Allow the user to create a basic
connection between a router and an external network, but leave other
operations to the admin.

The original bug in the description is resolved by Bug #1352907, so this
is marked as a duplicate of it.

** This bug has been marked a duplicate of bug 1352907
   response of normal user update the "shared" property of network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1296953

Title:
  --disable-snat on tenant router raises 404

Status in neutron:
  In Progress

Bug description:
  arosen@arosen-desktop:~/devstack$ neutron router-create aaa
  Created a new router:
  +-----------------------+--------------------------------------+
  | Field                 | Value                                |
  +-----------------------+--------------------------------------+
  | admin_state_up        | True                                 |
  | distributed           | False                                |
  | external_gateway_info |                                      |
  | id                    | add4d46b-5036-4a96-af7e-8ceb44f9ab3d |
  | name                  | aaa                                  |
  | routes                |                                      |
  | status                | ACTIVE                               |
  | tenant_id             | 4ec9de7eae7445719e8f67f2f9d78aae     |
  +-----------------------+--------------------------------------+
  arosen@arosen-desktop:~/devstack$ neutron router-gateway-set --disable-snat  
aaa public 
  The resource could not be found.

  
  2014-03-24 14:06:12.444 DEBUG neutron.policy 
[req-19762248-9964-4ad3-9ce9-de68d4cc4e49 demo 
4ec9de7eae7445719e8f67f2f9d78aae] Failed policy check for 'update_router' from 
(pid=7068) enforce /opt/stack/neutron/neutron/policy.py:381
  2014-03-24 14:06:12.444 ERROR neutron.api.v2.resource 
[req-19762248-9964-4ad3-9ce9-de68d4cc4e49 demo 
4ec9de7eae7445719e8f67f2f9d78aae] update failed
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 87, in resource
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 494, in update
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource raise 
webob.exc.HTTPNotFound(msg)
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource HTTPNotFound: The 
resource could not be found.
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource 
  2014-03-24 14:06:12.445 INFO neutron.wsgi 
[req-19762248-9964-4ad3-9ce9-de68d4cc4e49 demo 
4ec9de7eae7445719e8f67f2f9d78aae] 10.24.114.91 - - [24/Mar/2014 14:06:12] "PUT 
/v2.0/routers/add4d46b-5036-4a96-af7e-8ceb44f9ab3d.json HTTP/1.1" 404 248 
0.039626


  In the code we do:

  try:
  policy.enforce(request.context,
 action,
 orig_obj)
  except exceptions.PolicyNotAuthorized:
  # To avoid giving away information, pretend that it
  # doesn't exist
  msg = _('The resource could not be found.')
  raise webob.exc.HTTPNotFound(msg)   


  it would be nice if we were smarter about this and raised Not
  Authorized instead of Not Found.
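  The suggested improvement could look roughly like this (a sketch of the
  idea only; the exception classes and the `is_visible` callback are
  stand-ins, not neutron's actual helpers):

```python
class HTTPNotFound(Exception):
    code = 404

class HTTPForbidden(Exception):
    code = 403

def deny(context, obj, is_visible):
    """Pick the right error for a failed policy check.

    If the caller cannot even see the object, pretend it does not exist
    (404) to avoid leaking information; if the object is visible to the
    caller (e.g. it belongs to their own tenant), a 403 is more accurate
    and far less confusing than "resource not found".
    """
    if is_visible(context, obj):
        raise HTTPForbidden("Not authorized to perform this action.")
    raise HTTPNotFound("The resource could not be found.")
```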

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1296953/+subscriptions



[Yahoo-eng-team] [Bug 1446583] Re: services no longer reliably stop in stable/liberty / master

2015-10-28 Thread Davanum Srinivas (DIMS)
fixed in https://review.openstack.org/#/c/238694/

** Changed in: oslo.service
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1446583

Title:
  services no longer reliably stop in stable/liberty / master

Status in Cinder:
  Fix Released
Status in Cinder kilo series:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) kilo series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in oslo-incubator:
  Fix Released
Status in oslo.service:
  Fix Released

Bug description:
  In attempting to upgrade the upgrade branch structure to support
  stable/kilo -> master in devstack gate, we found the project could no
  longer pass Grenade testing. The reason is because pkill -g is no
  longer reliably killing off the services:

  http://logs.openstack.org/91/175391/5/gate/gate-grenade-
  dsvm/0ad4a94/logs/grenade.sh.txt.gz#_2015-04-21_03_15_31_436

  It has been seen with keystone-all and cinder-api on this patch
  series:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhlIGZvbGxvd2luZyBzZXJ2aWNlcyBhcmUgc3RpbGwgcnVubmluZ1wiIEFORCBtZXNzYWdlOlwiZGllXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mjk2MTU0NTQ2MzB9

  There were a number of changes to the oslo-incubator service.py code
  during kilo, it's unclear at this point which is the issue.

  Note: this has returned in stable/liberty / master and oslo.service,
  see comment #50 for where this reemerges.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1446583/+subscriptions



[Yahoo-eng-team] [Bug 1507050] Re: LBaaS 2.0: Operating Status Tempest Test Changes

2015-10-28 Thread min wang
** No longer affects: octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507050

Title:
  LBaaS 2.0: Operating Status Tempest Test Changes

Status in neutron:
  Confirmed

Bug description:
  SUMMARY:
  A gate job for Neutron-LBaaS failed today (20141016).  It was identified
that the failure occurred due to the introduction of new operating
statuses; namely, "DEGRADED".

  Per the following document, we will see the following valid types for 
operating_status: (‘ONLINE’, ‘OFFLINE’, ‘DEGRADED’, ‘ERROR’)
  
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/lbaas-api-and-objmodel-improvement.html

  
  LOGS/STACKTRACE:
  refer: 
http://logs.openstack.org/75/230875/12/gate/gate-neutron-lbaasv2-dsvm-listener/18155a8/console.html#_2015-10-15_23_12_27_433

  Captured traceback:
  2015-10-15 23:12:27.507 | 2015-10-15 23:12:27.462 | ~~~
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.463 | Traceback (most 
recent call last):
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.464 |   File 
"neutron_lbaas/tests/tempest/v2/api/test_listeners_admin.py", line 113, in 
test_create_listener_missing_tenant_id
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.465 | 
listener_ids=[self.listener_id, admin_listener_id])
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.466 |   File 
"neutron_lbaas/tests/tempest/v2/api/base.py", line 288, in _check_status_tree
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.467 | assert 'ONLINE' 
== load_balancer['operating_status']
  2015-10-15 23:12:27.509 | 2015-10-15 23:12:27.469 | AssertionError

  
  RECOMMENDED ACTION:
  1.  Modify the method, _check_status_tree, in
neutron_lbaas/tests/tempest/v2/api/base.py to accept "DEGRADED" as a
valid type.
  2.  Add a wait-for-status poller to check that a "DEGRADED"
operating_status would transition over to "ONLINE". A timeout Exception
should be thrown if we do not reach that state after some number of
seconds.
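  Recommendation 2 could be sketched as follows (assumptions:
`get_load_balancer` is a placeholder for whatever client call returns the
status tree, and the timing constants are arbitrary):

```python
import time

def wait_for_operating_status(get_load_balancer, lb_id, target='ONLINE',
                              timeout=300, interval=5,
                              sleep=time.sleep, clock=time.time):
    """Poll until the load balancer reaches the target operating_status.

    Transient states such as DEGRADED are tolerated while waiting; a
    TimeoutError is raised if the target is never reached in time.
    """
    deadline = clock() + timeout
    while True:
        status = get_load_balancer(lb_id)['operating_status']
        if status == target:
            return status
        if clock() >= deadline:
            raise TimeoutError('load balancer %s stuck in %s after %ss'
                               % (lb_id, status, timeout))
        sleep(interval)
```

  The `sleep`/`clock` parameters are injected only so the poller can be
unit-tested without real waiting.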

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507050/+subscriptions



[Yahoo-eng-team] [Bug 1507050] Re: LBaaS 2.0: Operating Status Tempest Test Changes

2015-10-28 Thread min wang
This bug affects the octavia scenario test as well.

** Also affects: octavia
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507050

Title:
  LBaaS 2.0: Operating Status Tempest Test Changes

Status in neutron:
  Confirmed
Status in octavia:
  New

Bug description:
  SUMMARY:
  A gate job for Neutron-LBaaS failed today (20141016).  It was identified
that the failure occurred due to the introduction of new operating
statuses; namely, "DEGRADED".

  Per the following document, we will see the following valid types for 
operating_status: (‘ONLINE’, ‘OFFLINE’, ‘DEGRADED’, ‘ERROR’)
  
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/lbaas-api-and-objmodel-improvement.html

  
  LOGS/STACKTRACE:
  refer: 
http://logs.openstack.org/75/230875/12/gate/gate-neutron-lbaasv2-dsvm-listener/18155a8/console.html#_2015-10-15_23_12_27_433

  Captured traceback:
  2015-10-15 23:12:27.507 | 2015-10-15 23:12:27.462 | ~~~
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.463 | Traceback (most 
recent call last):
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.464 |   File 
"neutron_lbaas/tests/tempest/v2/api/test_listeners_admin.py", line 113, in 
test_create_listener_missing_tenant_id
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.465 | 
listener_ids=[self.listener_id, admin_listener_id])
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.466 |   File 
"neutron_lbaas/tests/tempest/v2/api/base.py", line 288, in _check_status_tree
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.467 | assert 'ONLINE' 
== load_balancer['operating_status']
  2015-10-15 23:12:27.509 | 2015-10-15 23:12:27.469 | AssertionError

  
  RECOMMENDED ACTION:
  1.  Modify the method, _check_status_tree, in
neutron_lbaas/tests/tempest/v2/api/base.py to accept "DEGRADED" as a
valid type.
  2.  Add a wait-for-status poller to check that a "DEGRADED"
operating_status would transition over to "ONLINE". A timeout Exception
should be thrown if we do not reach that state after some number of
seconds.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507050/+subscriptions



[Yahoo-eng-team] [Bug 1511134] [NEW] Batch DVR ARP updates

2015-10-28 Thread Rawlin Peters
Public bug reported:

The L3 agent currently issues ARP updates one at a time while processing
a DVR router. Each ARP update creates an external process which has to
call the neutron-rootwrap helper while also running
"ip netns exec <namespace>" each time.

The ip command supports a "-batch <filename>" option which would be able
to batch all of the "ip neigh replace" commands into one external
process per qrouter namespace. This would greatly reduce the amount of
time it takes the L3 agent to update large numbers of ARP entries,
particularly as the number of VMs in a deployment rises.

The benefit of batching ip commands can be seen in this simple bash
example:

$ time for i in {0..50}; do sudo ip netns exec qrouter-bc38451e-0c2f-
4ad2-b76b-daa84066fefb ip a > /dev/null; done

real    0m2.437s
user    0m0.183s
sys     0m0.359s
$ for i in {0..50}; do echo a >> /tmp/ip_batch_test; done
$ time sudo ip netns exec qrouter-bc38451e-0c2f-4ad2-b76b-daa84066fefb ip -b 
/tmp/ip_batch_test > /dev/null

real    0m0.046s
user    0m0.003s
sys     0m0.007s

If just 50 arp updates are batched together, there is about a 50x
speedup. Repeating this test with 500 commands showed a speedup of 250x
(disclaimer: this was a rudimentary test just to get a rough estimate of
the performance benefit).
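The batching could be as simple as writing the per-entry commands to a
file and running a single "ip -batch" per namespace (a sketch only; the
entry format and helper names are assumptions, not the L3 agent's actual
code):

```python
import subprocess
import tempfile

def build_batch_lines(arp_entries):
    """Turn (ip, mac, device) tuples into lines of 'ip -batch' input."""
    return ['neigh replace %s lladdr %s dev %s nud permanent'
            % (ip, mac, dev) for ip, mac, dev in arp_entries]

def apply_arp_updates(namespace, arp_entries, run=subprocess.check_call):
    """Apply all ARP updates for one qrouter namespace in one process."""
    with tempfile.NamedTemporaryFile('w', suffix='.ip_batch') as f:
        f.write('\n'.join(build_batch_lines(arp_entries)) + '\n')
        f.flush()
        # one rootwrap/netns invocation instead of one per ARP entry
        run(['ip', 'netns', 'exec', namespace, 'ip', '-batch', f.name])
```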

** Affects: neutron
 Importance: Undecided
 Assignee: Rawlin Peters (rawlin-peters)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Rawlin Peters (rawlin-peters)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1511134

Title:
  Batch DVR ARP updates

Status in neutron:
  In Progress

Bug description:
  The L3 agent currently issues ARP updates one at a time while
  processing a DVR router. Each ARP update creates an external process
  which has to call the neutron-rootwrap helper while also running
  "ip netns exec <namespace>" each time.

  The ip command supports a "-batch <filename>" option which would be
  able to batch all of the "ip neigh replace" commands into one external
  process per qrouter namespace. This would greatly reduce the amount of
  time it takes the L3 agent to update large numbers of ARP entries,
  particularly as the number of VMs in a deployment rises.

  The benefit of batching ip commands can be seen in this simple bash
  example:

  $ time for i in {0..50}; do sudo ip netns exec qrouter-bc38451e-0c2f-
  4ad2-b76b-daa84066fefb ip a > /dev/null; done

  real    0m2.437s
  user    0m0.183s
  sys     0m0.359s
  $ for i in {0..50}; do echo a >> /tmp/ip_batch_test; done
  $ time sudo ip netns exec qrouter-bc38451e-0c2f-4ad2-b76b-daa84066fefb ip -b 
/tmp/ip_batch_test > /dev/null

  real    0m0.046s
  user    0m0.003s
  sys     0m0.007s

  If just 50 arp updates are batched together, there is about a 50x
  speedup. Repeating this test with 500 commands showed a speedup of
  250x (disclaimer: this was a rudimentary test just to get a rough
  estimate of the performance benefit).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1511134/+subscriptions



[Yahoo-eng-team] [Bug 1268480] Re: assertTrue(isinstance()) in tests should be replaced with assertIsInstance()

2015-10-28 Thread Bertrand Lallau
** Also affects: heat
   Importance: Undecided
   Status: New

** Changed in: heat
 Assignee: (unassigned) => Bertrand Lallau (bertrand-lallau)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1268480

Title:
  assertTrue(isinstance()) in tests should be replaced with
  assertIsInstance()

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Opinion
Status in Glance:
  Fix Released
Status in heat:
  New
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in python-novaclient:
  Fix Released

Bug description:
  Some tests use assertTrue(isinstance(A, B)) or assertEqual(type(A), B).
  The correct way is to use assertIsInstance(A, B), provided by
  testtools.
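  For illustration, the main practical difference is in the failure
  message (a standalone sketch using the stdlib unittest module, not a
  test from any of the affected projects):

```python
import unittest

class ExampleTest(unittest.TestCase):
    def test_discouraged_form(self):
        # On failure this can only report "False is not true".
        self.assertTrue(isinstance(1, int))

    def test_preferred_form(self):
        # On failure this reports the actual type of the object,
        # e.g. "1 is not an instance of <class 'str'>".
        self.assertIsInstance(1, int)
```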

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1268480/+subscriptions



[Yahoo-eng-team] [Bug 1511123] [NEW] Nova unrescue doesn't cleanup ceph rescue images

2015-10-28 Thread Tyler Wilson
Public bug reported:

It seems on a standard nova/ceph deployment an unrescue command doesn't
clean up the .rescue RBD disk created for the rescue environment.

$ rbd ls compute | grep e9f2e3f4-f095-45b4-b5f2-1b34d3776191
e9f2e3f4-f095-45b4-b5f2-1b34d3776191_disk
$ nova rescue e9f2e3f4-f095-45b4-b5f2-1b34d3776191
+-----------+--------------+
| Property  | Value        |
+-----------+--------------+
| adminPass | hXXDHUqNRn7u |
+-----------+--------------+
$ rbd ls compute | grep e9f2e3f4-f095-45b4-b5f2-1b34d3776191
e9f2e3f4-f095-45b4-b5f2-1b34d3776191_disk
e9f2e3f4-f095-45b4-b5f2-1b34d3776191_disk.rescue
$ nova unrescue e9f2e3f4-f095-45b4-b5f2-1b34d3776191
$ rbd ls compute | grep e9f2e3f4-f095-45b4-b5f2-1b34d3776191
e9f2e3f4-f095-45b4-b5f2-1b34d3776191_disk
e9f2e3f4-f095-45b4-b5f2-1b34d3776191_disk.rescue

This was tested on 2015.1.1-1.el7
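A fix would have unrescue delete the leftover <uuid>_disk.rescue volume
once the instance is switched back to its normal disk; sketched below
(the helper names and the pool object's interface are hypothetical, not
the real nova libvirt/rbd driver API):

```python
def rescue_volume_name(instance_uuid):
    """Name of the RBD volume the rescue environment boots from."""
    return '%s_disk.rescue' % instance_uuid

def cleanup_rescue_disk(rbd_pool, instance_uuid):
    """Remove the leftover rescue disk if present (idempotent)."""
    name = rescue_volume_name(instance_uuid)
    if name in rbd_pool.list():
        rbd_pool.remove(name)
```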

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1511123

Title:
  Nova unrescue doesn't cleanup ceph rescue images

Status in OpenStack Compute (nova):
  New

Bug description:
  It seems on a standard nova/ceph deployment an unrescue command
  doesn't clean up the .rescue RBD disk created for the rescue
  environment.

  $ rbd ls compute | grep e9f2e3f4-f095-45b4-b5f2-1b34d3776191
  e9f2e3f4-f095-45b4-b5f2-1b34d3776191_disk
  $ nova rescue e9f2e3f4-f095-45b4-b5f2-1b34d3776191
  +-----------+--------------+
  | Property  | Value        |
  +-----------+--------------+
  | adminPass | hXXDHUqNRn7u |
  +-----------+--------------+
  $ rbd ls compute | grep e9f2e3f4-f095-45b4-b5f2-1b34d3776191
  e9f2e3f4-f095-45b4-b5f2-1b34d3776191_disk
  e9f2e3f4-f095-45b4-b5f2-1b34d3776191_disk.rescue
  $ nova unrescue e9f2e3f4-f095-45b4-b5f2-1b34d3776191
  $ rbd ls compute | grep e9f2e3f4-f095-45b4-b5f2-1b34d3776191
  e9f2e3f4-f095-45b4-b5f2-1b34d3776191_disk
  e9f2e3f4-f095-45b4-b5f2-1b34d3776191_disk.rescue

  This was tested on 2015.1.1-1.el7

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1511123/+subscriptions



[Yahoo-eng-team] [Bug 1511114] [NEW] hypervisor support matrix is incorrect for set admin password and libvirt in liberty+

2015-10-28 Thread Matt Riedemann
Public bug reported:

Blueprint https://blueprints.launchpad.net/nova/+spec/libvirt-set-admin-
password was implemented for libvirt in the liberty release of nova and
according to the spec and code, libvirt 1.2.16 is required to use it:

http://specs.openstack.org/openstack/nova-
specs/specs/liberty/implemented/libvirt-set-admin-password.html

https://review.openstack.org/#/c/185910/

The hypervisor support matrix in nova still lists the set admin password
operation as missing for libvirt though, which is not entirely true, but
it has to be noted that libvirt 1.2.16 is required to use it.

** Affects: nova
 Importance: Low
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: documentation libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/154

Title:
  hypervisor support matrix is incorrect for set admin password and
  libvirt in liberty+

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  Blueprint https://blueprints.launchpad.net/nova/+spec/libvirt-set-
  admin-password was implemented for libvirt in the liberty release of
  nova and according to the spec and code, libvirt 1.2.16 is required to
  use it:

  http://specs.openstack.org/openstack/nova-
  specs/specs/liberty/implemented/libvirt-set-admin-password.html

  https://review.openstack.org/#/c/185910/

  The hypervisor support matrix in nova still lists the set admin
  password operation as missing for libvirt though, which is not
  entirely true, but it has to be noted that libvirt 1.2.16 is required
  to use it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/154/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511109] [NEW] Python Tests are failing on Horizon because of incomplete mocking

2015-10-28 Thread Rajat Vig
Public bug reported:

openstack_dashboard.dashboards.project.instances.tests.InstanceTests are
failing as the calls to flavor_list are not mocked on Nova.

** Affects: horizon
 Importance: Undecided
 Assignee: Rajat Vig (rajatv)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1511109

Title:
  Python Tests are failing on Horizon because of incomplete mocking

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  openstack_dashboard.dashboards.project.instances.tests.InstanceTests
  are failing as the calls to flavor_list are not mocked on Nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1511109/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511094] [NEW] REST API version history docs has wrong os-extended-volumes note for v2.3

2015-10-28 Thread Matt Riedemann
Public bug reported:

In:

http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/rest_api_version_history.rst#n47

it says:

"Exposed ``delete_on_termination`` for ``attached_volumes`` in ``os-
extended-volumes``."

But the actual key to that dict is 'volumes_attached':

http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/extended_volumes.py#n41

So the doc should be updated.

** Affects: nova
 Importance: Low
 Assignee: Matt Riedemann (mriedem)
 Status: In Progress


** Tags: api documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1511094

Title:
  REST API version history docs has wrong os-extended-volumes note for
  v2.3

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  In:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/rest_api_version_history.rst#n47

  it says:

  "Exposed ``delete_on_termination`` for ``attached_volumes`` in ``os-
  extended-volumes``."

  But the actual key to that dict is 'volumes_attached':

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/extended_volumes.py#n41

  So the doc should be updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1511094/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511061] Re: Images in inconsistent state when calls to registry fail during image deletion

2015-10-28 Thread nikhil komawar
1. I agree, the image deletion operation should be atomic.

2. Image data left behind risks filling up the storage quota and resulting
in a DoS; note that this is a denial of service but NOT an exploit, as it
depends on the operator's failure scenarios of g-api <-> registry
communication.

3. The original description only covers failure scenarios for v1, so a
check is needed for v2 as applicable.
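
One way to make the two-call sequence behave atomically is to compensate
when the second call fails. A hedged sketch, not the actual glance registry
client (the `registry` object and its methods are hypothetical):

```python
def delete_image(registry, image_id, prev_status):
    """Two-phase delete: mark the image deleted, then purge metadata.
    If the second call fails, restore the previous status so the image
    is not stranded in pending_delete/deleted with 'deleted' unset."""
    registry.set_status(image_id, 'pending_delete')
    try:
        registry.delete_metadata(image_id)  # sets 'deleted'/'deleted_at'
    except Exception:
        # Compensating action: roll back so a later retry can succeed.
        registry.set_status(image_id, prev_status)
        raise
```

This keeps further delete attempts from 404ing, since a failed purge no
longer leaves the status as pending_delete/deleted.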

** Description changed:

  [0] shows a sample image that was left in an inconsistent state when a
  call to registry failed during image deletion.
  
- Glance API makes two registry calls when deleting an image.
+ Glance v1 API makes two registry calls when deleting an image.
  The first call [1] is made to set the status of an image to 
deleted/pending_delete.
  And, the other [2], to delete the rest of the metadata, which sets 
'deleted_at' and 'deleted' fields in the db.
  
  If the first call fails, the image deletion request fails and the image is 
left intact in its previous status.
  However, if the first call succeeds and the second one fails, the image is 
left in an inconsistent state where its status is set to 
pending_delete/deleted but its 'deleted_at' and 'deleted' fields are not set.
  
  If delayed delete is turned on, these images are never collected by the 
scrubber as they won't appear as deleted images because their deleted field is 
not set. So, these images will continue to occupy storage in the backend.
  Also, further attempts at deleting these images will fail with a 404 because 
the status is already set to pending_delete/deleted.
  
  [0] http://paste.openstack.org/show/477577/
  [1]: 
https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L1115-L1116
  [2]: 
https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L1132

** Changed in: glance
   Status: New => Triaged

** Changed in: glance
   Importance: Undecided => Critical

** Changed in: glance
Milestone: None => mitaka-1

** Changed in: glance
 Assignee: (unassigned) => Hemanth Makkapati (hemanth-makkapati)

** Also affects: glance/kilo
   Importance: Undecided
   Status: New

** Also affects: glance/juno
   Importance: Undecided
   Status: New

** Also affects: glance/liberty
   Importance: Undecided
   Status: New

** Information type changed from Public to Public Security

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1511061

Title:
  Images in inconsistent state when calls to registry fail during image
  deletion

Status in Glance:
  Triaged
Status in Glance juno series:
  New
Status in Glance kilo series:
  New
Status in Glance liberty series:
  New

Bug description:
  [0] shows a sample image that was left in an inconsistent state when a
  call to registry failed during image deletion.

  Glance v1 API makes two registry calls when deleting an image.
  The first call [1] is made to set the status of an image to 
deleted/pending_delete.
  And, the other [2], to delete the rest of the metadata, which sets 
'deleted_at' and 'deleted' fields in the db.

  If the first call fails, the image deletion request fails and the image is 
left intact in its previous status.
  However, if the first call succeeds and the second one fails, the image is 
left in an inconsistent state where its status is set to 
pending_delete/deleted but its 'deleted_at' and 'deleted' fields are not set.

  If delayed delete is turned on, these images are never collected by the 
scrubber as they won't appear as deleted images because their deleted field is 
not set. So, these images will continue to occupy storage in the backend.
  Also, further attempts at deleting these images will fail with a 404 because 
the status is already set to pending_delete/deleted.

  [0] http://paste.openstack.org/show/477577/
  [1]: 
https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L1115-L1116
  [2]: 
https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L1132

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1511061/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511061] [NEW] Images in inconsistent state when calls to registry fail during image deletion

2015-10-28 Thread Hemanth Makkapati
Public bug reported:

[0] shows a sample image that was left in an inconsistent state when a
call to registry failed during image deletion.

Glance v1 API makes two registry calls when deleting an image.
The first call [1] is made to set the status of an image to 
deleted/pending_delete.
And, the other [2], to delete the rest of the metadata, which sets 'deleted_at' 
and 'deleted' fields in the db.

If the first call fails, the image deletion request fails and the image is left 
intact in its previous status.
However, if the first call succeeds and the second one fails, the image is left 
in an inconsistent state where its status is set to pending_delete/deleted 
but its 'deleted_at' and 'deleted' fields are not set.

If delayed delete is turned on, these images are never collected by the 
scrubber as they won't appear as deleted images because their deleted field is 
not set. So, these images will continue to occupy storage in the backend.
Also, further attempts at deleting these images will fail with a 404 because 
the status is already set to pending_delete/deleted.

[0] http://paste.openstack.org/show/477577/
[1]: 
https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L1115-L1116
[2]: 
https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L1132

** Affects: glance
 Importance: Critical
 Assignee: Hemanth Makkapati (hemanth-makkapati)
 Status: Triaged

** Affects: glance/juno
 Importance: Undecided
 Status: New

** Affects: glance/kilo
 Importance: Undecided
 Status: New

** Affects: glance/liberty
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1511061

Title:
  Images in inconsistent state when calls to registry fail during image
  deletion

Status in Glance:
  Triaged
Status in Glance juno series:
  New
Status in Glance kilo series:
  New
Status in Glance liberty series:
  New

Bug description:
  [0] shows a sample image that was left in an inconsistent state when a
  call to registry failed during image deletion.

  Glance v1 API makes two registry calls when deleting an image.
  The first call [1] is made to set the status of an image to 
deleted/pending_delete.
  And, the other [2], to delete the rest of the metadata, which sets 
'deleted_at' and 'deleted' fields in the db.

  If the first call fails, the image deletion request fails and the image is 
left intact in its previous status.
  However, if the first call succeeds and the second one fails, the image is 
left in an inconsistent state where its status is set to 
pending_delete/deleted but its 'deleted_at' and 'deleted' fields are not set.

  If delayed delete is turned on, these images are never collected by the 
scrubber as they won't appear as deleted images because their deleted field is 
not set. So, these images will continue to occupy storage in the backend.
  Also, further attempts at deleting these images will fail with a 404 because 
the status is already set to pending_delete/deleted.

  [0] http://paste.openstack.org/show/477577/
  [1]: 
https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L1115-L1116
  [2]: 
https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L1132

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1511061/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511036] [NEW] router status values not translated

2015-10-28 Thread Tony Dunbar
Public bug reported:

I'm using Doug's TVT system and running through his TVT test plan with
pseudo German.

Under Network->Network Topology when you click on a Router the status
values shown for the router's interfaces are not translated, screen shot
attached.

** Affects: horizon
 Importance: Undecided
 Assignee: Tony Dunbar (adunbar)
 Status: In Progress


** Tags: i18n

** Attachment added: "router.jpg"
   https://bugs.launchpad.net/bugs/1511036/+attachment/4507711/+files/router.jpg

** Changed in: horizon
 Assignee: (unassigned) => Tony Dunbar (adunbar)

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1511036

Title:
  router status values not translated

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  I'm using Doug's TVT system and running through his TVT test plan with
  pseudo German.

  Under Network->Network Topology when you click on a Router the status
  values shown for the router's interfaces are not translated, screen
  shot attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1511036/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511025] [NEW] Image download with multi tenant true fails

2015-10-28 Thread Niall Bunting
Public bug reported:

Overview:
Trying to download an image while using the multi-tenant swift store fails.

How to reproduce:
In the glance-api.conf set swift_store_multi_tenant = True.

Then upload an image
glance --os-image-api-version 1 image-create --name test --copy-from 
http://127.0.0.1:5321 --container-format bare --disk-format raw

Download image
glance image-download 965afb71-61f7-4834-b62b-6fc6e3a1d381 --file /tmp/file

Output:
'NoneType' object has no attribute 'close'
The server: http://paste.openstack.org/show/477571/

Expected:
Image to be downloaded.
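
Whatever the root cause of the missing response object, the reported
traceback is an unguarded close on a None response. A defensive sketch
(the helper name is hypothetical, not actual glance code):

```python
def close_quietly(resp):
    """Close a backend response only if one was actually returned;
    avoids "'NoneType' object has no attribute 'close'"."""
    if resp is not None:
        resp.close()
```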

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1511025

Title:
  Image download with multi tenant true fails

Status in Glance:
  New

Bug description:
  Overview:
  Trying to download an image while using the multi-tenant swift store fails.

  How to reproduce:
  In the glance-api.conf set swift_store_multi_tenant = True.

  Then upload an image
  glance --os-image-api-version 1 image-create --name test --copy-from 
http://127.0.0.1:5321 --container-format bare --disk-format raw

  Download image
  glance image-download 965afb71-61f7-4834-b62b-6fc6e3a1d381 --file /tmp/file

  Output:
  'NoneType' object has no attribute 'close'
  The server: http://paste.openstack.org/show/477571/

  Expected:
  Image to be downloaded.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1511025/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511013] [NEW] Browse button and "No file selected" not translated

2015-10-28 Thread Tony Dunbar
Public bug reported:

I'm using Doug's TVT system and running through his TVT test plan with
pseudo German.

Under Projects->Compute->Instances->Launch Instance, on the Post
creation tab, when the File drop down is selected from the customization
script source, the Browse button and message "No file selected" are not
translated, screen shot attached.

** Affects: horizon
 Importance: Undecided
 Assignee: Tony Dunbar (adunbar)
 Status: In Progress


** Tags: i18n

** Attachment added: "browse.jpg"
   https://bugs.launchpad.net/bugs/1511013/+attachment/4507679/+files/browse.jpg

** Changed in: horizon
 Assignee: (unassigned) => Tony Dunbar (adunbar)

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1511013

Title:
  Browse button and "No file selected" not translated

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  I'm using Doug's TVT system and running through his TVT test plan with
  pseudo German.

  Under Projects->Compute->Instances->Launch Instance, on the Post
  creation tab, when the File drop down is selected from the
  customization script source, the Browse button and message "No file
  selected" are not translated, screen shot attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1511013/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511009] [NEW] "Please enter a number" message not translated

2015-10-28 Thread Tony Dunbar
Public bug reported:

I'm using Doug's TVT system and running through his TVT test plan with
pseudo German.

Under Projects->Compute->Instances->Launch Instance, on the Details
tab, when alpha characters are input in a numeric field, the message
"Please enter a number" is not translated, screen shot attached.

Although this is the same message as reported in
https://bugs.launchpad.net/horizon/+bug/1510286, this may be a different
bug since it is coming from a different page. If it turns out to be the
same root cause, we can dup this bug.

** Affects: horizon
 Importance: Undecided
 Assignee: Tony Dunbar (adunbar)
 Status: In Progress


** Tags: i18n

** Attachment added: "number3.jpg"
   
https://bugs.launchpad.net/bugs/1511009/+attachment/4507663/+files/number3.jpg

** Changed in: horizon
 Assignee: (unassigned) => Tony Dunbar (adunbar)

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1511009

Title:
  "Please enter a number" message not translated

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  I'm using Doug's TVT system and running through his TVT test plan with
  pseudo German.

  Under Projects->Compute->Instances->Launch Instance, on the Details
  tab, when alpha characters are input in a numeric field, the message
  "Please enter a number" is not translated, screen shot attached.

  Although this is the same message as reported in
  https://bugs.launchpad.net/horizon/+bug/1510286, this may be a
  different bug since it is coming from a different page. If it turns
  out to be the same root cause, we can dup this bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1511009/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511005] [NEW] neutron-debug doesn't recognize stevedore alias for interface_driver

2015-10-28 Thread YAMAMOTO Takashi
Public bug reported:

unlike other agents, neutron-debug doesn't recognize stevedore alias for
interface_driver configuration.

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1511005

Title:
  neutron-debug doesn't recognize stevedore alias for interface_driver

Status in neutron:
  In Progress

Bug description:
  unlike other agents, neutron-debug doesn't recognize stevedore alias
  for interface_driver configuration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1511005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511004] [NEW] _process_router_update can return without logging said return

2015-10-28 Thread Ryan Moats
Public bug reported:

In the L3 agent, _process_router_update logs the start of each router update.
The execution branches that call process_prefix_update and _safe_router_removed
continue immediately without logging that the router update is finished. This
leads to confusion about when processing of the last router stops.
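
A try/finally around the update body guarantees the "finished" line on
every exit path. A sketch with a stand-in log list (the real agent method
has more branches, and the function signature here is hypothetical):

```python
def process_router_update(log, update, handle):
    """Log start and finish of a router update on every exit path,
    including early returns and exceptions."""
    log.append('Starting router update for %s' % update)
    try:
        return handle(update)
    finally:
        # Runs whether handle() returns normally or raises.
        log.append('Finished router update for %s' % update)
```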

** Affects: neutron
 Importance: Low
 Assignee: Ryan Moats (rmoats)
 Status: In Progress


** Tags: kilo-backport-potential liberty-backport-potential logging

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1511004

Title:
  _process_router_update can return without logging said return

Status in neutron:
  In Progress

Bug description:
  In the L3 agent, _process_router_update logs the start of each router update.
  The execution branches that call process_prefix_update and 
_safe_router_removed
  continue immediately without logging that the router update is finished.
  This leads to confusion about when processing of the last router stops.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1511004/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1510979] [NEW] Instance reschedule failure leaves orphaned neutron ports

2015-10-28 Thread Boden R
Public bug reported:

During the instance boot (spawn/run) process, neutron ports are
allocated for the instance if necessary. If the instance fails to spawn
(say as a result of a compute host failure), the default behavior is to
reschedule the instance and leave its networking resources intact for
potential reuse on the rescheduled host (as per
deallocate_networks_on_reschedule() [1] which returns False for most
compute drivers).

All is good if the instance is successfully rescheduled, but if the
reschedule fails (say no more applicable hosts) the allocated ports are
left as-is and effectively orphaned.

There are some related defects ([2] and [3]), but they don't quite touch
on the particular behavior described herein.

Obviously there are a number of ways to address this issue, but the most
obvious is perhaps nova should be aware of the reschedule failure and
deallocate any resources which may have been left intact for the
reschedule.

I'm running devstack all-in-one setup from openstack master branches.

nova --version
2.32.0
neutron --version
3.1.0

The easiest way to reproduce is to use an all-in-one devstack (only 1
compute host): simulate a host spawn failure by editing the spawn()
method of your compute driver to raise an exception at the end of the
method and simply try to boot a server. In this setup there's only 1
host, so the reschedule will fail and you can verify that the port
allocated for the instance still exists after trying to boot the instance.


[1] https://github.com/openstack/nova/blob/master/nova/virt/driver.py#L1273
[2] https://bugs.launchpad.net/nova/+bug/1410739
[3] https://bugs.launchpad.net/nova/+bug/1327124
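
Until nova deallocates on reschedule failure, orphans can be found by
comparing port device_ids against live instances. A sketch over plain
dicts (the field names follow the neutron port schema, but the helper
itself is hypothetical):

```python
def find_orphaned_ports(ports, live_instance_ids):
    """Return ids of compute-owned ports whose device_id no longer
    matches any live instance (e.g. a boot that failed to reschedule)."""
    live = set(live_instance_ids)
    return [p['id'] for p in ports
            if p.get('device_owner', '').startswith('compute:')
            and p.get('device_id') not in live]
```

The returned ids could then be fed to the normal port-delete call once
verified by the operator.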

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1510979

Title:
  Instance reschedule failure leaves orphaned neutron ports

Status in OpenStack Compute (nova):
  New

Bug description:
  During the instance boot (spawn/run) process, neutron ports are
  allocated for the instance if necessary. If the instance fails to
  spawn (say as a result of a compute host failure), the default
  behavior is to reschedule the instance and leave its networking
  resources intact for potential reuse on the rescheduled host (as per
  deallocate_networks_on_reschedule() [1] which returns False for most
  compute drivers).

  All is good if the instance is successfully rescheduled, but if the
  reschedule fails (say no more applicable hosts) the allocated ports
  are left as-is and effectively orphaned.

  There are some related defects ([2] and [3]), but they don't quite
  touch on the particular behavior described herein.

  Obviously there are a number of ways to address this issue, but the
  most obvious is perhaps nova should be aware of the reschedule failure
  and deallocate any resources which may have been left intact for the
  reschedule.

  I'm running devstack all-in-one setup from openstack master branches.

  nova --version
  2.32.0
  neutron --version
  3.1.0

  The easiest way to reproduce is to use an all-in-one devstack (only 1
  compute host): simulate a host spawn failure by editing the spawn()
  method of your compute driver to raise an exception at the end of the
  method and simply try to boot a server. In this setup there's only 1
  host, so the reschedule will fail and you can verify that the port
  allocated for the instance still exists after trying to boot the
  instance.

  
  [1] https://github.com/openstack/nova/blob/master/nova/virt/driver.py#L1273
  [2] https://bugs.launchpad.net/nova/+bug/1410739
  [3] https://bugs.launchpad.net/nova/+bug/1327124

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1510979/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1510411] Re: neutron-sriov-nic-agent raises UnsupportedVersion security_groups_provider_updated

2015-10-28 Thread Rui Zang
** Project changed: networking-nec => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1510411

Title:
  neutron-sriov-nic-agent raises UnsupportedVersion
  security_groups_provider_updated

Status in neutron:
  New

Bug description:
  neutron-sriov-nic-agent raises following exception:
  2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher [-] 
Exception during message handling: Endpoint does not support RPC version 1.3. 
Attempted method: security_groups_provider_updated
  2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
  2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
195, in _dispatch
  2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher raise 
UnsupportedVersion(version, method=method)
  2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher 
UnsupportedVersion: Endpoint does not support RPC version 1.3. Attempted 
method: security_groups_provider_updated
  2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher

  This VM is build with SRIOV port(macvtap). 
   jenkins@cnt-14:~$ sudo virsh list --all
   IdName   State
  
   10instance-0003  paused

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1510411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1510411] [NEW] neutron-sriov-nic-agent raises UnsupportedVersion security_groups_provider_updated

2015-10-28 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

neutron-sriov-nic-agent raises following exception:
2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher [-] 
Exception during message handling: Endpoint does not support RPC version 1.3. 
Attempted method: security_groups_provider_updated
2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
195, in _dispatch
2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher raise 
UnsupportedVersion(version, method=method)
2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher 
UnsupportedVersion: Endpoint does not support RPC version 1.3. Attempted 
method: security_groups_provider_updated
2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher

This VM is built with an SRIOV port (macvtap).
 jenkins@cnt-14:~$ sudo virsh list --all
  Id    Name             State
 ----------------------------------
  10    instance-0003    paused

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
neutron-sriov-nic-agent raises UnsupportedVersion 
security_groups_provider_updated
https://bugs.launchpad.net/bugs/1510411
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1510845] [NEW] non-availability of api for bulk port updates

2015-10-28 Thread Krishna Kanth
Public bug reported:

Currently there is no API available for bulk port updates, i.e. one that
takes a list of ports.
Having one would help process port updates quickly, as per the specific
driver implementation.
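
In the absence of a bulk endpoint, the usual workaround is a client-side
loop over the single-port update call. A sketch (the `update_port`
callable stands in for a client method such as neutronclient's
update_port; the batching helper itself is hypothetical):

```python
def bulk_update_ports(update_port, updates):
    """Apply each port update in turn, collecting per-port results and
    errors so one failure does not silently drop the rest of the batch."""
    results, errors = {}, {}
    for port_id, body in updates.items():
        try:
            results[port_id] = update_port(port_id, body)
        except Exception as exc:
            errors[port_id] = exc
    return results, errors
```

A real bulk API could push this loop server-side, letting drivers batch
the updates as the report suggests.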

** Affects: neutron
 Importance: Undecided
 Assignee: Krishna Kanth (krishna-kanth-mallela)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Krishna Kanth (krishna-kanth-mallela)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1510845

Title:
  non-availability of api for bulk port updates

Status in neutron:
  New

Bug description:
  Currently there is no API for bulk port updates that takes a list of
ports.
  Having one would help process port updates quickly, as per the specific
driver implementation.
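To make the request concrete, here is a hedged sketch of the status quo versus what a bulk API might accept. The `bulk_body` shape below is hypothetical (modelled on Neutron's existing bulk *create* convention); today each update is a separate `PUT /v2.0/ports/{id}` call:

```python
# Today: one update call per port. `update_fn` stands in for whatever
# issues the individual PUT request.
def update_ports_one_by_one(update_fn, updates):
    """Apply per-port updates; `updates` maps port_id -> fields to change."""
    return {pid: update_fn(pid, fields) for pid, fields in updates.items()}

# Hypothetical bulk-update request body a new API might accept, mirroring
# Neutron's bulk-create style ("ports" wrapping a list of resources):
bulk_body = {
    "ports": [
        {"id": "port-1", "admin_state_up": False},
        {"id": "port-2", "name": "renamed"},
    ]
}
```

A bulk endpoint would let a driver batch these into one backend transaction instead of N round-trips.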

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1510845/+subscriptions



[Yahoo-eng-team] [Bug 1510814] Re: Some URLs in neutron gerrit dashboards redirect to a new URL

2015-10-28 Thread venkatamahesh
** Project changed: openstack-manuals => neutron

** Changed in: neutron
   Status: New => Confirmed

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1510814

Title:
  Some URLs in neutron gerrit dashboards redirect to a new URL

Status in neutron:
  Confirmed

Bug description:
  Some URLs in the neutron Gerrit dashboards redirect to a new URL;
  it's not a big problem, but it should be corrected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1510814/+subscriptions



[Yahoo-eng-team] [Bug 1510814] [NEW] Some URLs in neutron gerrit dashboards redirect to a new URL

2015-10-28 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Some URLs in the neutron Gerrit dashboards redirect to a new URL;
it's not a big problem, but it should be corrected.

** Affects: neutron
 Importance: Undecided
 Assignee: IanSun (sun-jun)
 Status: New

-- 
Some URLs in neutron gerrit dashboards redirect to a new URL
https://bugs.launchpad.net/bugs/1510814
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1503862] Re: VPNaaS: Enhance error checking on subnet changes

2015-10-28 Thread Paul Michali
Found out that the CIDR of a subnet is read-only, so we don't have to
block changes when the subnet is used by VPNaaS.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503862

Title:
  VPNaaS: Enhance error checking on subnet changes

Status in neutron:
  Invalid

Bug description:
  Currently, if the CIDR of a subnet changes, and that subnet is used by
  VPN, there is no checking performed.

  Should add a notification for subnet CIDR changes and either block the
  change if the subnet is in use by a VPN service/endpoint group, or
  trigger a sync operation in VPN so that existing connections are
  updated (if possible).

  I'm not sure which would be better. Need to ensure that we don't
  disrupt any existing IPSec connections that have not changed.

  Need to ensure this supports the new endpoint group capability for
  VPNaaS, where local subnets are specified in endpoint groups (versus
  the older method of a sole subnet being associated with a VPN
  service).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1503862/+subscriptions



[Yahoo-eng-team] [Bug 1510817] [NEW] stable/liberty branch creation request for networking-midonet

2015-10-28 Thread YAMAMOTO Takashi
Public bug reported:

Please cut stable/liberty branch for networking-midonet
on commit 3943328ffa6753a88b82ac163b3c1023eee4a884.

** Affects: networking-midonet
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: release-subproject

** Also affects: neutron
   Importance: Undecided
   Status: New

** Tags added: release-subproject

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1510817

Title:
  stable/liberty branch creation request for networking-midonet

Status in networking-midonet:
  New
Status in neutron:
  New

Bug description:
  Please cut stable/liberty branch for networking-midonet
  on commit 3943328ffa6753a88b82ac163b3c1023eee4a884.
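For illustration, a hedged sketch of what cutting a stable branch at a pinned commit looks like. This uses a scratch repository so it is self-contained; in reality the release team creates the branch in Gerrit, and `$TARGET_SHA` here merely stands in for 3943328ffa6753a88b82ac163b3c1023eee4a884:

```shell
# Create a throwaway repo and cut a stable/liberty branch at a known commit.
set -e
cd "$(mktemp -d)"
git init -q networking-midonet && cd networking-midonet
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "base"
TARGET_SHA=$(git rev-parse HEAD)   # stand-in for the requested commit
git branch stable/liberty "$TARGET_SHA"
git rev-parse --verify stable/liberty
```

Pinning the branch point to an exact SHA (rather than a branch tip) is what makes the request reproducible regardless of when it is executed.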

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1510817/+subscriptions
