[Yahoo-eng-team] [Bug 1541191] Re: Limit to create only one external network

2016-02-03 Thread Eugene Nikanorov
Multiple external networks are supported.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541191

Title:
  Limit to create only one external network

Status in neutron:
  Invalid

Bug description:
  Since the L3 agent supports only one external network, per
https://github.com/openstack/neutron/blob/master/neutron/db/external_net_db.py#L149

  we can currently create any number of external networks, which causes the
agent RPC get_external_network_id() to raise the TooManyExternalNetworks
exception.

  We should modify the create_network API so that only one external network
can be created.
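The constraint described above can be sketched roughly as follows. This is a simplified illustration, not neutron's actual code; the class and function here are hypothetical stand-ins for the linked external_net_db logic:

```python
class TooManyExternalNetworks(Exception):
    """Raised when more than one external network exists."""


def get_external_network_id(external_network_ids):
    # external_network_ids: UUIDs of networks flagged router:external=True.
    # The L3 agent RPC can only hand back a single network ID, so anything
    # beyond one is an error.
    if len(external_network_ids) > 1:
        raise TooManyExternalNetworks()
    return external_network_ids[0] if external_network_ids else None
```

The reported bug is exactly the mismatch between this single-network assumption and a create_network API that happily accepts a second external network.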

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541191/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541293] [NEW] Delete Subnet button on the Subnet Details page is not working

2016-02-03 Thread abdul NIzamudin
Public bug reported:

When you try to delete a subnet from the Subnet Details page, it doesn't
delete the subnet, nor does it show any error message. It simply
redirects to the Network Index page.

** Affects: neutron
 Importance: Undecided
 Assignee: Sandeep (sandeep-makhija)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541293

Title:
  Delete Subnet button on the Subnet Details page is not working

Status in neutron:
  In Progress

Bug description:
  When you try to delete a subnet from the Subnet Details page, it doesn't
  delete the subnet, nor does it show any error message. It simply
  redirects to the Network Index page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541293/+subscriptions



[Yahoo-eng-team] [Bug 1541303] [NEW] Increase quota-related unit-tests coverage

2016-02-03 Thread Paul Karikh
Public bug reported:

Current unit tests do not catch the regressions introduced with this patch:
https://review.openstack.org/#/c/215277/
We need to add more unit tests to be able to catch such cases. As a minimal
requirement, the new tests should catch the mentioned regression and block
the patch from merging.

** Affects: horizon
 Importance: Medium
 Assignee: Paul Karikh (pkarikh)
 Status: Triaged

** Changed in: horizon
 Assignee: (unassigned) => Paul Karikh (pkarikh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1541303

Title:
  Increase quota-related unit-tests coverage

Status in OpenStack Dashboard (Horizon):
  Triaged

Bug description:
  Current unit tests do not catch the regressions introduced with this patch:
https://review.openstack.org/#/c/215277/
  We need to add more unit tests to be able to catch such cases. As a minimal
requirement, the new tests should catch the mentioned regression and block
the patch from merging.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1541303/+subscriptions



[Yahoo-eng-team] [Bug 1541301] [NEW] random allocation of floating ip from different external subnet

2016-02-03 Thread priya
Public bug reported:

I have created three different subnets (subnet1, subnet2, subnet3) in an
"EXTERNAL" network.

When associating a floating IP with an instance, neutron randomly picks IPs
from all of the subnets.

But I need to specify a particular external subnet while associating.

As of now only the external network name can be selected; the subnet cannot
be chosen by the user when associating a floating IP with the instance.

To execute an important use case we are stuck here, unable to allocate the
required floating IP.

Kindly resolve.
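For context: later Neutron releases accept a subnet_id field when creating a floating IP, which pins the allocation to one external subnet; whether it is available depends on the deployment. A sketch of the request body a client would send (the helper below is illustrative, not part of any client library):

```python
def build_floatingip_request(external_network_id, subnet_id=None):
    # Body for POST /v2.0/floatingips. When subnet_id is given (and the
    # deployment supports it), the floating IP is allocated from that
    # external subnet instead of a randomly chosen one.
    floatingip = {"floating_network_id": external_network_id}
    if subnet_id is not None:
        floatingip["subnet_id"] = subnet_id
    return {"floatingip": floatingip}
```

Without subnet_id the server falls back to the behavior described in this report: picking an IP from any subnet on the external network.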

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541301

Title:
  random allocation of floating ip from different external subnet

Status in neutron:
  New

Bug description:
  I have created three different subnets (subnet1, subnet2, subnet3) in an
  "EXTERNAL" network.

  When associating a floating IP with an instance, neutron randomly picks
  IPs from all of the subnets.

  But I need to specify a particular external subnet while associating.

  As of now only the external network name can be selected; the subnet
  cannot be chosen by the user when associating a floating IP with the
  instance.

  To execute an important use case we are stuck here, unable to allocate
  the required floating IP.

  Kindly resolve.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541301/+subscriptions



[Yahoo-eng-team] [Bug 1541238] Re: No response message after we delete an image

2016-02-03 Thread Stuart McLaren
> We have to perform "glance image-list" to check it is deleted or not

Not really. If a failure occurs the command will 1) exit with a non-zero
exit status, and 2) print an error.

If the return status to the operating system is 0, the delete has
succeeded.

This is fairly typical for the UNIX command line:

http://www.catb.org/esr/writings/taoup/html/ch01s06.html

 Rule of Silence: When a program has nothing surprising to say, it
should say nothing.

 $ glance image-delete f05e145a-2444-41f4-a9ac-14bb52bbe0bc
 $ echo $?
 0

 $ glance image-delete 60821f66-1e10-4754-9c02-567a2d753957
 You are not permitted to delete the image 
'60821f66-1e10-4754-9c02-567a2d753957'.
 $ echo $?
 1
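In a script, the same convention can be relied on programmatically. A minimal sketch (run_or_raise is a hypothetical helper, exercised here with generic commands rather than glance itself):

```python
import subprocess


def run_or_raise(cmd):
    # Rule of Silence: trust the exit status, and only surface the
    # command's error output when it actually fails.
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip() or "command failed")
    return result.stdout

# e.g. run_or_raise(["glance", "image-delete", image_id])
```

A zero exit status means success and nothing is printed; any failure surfaces as an exception carrying the command's stderr.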


** Changed in: glance
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1541238

Title:
  No response message after we delete an image

Status in Glance:
  Opinion

Bug description:
  When an image is deleted there is no message on successful deletion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1541238/+subscriptions



[Yahoo-eng-team] [Bug 1541293] Re: Delete Subnet button on the Subnet Details page is not working

2016-02-03 Thread abdul NIzamudin
** Project changed: neutron => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1541293

Title:
  Delete Subnet button on the Subnet Details page is not working

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  When you try to delete Subnet from the Subnet Details page. It doesn't
  delete the subnet, neither does it show any error message. It simply
  redirects to Network Index page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1541293/+subscriptions



[Yahoo-eng-team] [Bug 1541238] Re: No response message after we delete an image

2016-02-03 Thread M V P Nitesh
** Changed in: glance
   Status: Opinion => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1541238

Title:
  No response message after we delete an image

Status in Glance:
  In Progress

Bug description:
  When an image is deleted there is no message on successful deletion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1541238/+subscriptions



[Yahoo-eng-team] [Bug 1512207] Re: Fix usage of assertions

2016-02-03 Thread Gábor Antal
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1512207

Title:
  Fix usage of assertions

Status in Aodh:
  Fix Released
Status in Barbican:
  Fix Released
Status in Blazar:
  In Progress
Status in Cinder:
  Invalid
Status in congress:
  Fix Released
Status in Cue:
  Fix Released
Status in Glance:
  Won't Fix
Status in Group Based Policy:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in Ironic Inspector:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in kuryr:
  Fix Released
Status in Magnum:
  In Progress
Status in Manila:
  Fix Released
Status in Murano:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Health:
  In Progress
Status in os-brick:
  Fix Released
Status in os-net-config:
  In Progress
Status in os-testr:
  In Progress
Status in oslo.cache:
  Fix Released
Status in oslo.messaging:
  Fix Released
Status in oslo.utils:
  In Progress
Status in Packstack:
  Fix Released
Status in python-barbicanclient:
  In Progress
Status in python-ceilometerclient:
  Fix Released
Status in python-novaclient:
  Fix Released
Status in python-openstackclient:
  Fix Released
Status in OpenStack SDK:
  Fix Released
Status in Rally:
  Fix Released
Status in refstack:
  In Progress
Status in requests-mock:
  In Progress
Status in Sahara:
  Fix Released
Status in shaker:
  Fix Released
Status in Solum:
  Fix Released
Status in Stackalytics:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in tempest:
  In Progress
Status in Trove:
  Fix Released
Status in Vitrage:
  Fix Released
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  Manila  should use the specific assertion:

self.assertIsTrue/False(observed)

  instead of the generic assertion:

self.assertEqual(True/False, observed)
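For reference, a minimal sketch of the two styles side by side (note that the stdlib unittest spellings are assertTrue/assertFalse; "assertIsTrue" as written in the description does not exist in stdlib unittest):

```python
import unittest


class FlagTest(unittest.TestCase):
    def test_flag(self):
        observed = True
        # Generic assertion style the bug report wants to move away from:
        self.assertEqual(True, observed)
        # Specific assertion style (stdlib: assertTrue / assertFalse),
        # which gives clearer failure messages:
        self.assertTrue(observed)
```

The specific form also distinguishes truthiness checks from strict equality with the True/False singletons, which is usually what test authors mean.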

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1512207/+subscriptions



[Yahoo-eng-team] [Bug 1541348] [NEW] Regression in routers auto scheduling logic

2016-02-03 Thread Oleg Bondarev
Public bug reported:

Router auto scheduling runs when an L3 agent starts and performs a full sync
with the neutron server. The neutron server looks for all unscheduled routers
(non-DVR routers only) and schedules them to that agent if applicable.
This was broken by commit 0e97feb0f30bc0ef6f4fe041cb41b7aa81042263, which
changed the full sync logic a bit: the L3 agent now first requests the IDs of
all routers scheduled to it, and get_router_ids() did not trigger router auto
scheduling, which caused the regression.

** Affects: neutron
 Importance: High
 Assignee: Oleg Bondarev (obondarev)
 Status: New


** Tags: l3-ipam-dhcp liberty-backport-potential

** Summary changed:

- regression in routers auto scheduling logic
+ Regression in routers auto scheduling logic

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541348

Title:
  Regression in routers auto scheduling logic

Status in neutron:
  New

Bug description:
  Router auto scheduling runs when an L3 agent starts and performs a full
sync with the neutron server. The neutron server looks for all unscheduled
routers (non-DVR routers only) and schedules them to that agent if applicable.
  This was broken by commit 0e97feb0f30bc0ef6f4fe041cb41b7aa81042263, which
changed the full sync logic a bit: the L3 agent now first requests the IDs of
all routers scheduled to it, and get_router_ids() did not trigger router auto
scheduling, which caused the regression.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541348/+subscriptions



[Yahoo-eng-team] [Bug 1541238] Re: No response message after we delete an image

2016-02-03 Thread Stuart McLaren
Hi,

"Fix Comitted" status is for when a code change has merged into the git
repository.

I've marked this as "Opinion" which I think is the most appropriate at
this point.

Note that any new changes should always be targeted at the 'master'
branch first.

** Changed in: glance
   Status: Fix Committed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1541238

Title:
  No response message after we delete an image

Status in Glance:
  Opinion

Bug description:
  When an image is deleted there is no message on successful deletion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1541238/+subscriptions



[Yahoo-eng-team] [Bug 1528114] Re: vmware start instance from snapshot error

2016-02-03 Thread Andrey Kurilin
Merged-Fix: https://review.openstack.org/#/c/168024/

** Changed in: nova
   Status: New => Fix Released

** Changed in: nova
 Assignee: (unassigned) => Radoslav Gerganov (rgerganov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1528114

Title:
  vmware start instance from snapshot error

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  1. I take a snapshot of a VMware instance; the snapshot image (which is a
linked clone of the snapshot) is then saved on the glance server.
  2. I boot from this snapshot image from the glance server, and get the
following error:

  2015-12-15 01:32:05.255 25992 DEBUG oslo_vmware.api [-] Invoking VIM API to 
read info of task: (returnval){
 value = "task-1896"
 _type = "Task"
   }. _poll_task /usr/lib/python2.7/site-packages/oslo_vmware/api.py:397
  2015-12-15 01:32:05.255 25992 DEBUG oslo_vmware.api [-] Waiting for function 
_invoke_api to return. func 
/usr/lib/python2.7/site-packages/oslo_vmware/api.py:121

  2015-12-15 01:32:05.285 25992 DEBUG oslo_vmware.exceptions [-] Fault 
InvalidArgument not matched. get_fault_class 
/usr/lib/python2.7/site-packages/oslo_vmware/exceptions.py:296
  2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall [-] in 
fixed duration looping call
  2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall Traceback 
(most recent call last):
  2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_vmware/common/loopingcall.py", line 76, 
in _inner
  2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall 
self.f(*self.args, **self.kw)
  2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_vmware/api.py", line 428, in _poll_task
  2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall raise 
task_ex
  2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall 
VimFaultException: 指定的参数错误。 ("The specified parameter is incorrect.")
  2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall capacity
  2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall Faults: 
['InvalidArgument']
  2015-12-15 01:32:05.285 25992 ERROR oslo_vmware.common.loopingcall
  2015-12-15 01:32:05.286 25992 ERROR nova.virt.vmwareapi.vmops 
[req-c466c53c-0a9c-45d7-aa78-c8812b4021a2 4b7fde8604c24e919e46b68fdf50b5a5 
b0eab665ecd94e86885e03027ab90528 - - -] [instance: 
bea53465-ac4f-40f4-9937-f99024a8075d] Extending virtual disk failed with error: 
指定的参数错误。 ("The specified parameter is incorrect.")
  capacity

  3. I traced the error:
  nova/virt/vmwareapi/vmops.py
   def spawn()
  self._use_disk_image_as_linked_clone(vm_ref, vi)
self._extend_if_required
  self._extend_virtual_disk() 
  def _extend_virtual_disk()
  vmdk_extend_task = self._session._call_method(
  self._session.vim,
  "ExtendVirtualDisk_Task",
  service_content.virtualDiskManager,
  name=name,
  datacenter=dc_ref,
  newCapacityKb=requested_size,
  eagerZero=False)
  My vim server WSDL is:
  /opt/stack/vmware/wsdl/5.0/vimService.wsdl
  vCenter version: 5.1.0
  OpenStack version: Liberty

  4. When I bypass _extend_if_required in
  _use_disk_image_as_linked_clone, the boot succeeds.
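The reporter's workaround amounts to skipping the extend call when the disk does not actually need to grow. A simplified sketch of that guard (illustrative only; extend_fn stands in for the ExtendVirtualDisk_Task invocation, and the real nova code differs):

```python
def extend_if_required(current_kb, requested_kb, extend_fn):
    # Only invoke the (here, failing) extend task when the requested
    # capacity exceeds the current one; a same-size or smaller "extend"
    # is a no-op and can be skipped safely.
    if requested_kb > current_kb:
        extend_fn(requested_kb)
        return True
    return False
```

If the linked-clone image already has the flavor's capacity, the guard never reaches the vCenter call that raises InvalidArgument.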

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1528114/+subscriptions



[Yahoo-eng-team] [Bug 1541406] [NEW] IPv6 prefix delegation does not work with DVR

2016-02-03 Thread Brian Haley
Public bug reported:

Using a recent single-node devstack install, I tried to enable IPv6
prefix delegation following the exact instructions in the advanced
networking guide:
http://docs.openstack.org/liberty/networking-guide/adv_config_ipv6.html

$ neutron net-create ipv6-pd
$ neutron subnet-create ipv6-pd --name ipv6-pd-1 --ip_version 6 --ipv6_ra_mode 
slaac --ipv6_address_mode slaac
$ neutron router-interface-add ...

The l3-agent threw an exception:

2016-02-02 10:28:07.014 23328 DEBUG neutron.agent.linux.utils [-] Running 
command (rootwrap daemon): ['ip', 'netns', 'exec', 
'qrouter-f932415c-2cbd-4bcf-a54e-d7f7ee890908', 'ip', '-6', 'addr', 'add', 
'fe80::f816:3eff:fe79:188c/64', 'scope', 'link', 'dev', 'qg-e4629b84-e8'] 
execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:100
2016-02-02 10:28:07.068 23328 ERROR neutron.agent.linux.utils [-] Exit code: 1; 
Stdin: ; Stdout: ; Stderr: Cannot find device "qg-e4629b84-e8"

2016-02-02 10:28:07.069 23328 DEBUG oslo_concurrency.lockutils [-] Lock 
"l3-agent-pd" released by "neutron.agent.linux.pd.enable_subnet" :: held 0.055s 
inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info [-] Exit code: 
1; Stdin: ; Stdout: ; Stderr: Cannot find device "qg-e4629b84-e8"
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info Traceback 
(most recent call last):
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/neutron/neutron/common/utils.py", line 368, in call
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info return 
func(*args, **kwargs)
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/neutron/neutron/agent/l3/router_info.py", line 736, in process
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info 
self._process_internal_ports(agent.pd)
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/neutron/neutron/agent/l3/router_info.py", line 388, in 
_process_internal_ports
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info 
interface_name, p['mac_address'])
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info return 
f(*args, **kwargs)
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/neutron/neutron/agent/linux/pd.py", line 81, in enable_subnet
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info 
self._add_lla(router, pd_info.get_bind_lla_with_mask())
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/neutron/neutron/agent/linux/pd.py", line 196, in _add_lla
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info 'link')
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/neutron/neutron/agent/linux/interface.py", line 194, in 
add_ipv6_addr
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info 
device.addr.add(str(net), scope)
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 532, in add
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info 
self._as_root([net.version], tuple(args))
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 322, in _as_root
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info 
use_root_namespace=use_root_namespace)
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 95, in _as_root
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info 
log_fail_as_error=self.log_fail_as_error)
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 104, in _execute
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info 
log_fail_as_error=log_fail_as_error)
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 140, in execute
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info raise 
RuntimeError(msg)
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info RuntimeError: 
Exit code: 1; Stdin: ; Stdout: ; Stderr: Cannot find device "qg-e4629b84-e8"
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info
2016-02-02 10:28:07.069 23328 ERROR neutron.agent.l3.router_info
2016-02-02 10:28:07.075 23328 ERROR neutron.agent.l3.agent [-] Failed to 
process compatible router 'f932415c-2cbd-4bcf-a54e-d7f7ee890908'
2016-02-02 10:28:07.075 23328 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):

[Yahoo-eng-team] [Bug 1442682] Re: Gate problems with gate-tempest-dsvm-neutron-src-python-glanceclient-juno

2016-02-03 Thread Kairat Kushaev
Juno is EOL, so marking this as Won't Fix.

** Changed in: glance
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1442682

Title:
  Gate problems with gate-tempest-dsvm-neutron-src-python-glanceclient-
  juno

Status in Glance:
  Won't Fix

Bug description:
  
  Not sure which project to log this against. Going with 'glance' for now.

  Seen on a couple of separate client changes, only 'gate-tempest-dsvm-
  neutron-src-python-glanceclient-juno' failing (I assume this is
  against the juno stable server?)

  
  eg 
http://logs.openstack.org/58/115958/6/check/gate-tempest-dsvm-neutron-src-python-glanceclient-juno/8381e11/logs/devstacklog.txt.gz

  
   2015-04-10 08:49:50.238 | WARNING: urllib3.connectionpool HttpConnectionPool 
is full, discarding connection: 127.0.0.1
   2015-04-10 08:49:50.241 | ERROR: openstack 400 Bad Request: Client 
disconnected before sending all data to backend (HTTP  400)
   2015-04-10 08:49:50.274 | + ramdisk_id=
   2015-04-10 08:49:50.274 | + openstack --os-token 
6c7c8feefd304cdab694b24c7168c1dc --os-url http://127.0.0.1:9292 image  create 
cirros-0.3.2-x86_64-uec --public --container-format ami --disk-format ami
   2015-04-10 08:50:23.688 | WARNING: urllib3.connectionpool HttpConnectionPool 
is full, discarding connection: 127.0.0.1
   2015-04-10 08:50:23.689 | ERROR: openstack 400 Bad Request: Client 
disconnected before sending all data to backend (HTTP 400)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1442682/+subscriptions



[Yahoo-eng-team] [Bug 1535201] Re: networking-midonet icehouse and juno branch retirement

2016-02-03 Thread Kyle Mestery
The tags "icehouse-eol" and "juno-eol" have been pushed to the
respective branches in the networking-midonet repo.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1535201

Title:
  networking-midonet icehouse and juno branch retirement

Status in networking-midonet:
  In Progress
Status in neutron:
  Fix Released

Bug description:
  this bug tracks the status of retirement of icehouse and juno branch
  for networking-midonet project.

  see http://lists.openstack.org/pipermail/openstack-
  announce/2015-December/000869.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1535201/+subscriptions



[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2016-02-03 Thread sumit
** Changed in: barbican
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in anvil:
  In Progress
Status in Aodh:
  In Progress
Status in Astara:
  Fix Released
Status in Barbican:
  Fix Released
Status in bilean:
  In Progress
Status in Blazar:
  In Progress
Status in Ceilometer:
  Fix Released
Status in cloud-init:
  In Progress
Status in cloudkitty:
  Fix Released
Status in congress:
  In Progress
Status in Designate:
  Fix Released
Status in django-openstack-auth:
  Fix Released
Status in django-openstack-auth-kerberos:
  New
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  In Progress
Status in Evoque:
  In Progress
Status in gce-api:
  In Progress
Status in Glance:
  In Progress
Status in glance_store:
  In Progress
Status in Gnocchi:
  In Progress
Status in heat:
  Fix Released
Status in heat-cfntools:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in KloudBuster:
  Fix Released
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  In Progress
Status in networking-arista:
  In Progress
Status in networking-calico:
  In Progress
Status in networking-cisco:
  In Progress
Status in networking-fujitsu:
  In Progress
Status in networking-odl:
  In Progress
Status in networking-ofagent:
  In Progress
Status in networking-plumgrid:
  In Progress
Status in networking-powervm:
  In Progress
Status in networking-vsphere:
  In Progress
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  In Progress
Status in nova-powervm:
  Fix Released
Status in nova-solver-scheduler:
  In Progress
Status in octavia:
  In Progress
Status in openstack-ansible:
  In Progress
Status in oslo.cache:
  Fix Released
Status in oslo.privsep:
  In Progress
Status in Packstack:
  Fix Released
Status in python-dracclient:
  In Progress
Status in python-magnumclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-watcherclient:
  In Progress
Status in shaker:
  In Progress
Status in Solum:
  Fix Released
Status in tempest:
  In Progress
Status in tripleo:
  In Progress
Status in trove-dashboard:
  In Progress
Status in watcher:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  LOG.warn is deprecated in Python 3 [1], but it is still used in a few
  places; the non-deprecated LOG.warning should be used instead.

  Note: if we are using the logger from oslo.log, warn is still valid [2],
  but I agree we can switch to LOG.warning.

  [1]https://docs.python.org/3/library/logging.html#logging.warning
  [2]https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85
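A minimal illustration with the stdlib logger; Logger.warn is a deprecated alias for Logger.warning, and oslo.log loggers accept both, per [2]:

```python
import io
import logging

# Capture log output in a string buffer so the example is self-contained.
log = logging.getLogger("warn-demo")
stream = io.StringIO()
log.addHandler(logging.StreamHandler(stream))
log.setLevel(logging.WARNING)

# Deprecated alias -- still works, but Python 3 flags it:
#   log.warn("disk usage at %d%%", 90)
# Non-deprecated spelling:
log.warning("disk usage at %d%%", 90)
```

The change across all the listed projects is essentially a mechanical rename from .warn( to .warning( on logger calls.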

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1508442/+subscriptions



[Yahoo-eng-team] [Bug 1528201] Re: [L3 HA] Stacktrace during HA router creation when running rally test

2016-02-03 Thread Ivan Lozgachev
Verified on ENV-10 Build 482 and ENV-14 Build 496

** Changed in: mos
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528201

Title:
  [L3 HA] Stacktrace during HA router creation when running rally test

Status in Mirantis OpenStack:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  Rally test create_and_list_routers fails in some cases.

  Failure is caused by L3_HA_NAT_db_mixin._set_vr_id/_allocate_vr_id
  trying to reuse db session after it has been rolled back by
  DBDuplicateError exception.

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1528201/+subscriptions



[Yahoo-eng-team] [Bug 1541489] [NEW] disable tests if panel are disabled

2016-02-03 Thread Itxaka Serrano
Public bug reported:

If you disable a panel, all the tests inside that panel will:

 - still be run
 - fail to resolve the reverse URL, if any

Tests inside a panel should only run if that panel is enabled.


How to test:

 - Create a file __disable_admin_image.py in 
openstack_dashboard/dashboards/local/enable/ with the following content:
PANEL = 'images'
PANEL_DASHBOARD = 'admin'
PANEL_GROUP = 'admin'
REMOVE_PANEL = True

 - Run openstack_dashboard and confirm that the images panel under admin is gone
 - Run the tests and observe the images tests failing with NoReverseMatch

** Affects: horizon
 Importance: Undecided
 Assignee: Itxaka Serrano (itxakaserrano)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Itxaka Serrano (itxakaserrano)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1541489

Title:
  disable tests if panel are disabled

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  If you disable any panel, all the tests inside that panel will still

   - be run
   - fail to resolve the reverse URL, if any

  
  Tests inside Panels should only run if that panel is enabled.


  How to test:

   - Create a file __disable_admin_image.py in 
openstack_dashboard/dashboards/local/enable/ with the following content:
  PANEL = 'images'
  PANEL_DASHBOARD = 'admin'
  PANEL_GROUP = 'admin'
  REMOVE_PANEL = True

   - Run openstack_dashboard and confirm that the images panel under admin is 
gone
   - Run the tests and observe the images tests failing with NoReverseMatch
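The requested behavior can be sketched with a plain unittest skip decorator (`ENABLED_PANELS` and `requires_panel` are hypothetical names; Horizon would consult its real panel registry instead of a module-level set):

```python
import unittest

# Hypothetical registry of enabled panels; in Horizon this would come
# from the dashboard/panel configuration, not a hard-coded set.
ENABLED_PANELS = {"admin.instances"}  # "admin.images" has been removed


def requires_panel(panel):
    """Skip a test unless the given panel is enabled."""
    return unittest.skipUnless(
        panel in ENABLED_PANELS,
        "panel %s is disabled" % panel)


class ImagesPanelTests(unittest.TestCase):
    @requires_panel("admin.images")
    def test_index(self):
        # Would reverse() the panel URL here; never runs when the
        # panel is disabled, so no NoReverseMatch is raised.
        self.fail("should have been skipped")
```

With the panel removed, the test is reported as skipped instead of failing with NoReverseMatch.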

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1541489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541487] [NEW] glance.tests.integration.v2.test_tasks_api.TestTasksApi.test_all_task_api hangs when run with testtools

2016-02-03 Thread Victor Stinner
Public bug reported:

When glance.tests.integration.v2.test_tasks_api is run directly with
testtools, the test fails:

"python -u -m testtools.run
glance.tests.integration.v2.test_tasks_api.TestTasksApi.test_all_task_api"

For an unknown reason, the test passes when run with testr:

"testr run
glance.tests.integration.v2.test_tasks_api.TestTasksApi.test_all_task_api"

It looks like the _wait_on_task_execution() method of
glance/tests/integration/v2/test_tasks_api.py is not reliable. The
method uses time.sleep() to give the "server" time to execute a task
run in the background. Problem: in practice, the "server" runs in the
same process as the client, and eventlet is used to schedule the
server's tasks. time.sleep() blocks the whole process, including the
server that is supposed to run the task.

Sorry, I'm unable to explain why the test passes with testr; eventlet,
taskflow, etc. are too magic for my little brain :-)

IMHO we must enable monkey-patch to run Glance unit and integration
tests.

Or at least, _wait_on_task_execution() must call eventlet.sleep(), not
time.sleep().

Note: time.sleep() is not monkey-patched when the test is run with
testtools or testr; the test runner doesn't change that.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1541487

Title:
  glance.tests.integration.v2.test_tasks_api.TestTasksApi.test_all_task_api
  hangs when run with testtools

Status in Glance:
  New

Bug description:
  When glance.tests.integration.v2.test_tasks_api is run directly with
  testtools, the test fails:

  "python -u -m testtools.run
  glance.tests.integration.v2.test_tasks_api.TestTasksApi.test_all_task_api"

  For an unknown reason, the test passes when run with testr:

  "testr run
  glance.tests.integration.v2.test_tasks_api.TestTasksApi.test_all_task_api"

  It looks like the _wait_on_task_execution() method of
  glance/tests/integration/v2/test_tasks_api.py is not reliable. The
  method uses time.sleep() to give the "server" time to execute a task
  run in the background. Problem: in practice, the "server" runs in the
  same process as the client, and eventlet is used to schedule the
  server's tasks. time.sleep() blocks the whole process, including the
  server that is supposed to run the task.

  Sorry, I'm unable to explain why the test passes with testr; eventlet,
  taskflow, etc. are too magic for my little brain :-)

  IMHO we must enable monkey-patch to run Glance unit and integration
  tests.

  Or at least, _wait_on_task_execution() must call eventlet.sleep(), not
  time.sleep().

  Note: time.sleep() is not monkey-patched when the test is run with
  testtools or testr; the test runner doesn't change that.
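The reporter's suggestion can be sketched as a polling helper with an injectable sleep function (the names are illustrative, not Glance's actual helper); in an eventlet-based server the caller would pass eventlet.sleep, or monkey-patch the process, so the greenthread yields instead of blocking everything:

```python
import time


def wait_on_task(get_status, sleep=time.sleep, interval=0.01, max_wait=1.0):
    """Poll get_status() until the task leaves the 'pending' state.

    In an eventlet server, `sleep` must be eventlet.sleep (or time must
    be monkey-patched) so the greenthread yields; a plain time.sleep
    blocks the whole process, including the task being waited on.
    """
    deadline = time.time() + max_wait
    while time.time() < deadline:
        status = get_status()
        if status != "pending":
            return status
        sleep(interval)  # cooperative yield point
    raise TimeoutError("task did not finish in %.1fs" % max_wait)
```

Injecting the sleep function keeps the helper testable without eventlet while making the yield point explicit.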

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1541487/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541525] [NEW] the ui for managing volume attachments is confusing

2016-02-03 Thread Eric Peterson
Public bug reported:

My users have been complaining that horizon's ui for managing volume
attachment to vms is very strange / tough to use.   Would like to see
some other approach / ideas.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1541525

Title:
  the ui for managing volume attachments is confusing

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  My users have been complaining that horizon's ui for managing volume
  attachment to vms is very strange / tough to use.   Would like to see
  some other approach / ideas.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1541525/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541540] [NEW] Implied role "root_role" config needs to be expanded

2016-02-03 Thread Morgan Fainberg
Public bug reported:

The "root_role" option is insufficient for blocking "implied" roles. It
needs to be expanded into a list option: there will likely be many
cases where more than one role should never be allowed to be implied,
for example "domain admin" if the domain admin needs to come from SSO.

Suggest making it a ListOpt and calling it something other than
"root_role".

** Affects: keystone
 Importance: High
 Assignee: Adam Young (ayoung)
 Status: Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1541540

Title:
  Implied role "root_role" config needs to be expanded

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  The "root_role" option is insufficient for blocking "implied" roles.
  It needs to be expanded into a list option: there will likely be many
  cases where more than one role should never be allowed to be implied,
  for example "domain admin" if the domain admin needs to come from SSO.

  Suggest making it a ListOpt and calling it something other than
  "root_role".
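The suggested change amounts to a list-valued option plus a membership check, sketched here in plain Python (the names are hypothetical; Keystone would define the option with oslo.config's ListOpt rather than parse a raw string):

```python
def parse_prohibited_roles(raw):
    """Parse a comma-separated option value into role names, mimicking
    how a ListOpt splits its configured value."""
    return [r.strip() for r in raw.split(",") if r.strip()]


def may_be_implied(role, prohibited):
    """A role may be implied only if it is not on the prohibited list
    (the single 'root_role' generalized to many roles)."""
    return role not in prohibited
```

The check site stays a single membership test, so moving from one root role to a list costs nothing at validation time.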

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1541540/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541560] [NEW] Updating a network can result in an internal server error

2016-02-03 Thread James Anziano
Public bug reported:

When updating a network with an attribute not technically in the network
database table, but still a valid attribute (e.g. dns_domain, which is
used by network but stored in a separate table), neutron attempts to
update the network table with that field and throws an exception.

To reproduce:
neutron net-update my_network --dns_domain=my-domain.org.
which results in:
Request Failed: internal server error while processing your request.

Openstack version: DevStack all-in-one built from master
Ubuntu 14.04, kernel 3.13.0-24-generic
Perceived severity: medium

Logs of the traceback from q-svc:
http://paste.openstack.org/show/485883/

** Affects: neutron
 Importance: Undecided
 Assignee: James Anziano (janzian)
 Status: New


** Tags: l3-ipam-dhcp

** Changed in: neutron
 Assignee: (unassigned) => James Anziano (janzian)

** Description changed:

  When updating a network with an attribute not technically in the network
  database table, but still a valid attribute (i.e. dns_domain, which is
  used by network but stored in a separate table), neutron attempts to
  update the network table with that field and throws an exception.
  
  To reproduce:
- neutron net-update --dns_domain=my-domain.org.
- which results in: 
+ neutron net-update my_network --dns_domain=my-domain.org.
+ which results in:
  Request Failed: internal server error while processing your request.
  
  Openstack version: DevStack all-in-one built from master
  Ubuntu 14.04, kernel 3.13.0-24-generic
  Perceived severity: medium
  
  Logs of the traceback from q-svc:
  http://paste.openstack.org/show/485883/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541560

Title:
  Updating a network can result in an internal server error

Status in neutron:
  New

Bug description:
  When updating a network with an attribute not technically in the
  network database table, but still a valid attribute (e.g. dns_domain,
  which is used by network but stored in a separate table), neutron
  attempts to update the network table with that field and throws an
  exception.

  To reproduce:
  neutron net-update my_network --dns_domain=my-domain.org.
  which results in:
  Request Failed: internal server error while processing your request.

  Openstack version: DevStack all-in-one built from master
  Ubuntu 14.04, kernel 3.13.0-24-generic
  Perceived severity: medium

  Logs of the traceback from q-svc:
  http://paste.openstack.org/show/485883/
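A likely shape for the fix can be sketched as follows (`NETWORK_COLUMNS` and the helper name are illustrative, not Neutron's actual code): split the requested attributes into real networks-table columns and extension-managed attributes, so the UPDATE statement never receives an unknown column.

```python
# Illustrative subset of columns that actually live in the
# networks table; dns_domain is stored elsewhere.
NETWORK_COLUMNS = {"name", "admin_state_up", "shared"}


def split_network_update(body):
    """Separate core table columns from extension attributes so only
    real columns reach the UPDATE; extensions are handled by their
    own drivers/tables afterwards."""
    core = {k: v for k, v in body.items() if k in NETWORK_COLUMNS}
    extensions = {k: v for k, v in body.items()
                  if k not in NETWORK_COLUMNS}
    return core, extensions
```

With this split, `dns_domain` would be routed to its extension handler instead of triggering an invalid-column exception in the core update.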

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541560/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541579] [NEW] Port based HealthMonitor in neutron_lbaas

2016-02-03 Thread Maharajan Nachiappan
Public bug reported:

Summary:
Neutron LBaaS lacks port-based monitoring.

Description:
The current HealthMonitor attached to a pool monitors the member's service
port by default. But some use cases run their service health check on a
different port than the service port; this kind of ECV is not possible with
the current HealthMonitor object.

Expected:
We should add a new field, 'destination': 'ip:port', since most external LBs
support this feature and organizations use it, and since a pool can have
multiple HealthMonitors attached to it.

'destination': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'default': '*:*',
'is_visible': True},

Version: Kilo/Liberty

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541579

Title:
  Port based HealthMonitor in neutron_lbaas

Status in neutron:
  New

Bug description:
  Summary:
  Neutron LBaaS lacks port-based monitoring.

  Description:
  The current HealthMonitor attached to a pool monitors the member's service
  port by default. But some use cases run their service health check on a
  different port than the service port; this kind of ECV is not possible
  with the current HealthMonitor object.

  Expected:
  We should add a new field, 'destination': 'ip:port', since most external
  LBs support this feature and organizations use it, and since a pool can
  have multiple HealthMonitors attached to it.

  'destination': {'allow_post': True, 'allow_put': True,
  'validate': {'type:string': None},
  'default': '*:*',
  'is_visible': True},

  Version: Kilo/Liberty
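The proposed 'destination' value could be parsed as sketched below (a hypothetical helper, not part of neutron_lbaas), treating '*' as "fall back to the member's own address or port", matching the proposed default of '*:*':

```python
def parse_destination(dest):
    """Split an 'ip:port' monitor destination.

    '*' on either side means "use the member's address/port".
    Returns (ip_or_None, port_or_None); None signals the fallback.
    """
    ip, sep, port = dest.rpartition(":")
    if not sep or not ip:
        raise ValueError("destination must look like 'ip:port'")
    return (None if ip == "*" else ip,
            None if port == "*" else int(port))
```

rpartition is used so an IPv4 address with dots (and, with bracketed notation, an IPv6 literal) keeps everything before the last colon as the host part.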

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540939] Re: Instance delete causing port leak

2016-02-03 Thread Chuck Carmack
Sean, thanks for the info in the channel.  Marking the bug invalid.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1540939

Title:
  Instance delete causing port leak

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Nova can cause a neutron port leak after deleting an instance.

  If neutron has the port binding extension installed, then nova uses admin 
credentials to create the port during instance create:
  
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L537

  However, during instance delete, nova always uses the user creds:
  
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L739

  Depending on the neutron policy settings, this can leak ports in
  neutron.

  Can someone explain this behavior?

  We are running on nova kilo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1540939/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1535049] Re: Filter the image with the name should support fuzzy matching

2016-02-03 Thread Lin Hua Cheng
setting to invalid since reporter mentioned it will be implemented in
searchlight

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1535049

Title:
  Filter the image with the name should support fuzzy matching

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When I try to filter images by name, I find that it does not support
  fuzzy matching. I need to enter a full name such as
  "cirros-0.3.4-x86_64-uec"; if I enter "cirros", it doesn't work. I think
  supporting fuzzy matching would make filtering images by name better.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1535049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541594] [NEW] Updating image owner to someone else generates a non-intuitive 404 instead of 403

2016-02-03 Thread Luke Wollney
Public bug reported:

When an image owner updates an image's owner to someone else, the update
is prevented (which is a good thing), but with a 404 "Not Found" (not so
good), instead of the 403 "Forbidden".

The reason why Glance returns a 404 "Not Found" is because the image is
re-fetched after being updated, but as the owner and user differ, the
action is forbidden (which gets translated into a "not found" because
under normal circumstances a forbidden would tip an attacker off to the
existence of an image), and the update is never committed.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1541594

Title:
  Updating image owner to someone else generates a non-intuitive 404
  instead of 403

Status in Glance:
  New

Bug description:
  When an image owner updates an image's owner to someone else, the
  update is prevented (which is a good thing), but with a 404 "Not
  Found" (not so good), instead of the 403 "Forbidden".

  The reason why Glance returns a 404 "Not Found" is because the image
  is re-fetched after being updated, but as the owner and user differ,
  the action is forbidden (which gets translated into a "not found"
  because under normal circumstances a forbidden would tip an attacker
  off to the existence of an image), and the update is never committed.
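The described flow can be sketched in plain Python (the exception classes and helper are illustrative, not Glance's actual code): the re-fetch after the owner change fails authorization, and the Forbidden is deliberately masked as NotFound, which is exactly what the client ends up seeing:

```python
class Forbidden(Exception):
    """Authorization failure (would map to HTTP 403)."""


class NotFound(Exception):
    """Resource hidden/missing (would map to HTTP 404)."""


def refetch_after_owner_update(image_id, requester, new_owner):
    """After the owner field changes, the requester no longer matches
    the image owner; the authorization layer raises Forbidden, which
    the API masks as NotFound so the image's existence isn't leaked."""
    try:
        if requester != new_owner:
            raise Forbidden()
        return image_id
    except Forbidden:
        # The masking that makes the client see 404 instead of 403.
        raise NotFound(image_id)
```

The bug report's point is that in this self-inflicted case the masking is non-intuitive: the requester obviously knows the image exists, so a 403 would be clearer.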

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1541594/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1318721] Re: RPC timeout in all neutron agents

2016-02-03 Thread Launchpad Bug Tracker
This bug was fixed in the package oslo.messaging - 1.3.0-0ubuntu1.4

---
oslo.messaging (1.3.0-0ubuntu1.4) trusty; urgency=medium

  * Backport upstream fix (LP: #1318721):
- d/p/fix-reconnect-race-condition-with-rabbitmq-cluster.patch:
  Redeclare if exception is catched after self.queue.declare() failed.

 -- Hui Xiang   Thu, 17 Dec 2015 16:22:35 +0800

** Changed in: oslo.messaging (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1318721

Title:
  RPC timeout in all neutron agents

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive icehouse series:
  In Progress
Status in Ubuntu Cloud Archive juno series:
  In Progress
Status in neutron:
  Invalid
Status in oslo.messaging:
  Fix Released
Status in neutron package in Ubuntu:
  Invalid
Status in oslo.messaging package in Ubuntu:
  Invalid
Status in neutron source package in Trusty:
  Fix Committed
Status in oslo.messaging source package in Trusty:
  Fix Released

Bug description:
  In the logs the first traceback that happen is this:

  [-] Unexpected exception occurred 1 time(s)... retrying.
  Traceback (most recent call last):
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/excutils.py",
 line 62, in inner_func
  return infunc(*args, **kwargs)
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py",
 line 741, in _consumer_thread

    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py",
 line 732, in consume
  @excutils.forever_retry_uncaught_exceptions
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py",
 line 660, in iterconsume
  try:
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py",
 line 590, in ensure
  def close(self):
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py",
 line 531, in reconnect
  # to return an error not covered by its transport
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py",
 line 513, in _connect
  Will retry up to self.max_retries number of times.
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py",
 line 150, in reconnect
  use the callback passed during __init__()
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/kombu/entity.py", 
line 508, in declare
  self.queue_bind(nowait)
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/kombu/entity.py", 
line 541, in queue_bind
  self.binding_arguments, nowait=nowait)
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/kombu/entity.py", 
line 551, in bind_to
  nowait=nowait)
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/amqp/channel.py", 
line 1003, in queue_bind
  (50, 21),  # Channel.queue_bind_ok
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/amqp/abstract_channel.py",
 line 68, in wait
  return self.dispatch_method(method_sig, args, content)
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/amqp/abstract_channel.py",
 line 86, in dispatch_method
  return amqp_method(self, args)
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/amqp/channel.py", 
line 241, in _close
  reply_code, reply_text, (class_id, method_id), ChannelError,
  NotFound: Queue.bind: (404) NOT_FOUND - no exchange 
'reply_8f19344531b448c89d412ee97ff11e79' in vhost '/'

  Than an RPC Timeout is raised each second in all the agents

  ERROR neutron.agent.l3_agent [-] Failed synchronizing routers
  TRACE neutron.agent.l3_agent Traceback (most recent call last):
  TRACE neutron.agent.l3_agent   File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/agent/l3_agent.py",
 line 702, in _rpc_loop
  TRACE neutron.agent.l3_agent self.context, router_ids)
  TRACE neutron.agent.l3_agent   File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/agent/l3_agent.py",
 line 79, in get_routers
  TRACE neutron.agent.l3_agent topic=self.topic)
  TRACE neutron.agent.l3_agent   File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/proxy.py",
 line 130, in call
  TRACE neutron.agent.l3_agent exc.info, real_topic, msg.get('method'))
  TRACE neutron.agent.l3_agent Timeout: Timeout while waiting on RPC response - 
topic: "q-l3-plugin", RPC method: "sync_routers" info: ""

  This actually make the agent useless until they are all restarted.


[Yahoo-eng-team] [Bug 1541621] [NEW] Invalid subject fernet token should result in 404 instead of 401

2016-02-03 Thread Guang Yee
Public bug reported:

When a scoped fernet token is no longer valid (i.e. all the roles had
been removed from the scope), token validation should result in 404
instead of 401. According to Keystone V3 API spec, 401 is returned only
if X-Auth-Token is invalid. Invalid X-Subject-Token should yield 404.
Furthermore, auth_token middleware only treats 404 as an invalid subject
token and caches it accordingly. An improper 401 will cause unnecessary
churn, as the middleware will repeatedly attempt to re-authenticate the
service user.

https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token/_identity.py#L215

To reproduce the problem:

1. get a project scoped token
2. remove all the roles assigned to the user for that project
3. attempt to validate that project-scoped token will result in 401

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1541621

Title:
  Invalid subject fernet token should result in 404 instead of 401

Status in OpenStack Identity (keystone):
  New

Bug description:
  When a scoped fernet token is no longer valid (i.e. all the roles had
  been removed from the scope), token validation should result in 404
  instead of 401. According to Keystone V3 API spec, 401 is returned
  only if X-Auth-Token is invalid. Invalid X-Subject-Token should yield
  404. Furthermore, auth_token middleware only treats 404 as an invalid
  subject token and caches it accordingly. An improper 401 will cause
  unnecessary churn, as the middleware will repeatedly attempt to
  re-authenticate the service user.

  
https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token/_identity.py#L215

  To reproduce the problem:

  1. get a project scoped token
  2. remove all the roles assigned to the user for that project
  3. attempt to validate that project-scoped token will result in 401
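The middleware behavior the report relies on can be sketched like this (a simplification of the linked keystonemiddleware code; the names are illustrative):

```python
class InvalidToken(Exception):
    """X-Subject-Token is invalid; safe to cache negatively."""


class ServiceError(Exception):
    """Our own service token was rejected; re-authenticate and retry."""


def interpret_validation_status(status):
    """Mirror auth_token middleware's reading of Keystone's reply:
    404 means the subject token is invalid, while 401 means our
    X-Auth-Token (the service token) was bad."""
    if status == 404:
        raise InvalidToken()        # cached; no retry storm
    if status == 401:
        raise ServiceError()        # triggers service re-auth churn
    if status in (200, 203):
        return "valid"
    raise ServiceError()
```

This makes the bug's cost concrete: returning 401 for a revoked subject token lands in the re-authentication branch on every request instead of the cacheable invalid-token branch.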

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1541621/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1386370] Re: Create Subnet missing cancel button & back/Next button spacing issue

2016-02-03 Thread German Rivera
Sorry, I accidentally set it to Fix Released while reading the descriptions.


** Changed in: horizon
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1386370

Title:
  Create Subnet missing cancel button & back/Next button spacing issue

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The Create Subnet modal wizard is missing the cancel button, and also
  the space between the Back and Next buttons is extremely large. To
  match other modals, Create Network for example, there needs to be a
  cancel button. The same goes for the space between the buttons; on all
  other modals the buttons are grouped together.

  I attached an image showing the issues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1386370/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541656] [NEW] OAuth Identity token gives Forbidden

2016-02-03 Thread Bogdan
Public bug reported:

I have enabled OAuth1 in Keystone Kilo, then followed the flow described here:
https://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-oauth1-ext.html#delegated-authentication-flow

Created a consumer, created a request token, authorized the request
token, exchanged it for an access token and finally obtained Identity
token out of the access token, which looks like:

HTTP/1.1 201 Created
Date: Thu, 04 Feb 2016 00:20:13 GMT
Server: Apache/2.4.10 (Linux/SUSE)
Content-Length: 7982
X-Subject-Token: 5bae545dc72d499bb3ec2792c9e53cbd
Vary: X-Auth-Token
x-openstack-request-id: req-241f91a2-8bc5-44a0-8676-8f521e074475
Content-Type: application/json

{"token": {"methods": ["oauth1"], "roles": [{"id":
"9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}], "expires_at":
"2016-02-04T01:20:13.114596Z", "project": {"domain": {"id": "default",
"name": "Default"}, "id": ..skipped catalog, etc. "OS-
OAUTH1": {"access_token_id": "f718a55aeae24fa1930b726cbd41b378",
"consumer_id": "979f33d9d2c54fd4ae9d5ed3c2c8f61b"}}}


Then when I try to use the token for example to list servers:

openstack --os-token 5bae545dc72d499bb3ec2792c9e53cbd --os-auth-url
https://host:5000/v3 --os-identity-api-version 3 --os-cacert
/etc/pki/trust/anchors/ca.pem --os-project-name Project1 server list

I get a surprising error:

Forbidden: You are not authorized to perform the requested action. (Disable 
debug mode to suppress these details.) (HTTP 403) (Request-ID: 
req-34f9098e-7f5d-45e6-95b6-6f4cac87159e)
 
After some debugging I found out that my call gets rejected at:

def token_authenticate(context, auth_payload, user_context, token_ref):
    try:

        # Do not allow tokens used for delegation to
        # create another token, or perform any changes of
        # state in Keystone. To do so is to invite elevation of
        # privilege attacks

        if token_ref.oauth_scoped or token_ref.trust_scoped:
            raise exception.Forbidden()

What am I missing here? My token definitely is oauth_scoped and how am I
supposed to use this Identity token?

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1541656

Title:
  OAuth Identity token gives Forbidden

Status in OpenStack Identity (keystone):
  New

Bug description:
  I have enabled OAuth1 in Keystone Kilo, then followed the flow described here:
  
https://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-oauth1-ext.html#delegated-authentication-flow

  Created a consumer, created a request token, authorized the request
  token, exchanged it for an access token and finally obtained Identity
  token out of the access token, which looks like:

  HTTP/1.1 201 Created
  Date: Thu, 04 Feb 2016 00:20:13 GMT
  Server: Apache/2.4.10 (Linux/SUSE)
  Content-Length: 7982
  X-Subject-Token: 5bae545dc72d499bb3ec2792c9e53cbd
  Vary: X-Auth-Token
  x-openstack-request-id: req-241f91a2-8bc5-44a0-8676-8f521e074475
  Content-Type: application/json

  {"token": {"methods": ["oauth1"], "roles": [{"id":
  "9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}],
  "expires_at": "2016-02-04T01:20:13.114596Z", "project": {"domain":
  {"id": "default", "name": "Default"}, "id": ..skipped catalog,
  etc. "OS-OAUTH1": {"access_token_id":
  "f718a55aeae24fa1930b726cbd41b378", "consumer_id":
  "979f33d9d2c54fd4ae9d5ed3c2c8f61b"}}}

  
  Then when I try to use the token for example to list servers:

  openstack --os-token 5bae545dc72d499bb3ec2792c9e53cbd --os-auth-url
  https://host:5000/v3 --os-identity-api-version 3 --os-cacert
  /etc/pki/trust/anchors/ca.pem --os-project-name Project1 server list

  I get a surprising error:

  Forbidden: You are not authorized to perform the requested action. (Disable 
debug mode to suppress these details.) (HTTP 403) (Request-ID: 
req-34f9098e-7f5d-45e6-95b6-6f4cac87159e)
   
  After some debugging I found out that my call gets rejected at:

  def token_authenticate(context, auth_payload, user_context, token_ref):
      try:

          # Do not allow tokens used for delegation to
          # create another token, or perform any changes of
          # state in Keystone. To do so is to invite elevation of
          # privilege attacks

          if token_ref.oauth_scoped or token_ref.trust_scoped:
              raise exception.Forbidden()

  What am I missing here? My token definitely is oauth_scoped and how am
  I supposed to use this Identity token?

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1541656/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541657] [NEW] Scoped OS-FEDERATION token not working

2016-02-03 Thread Bogdan
Public bug reported:

I have implemented Keystone Federation scenario with Kilo against a non-
Keystone IdP.

Following the flow described at https://specs.openstack.org/openstack
/keystone-specs/api/v3/identity-api-v3-os-federation-ext.html I
successfully went through SAML2 authentication and I ended up with an
unscoped token which is working just fine.

When I then request a scoped token out of the unscoped token I get a token 
which differs from the documentation:
docs says that user will have groups:

"user": {
    "domain": {
        "id": "Federated"
    },
    "id": "username%40example.com",
    "name": "usern...@example.com",
    "OS-FEDERATION": {
        "identity_provider": "ACME",
        "protocol": "SAML",
        "groups": [
            {"id": "abc123"},
            {"id": "bcd234"}
        ]
    }
}

while in my implementation I get a user with no groups (in contrast, my
unscoped token does have the groups in "user"):

"user": {
    "domain": {
        "id": "Federated",
        "name": "Federated"
    },
    "id": "myUser",
    "name": "myUser",
    "OS-FEDERATION": {
        "identity_provider": {
            "id": "myIdP"
        },
        "protocol": {"id": "saml2"}
    }
}

If I try to use the scoped token I get the error message:
# openstack --os-token 3e68789050944e9296f1e366f63a31a8 --os-auth-url 
https://host:5000/v3 --os-identity-api-version 3 --os-cacert 
/etc/pki/trust/anchors/ca.pem --os-project-name Project1 server list
ERROR: openstack Unable to find valid groups while using mapping saml_mapping 
(Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: 
req-eb23e61c-6f1f-4259-8ff0-92063f60b5f0)

And this is no surprise if we debug the code for token creation and see
that **_handle_mapped_tokens** in /usr/lib/python2.7/site-
packages/keystone/token/providers/common.py says:

if project_id or domain_id:
    roles = self.v3_token_data_helper._populate_roles_for_groups(
        group_ids, project_id, domain_id, user_id)
    token_data.update({'roles': roles})
else:
    token_data['user'][federation.FEDERATION].update({
        'groups': [{'id': x} for x in group_ids]
    })
return token_data

So, the only way to get our groups added to the scoped token is to NOT
use domain or project scoping, but if we do not scope the token for
domain or project then we will simply get yet another unscoped token ;).
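The branching above can be boiled down to a standalone sketch (plain Python, not the real keystone helper; `build_token_data` and the role naming here are illustrative assumptions) that reproduces why a project- or domain-scoped request never carries the groups:

```python
# Standalone sketch of the branch in _handle_mapped_tokens: a scoped
# request spends the groups on role computation, while only an
# unscoped request copies the groups into the OS-FEDERATION section.

def build_token_data(group_ids, project_id=None, domain_id=None):
    token_data = {'user': {'OS-FEDERATION': {}}}
    if project_id or domain_id:
        # Scoped: groups are consumed to look up roles (naming here is
        # illustrative) and never appear in the token body.
        token_data['roles'] = ['role-for-' + g for g in group_ids]
    else:
        # Unscoped: groups survive in the token body.
        token_data['user']['OS-FEDERATION']['groups'] = [
            {'id': g} for g in group_ids]
    return token_data
```

Under this logic a client that needs both the roles and the group list cannot get them from a single scoped token.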


What am I missing? How am I supposed to create a scoped token which works?

Thanks in advance!

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1541657

Title:
  Scoped OS-FEDERATION token not working

Status in OpenStack Identity (keystone):
  New

Bug description:
  I have implemented Keystone Federation scenario with Kilo against a
  non-Keystone IdP.

  Following the flow described at https://specs.openstack.org/openstack
  /keystone-specs/api/v3/identity-api-v3-os-federation-ext.html I
  successfully went through SAML2 authentication and I ended up with an
  unscoped token which is working just fine.

  When I then request a scoped token from the unscoped token, I get a token 
which differs from the documentation:
  the docs say that the user will have groups:

  "user": {
  "domain": {
  "id": "Federated"
  },
  "id": "username%40example.com",
  "name": "usern...@example.com",
  "OS-FEDERATION": {
  "identity_provider": "ACME",
  "protocol": "SAML",
  "groups": [
  {"id": "abc123"},
  {"id": "bcd234"}
  ]
  }
  }

  while in my implementation I get a user with no groups (in contrast, my unscoped 
token has the groups in the user):
  "user": {
      "domain": {
          "id": "Federated",
          "name": "Federated"
      },
      "id": "myUser",
      "name": "myUser",
      "OS-FEDERATION": {
          "identity_provider": {
              "id": "myIdP"
          },
          "protocol": {"id": "saml2"}
      }
  }

  If I try to use the scoped token I get the error message:
  # openstack --os-token 3e68789050944e9296f1e366f63a31a8 --os-auth-url 
https://host:5000/v3 --os-identity-api-version 3 --os-cacert 
/etc/pki/trust/anchors/ca.pem --os-project-name Project1 server list
  ERROR: openstack Unable to find valid groups while using mapping saml_mapping 
(Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: 
req-eb23e61c-6f1f-4259-8ff0-92063f60b5f0)

  And this is no surprise if we debug the code for token creation and
  see that **_handle_mapped_tokens** in /usr/lib/python2.7/site-
  packages/keystone/token/providers/common.py says:

  if project_id or domain_id:
      roles = self.v3_token_data_helper._populate_roles_for_groups(
          group_ids, project_id, domain_id, user_id)
      token_data.update({'roles': roles})
  else:
      token_data['user'][federation.FEDERATION].update({
          'groups': [{'id': x} for x in group_ids]
      })
  return token_data

[Yahoo-eng-team] [Bug 1540386] Re: Unexpected API Error

2016-02-03 Thread Augustina Ragwitz
Thanks for taking the time to submit a bug! Unfortunately the error
message you get is a bit misleading. Please verify your configuration
and if you find something more specific that looks like a bug related to
this issue, feel free to reopen it by setting the Status as New.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1540386

Title:
  Unexpected  API Error

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I tried to launch, via the web interface, an Ubuntu image generated with qemu.
  The error message I received is the following:

  Error: Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if
  possible.  (HTTP 500) (Request-ID: req-
  ce828cb7-6cca-45c1-9c3b-6703110e03a6)

  I do not know where to locate the nova log. Sorry.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1540386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541670] [NEW] lbaas tests gate on dib

2016-02-03 Thread Doug Wiegley
Public bug reported:

... but dib isn't gated anywhere in real time like octavia, since
nodepool uses asynchronously built images. This causes octavia to break
first on any dib break. This is too brittle.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541670

Title:
  lbaas tests gate on dib

Status in neutron:
  New

Bug description:
  ... but dib isn't gated anywhere in real time like octavia, since
  nodepool uses asynchronously built images. This causes octavia to break
  first on any dib break. This is too brittle.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537903] Re: Glance metadefs for OS::Nova::Instance should be OS::Nova::Server

2016-02-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/272271
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=33ae05ee8687af39b3bddc5b448ace2a412dbf68
Submitter: Jenkins
Branch:master

commit 33ae05ee8687af39b3bddc5b448ace2a412dbf68
Author: Travis Tripp 
Date:   Mon Jan 25 14:01:09 2016 -0700

Change Metadefs OS::Nova::Instance to OS::Nova::Server

The metadata definitions in etc/metadefs allow each namespace to be 
associated
with a resource type in OpenStack. Now that Horizon is supporting adding
metadata to instances (just got in during the mitaka cycle - so 
unreleased),
I realized that we used OS::Nova::Instance instead of OS::Nova::Server in
Glance. This doesn’t align with Heat [0] or Searchlight [1].

There are a couple of metadef files that have OS::Nova::Instance that need 
to
change to OS::Nova::Server. I see also that OS::Nova::Instance is in one of
the db scripts. That script simply adds some initial "resource types" to the
database. [3]. It should be noted that there is no hard dependency on that
resource type to be in the DB script. You can add new resource types at any
time via API or JSON files and they are automatically added.

I'm not sure if the change to the db script needs to be done in a different
patch or not, but that can easily be accommodated.

See bug for additional links.

Change-Id: I196ce1d9a62a61027ccd444b17a30b2c018d9c84
Closes-Bug: 1537903


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1537903

Title:
  Glance metadefs for OS::Nova::Instance should be OS::Nova::Server

Status in Glance:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The metadata definitions in etc/metadefs allow each namespace to be
  associated with a resource type in OpenStack.  Now that Horizon is
  supporting adding metadata to instances (just got in during the mitaka
  cycle - so unreleased), I realized that we used OS::Nova::Instance
  instead of OS::Nova::Server in Glance.  This doesn’t align with Heat
  [0] or Searchlight [1].

  Glance Issue:

  There are a couple of metadef files that have OS::Nova::Instance that
  need to change to OS::Nova::Server.  I see also that OS::Nova::Instance
  is in one of the db scripts.  That script simply adds some initial
  "resource types" to the database. [3]. It should be noted that there
  is no hard dependency on that resource type in the DB script. You can
  add new resource types at any time via API or JSON files and they are
  automatically added.

  I'm not sure if the change to the db script needs to be done in a
  different patch or not, but that can easily be accommodated.

  Horizon Issue:

  The instance update metadata action and NG launch instance should
  retrieve OS::Nova::Server instead.  The Horizon patch shouldn't merge
  until the glance patch merges, but there is not an actual hard
  dependency between the two.

  [0] 
http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server
  [1] 
https://github.com/openstack/searchlight/blob/master/searchlight/elasticsearch/plugins/nova/servers.py#L35
  [3] 
https://github.com/openstack/glance/search?utf8=%E2%9C%93&q=%22OS%3A%3ANova%3A%3AInstance%22

  Finally:

  It should be noted that updating namespaces in Glance is already
  possible with glance-manage. E.g.

  ttripp@ubuntu:/opt/stack/glance$ glance-manage db_load_metadefs etc/metadefs -h
  usage: glance-manage db_load_metadefs [-h] [path] [merge] [prefer_new] [overwrite]

  positional arguments:
path
merge
prefer_new
overwrite

  So, you just have to call:

  ttripp@ubuntu:/opt/stack/glance$ glance-manage db_load_metadefs
  etc/metadefs true true

  See also: https://youtu.be/zJpHXdBOoeM
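  The file-side rename itself is mechanical; a hedged sketch (the directory
  layout and JSON file naming are assumptions based on the tree linked above,
  and this helper is not part of glance-manage) could be:

```python
import os

def rename_resource_type(metadef_dir, old='OS::Nova::Instance',
                         new='OS::Nova::Server'):
    """Rewrite a resource-type name in every metadef JSON file in place."""
    changed = []
    for name in sorted(os.listdir(metadef_dir)):
        if not name.endswith('.json'):
            continue
        path = os.path.join(metadef_dir, name)
        with open(path) as f:
            text = f.read()
        if old in text:
            with open(path, 'w') as f:
                f.write(text.replace(old, new))
            changed.append(name)
    return changed
```

  After running it against etc/metadefs, the db_load_metadefs call above would
  pick up the renamed associations.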

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1537903/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537292] Re: 'web_framework' option has no effect

2016-02-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/271587
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=d21d9798fe321c54941af5873b0202abc47f56d8
Submitter: Jenkins
Branch:master

commit d21d9798fe321c54941af5873b0202abc47f56d8
Author: Kevin Benton 
Date:   Fri Jan 22 15:20:50 2016 -0800

Don't decide web_framework before config parse

This ensures that the config files passed on the CLI
are parsed before the choice is made based on config options
to launch pecan/homegrown wsgi handler.

Closes-Bug: #1537292
Change-Id: I577befea74d84d611cd198c92c54ae3651e2d921


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1537292

Title:
  'web_framework' option has no effect

Status in neutron:
  Fix Released

Bug description:
  The 'web_framework' option does not work correctly when the process is
  invoked via the command line because the --config-file arguments are
  not loaded at the time it is called so it always defaults to 'legacy'.
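  The pitfall generalizes: any option read before the config sources are
  parsed silently returns its default. A minimal standalone illustration
  (plain Python; the class and option handling are invented for the sketch,
  not the actual oslo.config/neutron code):

```python
# Stand-in for "deciding web_framework before config parse": reading
# the option before parse() always sees the default ('legacy').

class FakeConf:
    def __init__(self):
        self.web_framework = 'legacy'  # default value

    def parse(self, argv):
        # Stand-in for oslo.config processing --config-file contents.
        for arg in argv:
            if arg.startswith('--web-framework='):
                self.web_framework = arg.split('=', 1)[1]

def choose_server(conf):
    return 'pecan' if conf.web_framework == 'pecan' else 'legacy'

conf = FakeConf()
buggy_choice = choose_server(conf)      # decided before parsing
conf.parse(['--web-framework=pecan'])
fixed_choice = choose_server(conf)      # decided after parsing
```

  The fix referenced above moves the decision to after the parse, as in
  `fixed_choice` here.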

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1537292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251224] Re: ICMP security group rules should have a type and code params instead of using "--port-range-min" and "--port-range-max"

2016-02-03 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251224

Title:
  ICMP security group rules should have a type and code params instead
  of using "--port-range-min" and "--port-range-max"

Status in neutron:
  Expired

Bug description:
  Version
  ==
  Havana on rhel

  Description
  =
  I couldn't find a doc specifying whether ICMP security group rules ignore the 
"--port-range-min" and "--port-range-max" params or use them as type and code, 
as we know from nova security group rules.
  I think that it should be:

  i. Well documented.
  ii. Use of "--port-range-min" and "--port-range-max" should be prohibited in the 
ICMP rule context; new switches should be created for type and code.
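  A sketch of the validation point ii asks for (the icmp_type/icmp_code
  parameter names are hypothetical, not actual neutron API fields):

```python
# Hypothetical validation: reject the generic port-range switches for
# ICMP rules and accept dedicated icmp_type/icmp_code values instead.
# Parameter names are illustrative, not the real neutron API.

def validate_icmp_rule(protocol, port_range_min=None, port_range_max=None,
                       icmp_type=None, icmp_code=None):
    if protocol != 'icmp':
        return {'protocol': protocol,
                'port_range_min': port_range_min,
                'port_range_max': port_range_max}
    if port_range_min is not None or port_range_max is not None:
        raise ValueError(
            'use icmp_type/icmp_code for ICMP rules, not port ranges')
    # ICMP type and code are each a single octet.
    if icmp_type is not None and not 0 <= icmp_type <= 255:
        raise ValueError('ICMP type must be 0-255')
    if icmp_code is not None and not 0 <= icmp_code <= 255:
        raise ValueError('ICMP code must be 0-255')
    return {'protocol': 'icmp', 'icmp_type': icmp_type,
            'icmp_code': icmp_code}
```

  With such a check, an echo-request rule would be written as type 8 /
  code 0 instead of overloading the port-range fields.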

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1251224/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541702] [NEW] Unable to see the attached cinder volume in instance

2016-02-03 Thread Zhou Jianming
Public bug reported:

 I have successfully installed OpenStack Kilo + XenServer 6.5. I have added a 
Cinder service; it seems to be working because I can connect to my Horizon (also 
by command line) and create volumes and attach them to my instances. The instance 
name in OpenStack is aaa and its name in XenServer is instance-011e. This 
is a screenshot from my Horizon:
|---
| Name         Size   Status   Attached To                     Availability Zone
| xen_volume2  1GB    In-Use   Attached to aaa on /dev/xvdb    demo
|---
 
  Taking a look at XenCenter connected to my XenServer, it seems like the iSCSI 
mapping is working fine; this is a screenshot:
|
| Position  Name            Description  SR
| 0         instance-011e   root         Local storage on xenserver2
| 1         tempSR-b76853a3-86f2-4f0a-b78e-eaa4c883172a
|
 
  The tempSR-b76853a3... is the mapping of xen_volume2 to my aaa. This 
storage's type is VDI-per-LUN iSCSI. According to the cinder mapping 
information the device is mapped to xvdb, but I can't find this volume (dmesg and 
fdisk -l don't show anything). I have noticed two strange behaviors.
 
1.  After attaching the volume, booting the instance takes a lot of time.

2.  After attaching the volume and restarting it (and waiting a lot), dmesg 
contains this:
[ 0.164935] blkfront: xvda: barrier: enabled
[ 0.179729] xvda: xvda1
[ 0.184022] Setting capacity to 2097152
[ 0.184043] xvda: detected capacity change from 0 to 1073741824
[ 5.260049] XENBUS: Waiting for devices to initialise: 
295s...290s...285s...280s...275s...270s...265s...260s...255s...250s...245s...240s...235s...230s...
225s...220s...215s...210s...205s...200s...195s...190s...185s...180s...175s...170s...165s...160s...
155s...150s...145s...140s...135s...130s...125s...120s...115s...110s...105s...100s...95s...90s...85s
...80s...75s...70s...65s...60s...55s...50s...45s...40s...35s...30s...25s...20s...15s...10s...5s...0s...
 
If I remove the volume and reboot this is not present.
 
/var/log/messages @ XenServer 6.5 shows
Jul 16 19:32:09 xenserver2 xapi: [info|xenserver2|6599 storage_unix 
|VDI.activate D:b152d00698ae|storage-access] vbd/4/xvdb 
sr:OpaqueRef:e47be30a-b987-a1d8-b7b4-903cca36db53 does not support 
vdi_activate: doing nothing

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1541702

Title:
  Unable to see the attached cinder volume in instance

Status in OpenStack Compute (nova):
  New

Bug description:
   I have successfully installed OpenStack Kilo + XenServer 6.5. I have added a 
Cinder service; it seems to be working because I can connect to my Horizon (also 
by command line) and create volumes and attach them to my instances. The instance 
name in OpenStack is aaa and its name in XenServer is instance-011e. This 
is a screenshot from my Horizon:
  |---
  | Name         Size   Status   Attached To                     Availability Zone
  | xen_volume2  1GB    In-Use   Attached to aaa on /dev/xvdb    demo
  |---
   
  Taking a look at XenCenter connected to my XenServer, it seems like the iSCSI 
mapping is working fine; this is a screenshot:
  |
  | Position  Name            Description  SR
  | 0         instance-011e   root         Local storage on xenserver2
  | 1         tempSR-b76853a3-86f2-4f0a-b78e-eaa4c883172a
  |
   
  The tempSR-b76853a3... is the mapping of xen_volume2 to my aaa. 
This storage's type is VDI-per-LUN iSCSI. According to the cinder mapping 
information the device is mapped to xvdb, but I can't find this volume (dmesg and 
fdisk -l don't show anything). I have noticed two strange behaviors.

  1.  After attaching the volume, booting the instance takes a lot of time.

[Yahoo-eng-team] [Bug 1541714] [NEW] DVR routers are not created on a compute node that runs agent in 'dvr' mode

2016-02-03 Thread Swaminathan Vasudevan
Public bug reported:

DVR routers are not created on a compute node that is running L3 agent
in "dvr" mode.

This might have been introduced by the latest patch that changed the scheduling 
behavior.
https://review.openstack.org/#/c/254837/

Steps to reproduce:

1. Stack up two nodes: a dvr_snat node and a dvr node.
2. Create a Network
3. Create a Subnet
4. Create a Router
5. Add Subnet to the Router
6. Create a VM on the "dvr_snat" node.
Everything works fine here. We can see the router-namespace, snat-namespace and 
the dhcp-namespace.

7. Now create a VM and force the VM to be created on the second node (dvr 
node).
  - nova boot --flavor xyz --image abc --nic net-id=yyy-id --availability-zone 
nova:dvr-node myinstance2

Now see that the instance is created on the second node.
But the router namespace is missing on the second node.

The router is scheduled to the dvr-snat node, but not to the compute
node.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541714

Title:
  DVR routers are not created on a compute node that runs agent in 'dvr'
  mode

Status in neutron:
  New

Bug description:
  DVR routers are not created on a compute node that is running L3 agent
  in "dvr" mode.

  This might have been introduced by the latest patch that changed the 
scheduling behavior.
  https://review.openstack.org/#/c/254837/

  Steps to reproduce:

  1. Stack up two nodes: a dvr_snat node and a dvr node.
  2. Create a Network
  3. Create a Subnet
  4. Create a Router
  5. Add Subnet to the Router
  6. Create a VM on the "dvr_snat" node.
  Everything works fine here. We can see the router-namespace, snat-namespace 
and the dhcp-namespace.

  7. Now create a VM and force the VM to be created on the second node (dvr 
node).
- nova boot --flavor xyz --image abc --nic net-id=yyy-id 
--availability-zone nova:dvr-node myinstance2

  Now see that the instance is created on the second node.
  But the router namespace is missing on the second node.

  The router is scheduled to the dvr-snat node, but not to the compute
  node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541714/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp