[Yahoo-eng-team] [Bug 2006580] [NEW] Network quota missing on Horizon

2023-02-08 Thread Rico Lin
Public bug reported:

If we compare the network quotas shown in the project list with those from
the CLI, we can see that some quotas are missing, like `RBAC_policy`. I'm not
quite familiar with the previous discussion around this, but if it's
possible, I think it would definitely be beneficial to add quota edit support
for projects (a quick way to dump the full field list is sketched after the
two lists below).
This is the list that currently shows in the quota defaults:
Endpoint Group
Floating IPs
Ikepolicy
Ipsec Site Connection
Ipsecpolicy
Networks
Ports
RBAC Policies
Routers
Security Group Rules
Security Groups
Subnet Pool
Subnets
Trunks
Vpnservice

And this is the list we currently support modifying:
Networks
Subnets 
Ports 
Routers 
Floating IPs 
Security Groups 
Security Group Rules
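
As mentioned above, a quick way to dump every network quota field the Neutron
API actually exposes, for comparison against the two lists, is a short
openstacksdk snippet (a minimal sketch; the cloud name and project ID are
placeholders, and the field names are the SDK's, not Horizon's labels):

    import openstack

    # Connect using a clouds.yaml entry ("devstack" is an assumed name).
    conn = openstack.connect(cloud="devstack")

    # Fetch the per-project network quota; the result includes fields such
    # as rbac_policy and trunk that the Horizon edit form does not expose.
    quota = conn.network.get_quota("PROJECT_ID")
    print(sorted(quota.to_dict().keys()))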

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2006580

Title:
  Network quota missing on Horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If we compare the network quotas shown in the project list with those from
  the CLI, we can see that some quotas are missing, like `RBAC_policy`. I'm
  not quite familiar with the previous discussion around this, but if it's
  possible, I think it would definitely be beneficial to add quota edit
  support for projects.
  This is the list that currently shows in the quota defaults:
  Endpoint Group
  Floating IPs
  Ikepolicy
  Ipsec Site Connection
  Ipsecpolicy
  Networks
  Ports
  RBAC Policies
  Routers
  Security Group Rules
  Security Groups
  Subnet Pool
  Subnets
  Trunks
  Vpnservice

  And this is the list we currently support modifying:
  Networks
  Subnets 
  Ports 
  Routers 
  Floating IPs 
  Security Groups 
  Security Group Rules

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2006580/+subscriptions




[Yahoo-eng-team] [Bug 1921075] [NEW] [arm64][libvirt] fail to load json from firmware metadata files

2021-03-24 Thread Rico Lin
Public bug reported:

Found an error in [3] for libvirt with Ubuntu Focal on arm64. We fail to
load JSON from the QEMU firmware metadata files with errors [1][2]:

Instance failed to spawn: TypeError: can't concat str to bytes
Traceback (most recent call last):
  File "/opt/stack/nova/nova/compute/manager.py", line 2620, in _build_resources
    yield resources
  File "/opt/stack/nova/nova/compute/manager.py", line 2389, in _build_and_run_instance
    self.driver.spawn(context, instance, image_meta,
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3877, in spawn
    xml = self._get_guest_xml(context, instance, network_info,
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6721, in _get_guest_xml
    conf = self._get_guest_config(instance, network_info, image_meta,
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6334, in _get_guest_config
    self._configure_guest_by_virt_type(guest, instance, image_meta, flavor)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5943, in _configure_guest_by_virt_type
    loader, nvram_template = self._host.get_loader(
  File "/opt/stack/nova/nova/virt/libvirt/host.py", line 1636, in get_loader
    for loader in self.loaders:
  File "/opt/stack/nova/nova/virt/libvirt/host.py", line 1619, in loaders
    self._loaders = _get_loaders()
  File "/opt/stack/nova/nova/virt/libvirt/host.py", line 112, in _get_loaders
    spec = jsonutils.load(fh)
  File "/usr/local/lib/python3.8/dist-packages/oslo_serialization/jsonutils.py", line 261, in load
    return json.load(codecs.getreader(encoding)(fp), **kwargs)
  File "/usr/lib/python3.8/json/__init__.py", line 293, in load
    return loads(fp.read(),
  File "/usr/lib/python3.8/codecs.py", line 500, in read
    data = self.bytebuffer + newdata
TypeError: can't concat str to bytes


Environment:
[1] http://paste.openstack.org/show/803788/
[2] 
https://zuul.opendev.org/t/openstack/build/312d8e45b079460496d90f1d940c174c/log/controller/logs/screen-n-cpu.txt#22708
[3] https://review.opendev.org/c/openstack/devstack/+/708317
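
The failure can be reproduced outside nova (a minimal sketch; the descriptor
path is only an example). codecs.getreader() expects a byte stream, so
wrapping a text-mode file handle makes the reader's internal bytebuffer
(bytes) get concatenated with the str chunks it reads:

    import codecs
    import json

    PATH = "/usr/share/qemu/firmware/60-edk2-aarch64.json"  # example path

    # Text-mode handle: codecs.StreamReader.read() executes
    # `data = self.bytebuffer + newdata`, and newdata is str here,
    # so it raises TypeError: can't concat str to bytes.
    with open(PATH) as fh:
        json.load(codecs.getreader("utf-8")(fh))

    # Opening the file in binary mode avoids the mismatch:
    with open(PATH, "rb") as fh:
        spec = json.load(codecs.getreader("utf-8")(fh))

This suggests the firmware metadata files are being opened in text mode
before being handed to jsonutils.load().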

** Affects: nova
 Importance: Undecided
 Status: New


[Yahoo-eng-team] [Bug 1921073] [NEW] [arm64][libvirt] firmware metadata files not found for arm64 on ubuntu 18.04

2021-03-24 Thread Rico Lin
Public bug reported:

From the devstack arm64 job patch [1], I found this error [2][3] when using
bionic images on an arm64 environment:


Failed to build and run instance: nova.exception.InternalError: Failed to locate firmware descriptor files
Traceback (most recent call last):
  File "/opt/stack/nova/nova/compute/manager.py", line 2393, in _build_and_run_instance
    accel_info=accel_info)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3880, in spawn
    mdevs=mdevs, accel_info=accel_info)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6723, in _get_guest_xml
    context, mdevs, accel_info)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6334, in _get_guest_config
    self._configure_guest_by_virt_type(guest, instance, image_meta, flavor)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5945, in _configure_guest_by_virt_type
    has_secure_boot=guest.os_loader_secure)
  File "/opt/stack/nova/nova/virt/libvirt/host.py", line 1636, in get_loader
    for loader in self.loaders:
  File "/opt/stack/nova/nova/virt/libvirt/host.py", line 1619, in loaders
    self._loaders = _get_loaders()
  File "/opt/stack/nova/nova/virt/libvirt/host.py", line 102, in _get_loaders
    raise exception.InternalError(msg)
nova.exception.InternalError: Failed to locate firmware descriptor files

As I moved [1] to the focal version, this error message disappeared.
The weird part is that I can locate the firmware descriptor files on my AWS
bionic arm64 test environment, right at the expected path. So I'm not sure
what exactly happened there.


[1] https://review.opendev.org/c/openstack/devstack/+/708317

[2] 
https://zuul.opendev.org/t/openstack/build/77b0d998c9f14e1b859467016dfb7852/log/controller/logs/screen-n-cpu.txt#9821
  
[3] http://paste.openstack.org/show/803786/
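
For context, the lookup that fails here is essentially a scan for QEMU
firmware descriptor JSON files (a rough sketch following the QEMU firmware
descriptor convention; the exact directories and ordering nova uses may
differ):

    import glob
    import os

    # Directories from the QEMU firmware descriptor convention.
    SEARCH_DIRS = [
        "/usr/share/qemu/firmware",
        "/etc/qemu/firmware",
    ]

    descriptors = []
    for d in SEARCH_DIRS:
        descriptors.extend(sorted(glob.glob(os.path.join(d, "*.json"))))

    if not descriptors:
        # Mirrors the InternalError nova raises in _get_loaders().
        raise RuntimeError("Failed to locate firmware descriptor files")
    print(descriptors)

One plausible explanation is that the bionic qemu/edk2 packages predate the
descriptor-file convention and ship no *.json descriptors, while the focal
packages do.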

** Affects: nova
 Importance: Undecided
 Status: New

** Summary changed:

- firmware metadata files not found for arm64 on ubuntu 18.04
+ [arm64][libvirt] firmware metadata files not found for arm64 on ubuntu 18.04

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1921073

Title:
  [arm64][libvirt] firmware metadata files not found for arm64 on ubuntu
  18.04

Status in OpenStack Compute (nova):
  New

Bug description:
  From the devstack arm64 job patch [1], I found this error [2][3] when
  using bionic images on an arm64 environment:

  
  Failed to build and run instance: nova.exception.InternalError: Failed to locate firmware descriptor files
  Traceback (most recent call last):
    File "/opt/stack/nova/nova/compute/manager.py", line 2393, in _build_and_run_instance
      accel_info=accel_info)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3880, in spawn
      mdevs=mdevs, accel_info=accel_info)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6723, in _get_guest_xml
      context, mdevs, accel_info)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6334, in _get_guest_config
      self._configure_guest_by_virt_type(guest, instance, image_meta, flavor)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5945, in _configure_guest_by_virt_type
      has_secure_boot=guest.os_loader_secure)
    File "/opt/stack/nova/nova/virt/libvirt/host.py", line 1636, in get_loader
      for loader in self.loaders:
    File "/opt/stack/nova/nova/virt/libvirt/host.py", line 1619, in loaders
      self._loaders = _get_loaders()
    File "/opt/stack/nova/nova/virt/libvirt/host.py", line 102, in _get_loaders
      raise exception.InternalError(msg)
  nova.exception.InternalError: Failed to locate firmware descriptor files

  As I moved [1] to the focal version, this error message disappeared.
  The weird part is that I can locate the firmware descriptor files on my
  AWS bionic arm64 test environment, right at the expected path. So I'm not
  sure what exactly happened there.

  
  [1] https://review.opendev.org/c/openstack/devstack/+/708317

  [2] 
https://zuul.opendev.org/t/openstack/build/77b0d998c9f14e1b859467016dfb7852/log/controller/logs/screen-n-cpu.txt#9821
  
  [3] http://paste.openstack.org/show/803786/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1921073/+subscriptions



[Yahoo-eng-team] [Bug 1858061] [NEW] python3: glance fails to create image

2020-01-01 Thread Rico Lin
Public bug reported:

Found error [1] in the gate [2].
It's not caught anywhere, so a 500 Internal Server Error is raised to heat.

Test devstack environment: stable/queens with py35


[1] http://paste.openstack.org/show/787997/
[2] 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_e2e/700584/14/check/heat-functional-convg-queens-py35/e2e3abc/logs/screen-g-api.txt.gz


logs:
 ERROR glance.common.wsgi [None req-9ec8f36b-ec7a-45fd-9a2b-cb2e19e47af8 admin admin] Caught error: maximum recursion depth exceeded while calling a Python object: RecursionError: maximum recursion depth exceeded while calling a Python object
 ERROR glance.common.wsgi Traceback (most recent call last):
 ERROR glance.common.wsgi   File "/opt/stack/new/glance/glance/common/wsgi.py", line 1227, in __call__
 ERROR glance.common.wsgi     request, **action_args)
 ERROR glance.common.wsgi   File "/opt/stack/new/glance/glance/common/wsgi.py", line 1270, in dispatch
 ERROR glance.common.wsgi     return method(*args, **kwargs)
 ERROR glance.common.wsgi   File "/opt/stack/new/glance/glance/common/utils.py", line 414, in wrapped
 ERROR glance.common.wsgi     return func(self, req, *args, **kwargs)
 ERROR glance.common.wsgi   File "/opt/stack/new/glance/glance/api/v2/images.py", line 66, in create
 ERROR glance.common.wsgi     tags=tags, **image)
 ERROR glance.common.wsgi   File "/opt/stack/new/glance/glance/api/authorization.py", line 201, in new_image
 ERROR glance.common.wsgi     return super(ImageFactoryProxy, self).new_image(owner=owner, **kwargs)
 ERROR glance.common.wsgi   File "/opt/stack/new/glance/glance/domain/proxy.py", line 145, in new_image
 ERROR glance.common.wsgi     return self.helper.proxy(self.base.new_image(**kwargs))
 ERROR glance.common.wsgi   File "/opt/stack/new/glance/glance/domain/proxy.py", line 145, in new_image
 ERROR glance.common.wsgi     return self.helper.proxy(self.base.new_image(**kwargs))
 ERROR glance.common.wsgi   File "/opt/stack/new/glance/glance/api/policy.py", line 219, in new_image
 ERROR glance.common.wsgi     return super(ImageFactoryProxy, self).new_image(**kwargs)
 ERROR glance.common.wsgi   File "/opt/stack/new/glance/glance/domain/proxy.py", line 145, in new_image
 ERROR glance.common.wsgi     return self.helper.proxy(self.base.new_image(**kwargs))
 ERROR glance.common.wsgi   File "/opt/stack/new/glance/glance/quota/__init__.py", line 130, in new_image
 ERROR glance.common.wsgi     return super(ImageFactoryProxy, self).new_image(tags=tags, **kwargs)
 ERROR glance.common.wsgi   File "/opt/stack/new/glance/glance/domain/proxy.py", line 145, in new_image
 ERROR glance.common.wsgi     return self.helper.proxy(self.base.new_image(**kwargs))
 ERROR glance.common.wsgi   File "/opt/stack/new/glance/glance/domain/proxy.py", line 39, in proxy
 ERROR glance.common.wsgi     return self.proxy_class(obj, **self.proxy_kwargs)
 ERROR glance.common.wsgi   File "/opt/stack/new/glance/glance/quota/__init__.py", line 294, in __init__
 ERROR glance.common.wsgi     self.orig_props = set(image.extra_properties.keys())
 ERROR glance.common.wsgi   File "/opt/stack/new/glance/glance/domain/__init__.py", line 308, in keys
 ERROR glance.common.wsgi     return dict(self).keys()
 ERROR glance.common.wsgi   File "/opt/stack/new/glance/glance/domain/__init__.py", line 308, in keys
 ERROR glance.common.wsgi     return dict(self).keys()
 ERROR glance.common.wsgi   File "/opt/stack/new/glance/glance/domain/__init__.py", line 308, in keys
 ERROR glance.common.wsgi     return dict(self).keys()
 ERROR glance.common.wsgi   [Previous line repeated 203 more times]
 ERROR glance.common.wsgi RecursionError: maximum recursion depth exceeded while calling a Python object
 ERROR glance.common.wsgi
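
The tail of the traceback shows the defect: on Python 3, dict(mapping)
consults the mapping's own keys() to copy it, so a keys() implemented as
`return dict(self).keys()` re-enters itself. A self-contained sketch of the
pattern (the real glance ExtraProperties class differs in detail):

    from collections.abc import MutableMapping

    class ExtraProperties(MutableMapping):
        def __init__(self, data=None):
            self._data = dict(data or {})

        def __getitem__(self, key): return self._data[key]
        def __setitem__(self, key, value): self._data[key] = value
        def __delitem__(self, key): del self._data[key]
        def __iter__(self): return iter(self._data)
        def __len__(self): return len(self._data)

        def keys(self):
            # dict(self) calls self.keys() to copy the mapping, so this
            # line recurses until the interpreter hits the recursion limit.
            return dict(self).keys()

    ExtraProperties({"a": 1}).keys()  # RecursionError on Python 3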

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1858061

Title:
  python3: glance fails to create image

Status in Glance:
  New

Bug description:
  Found error [1] in the gate [2].
  It's not caught anywhere, so a 500 Internal Server Error is raised to heat.

  Test devstack environment: stable/queens with py35


  [1] http://paste.openstack.org/show/787997/
  [2] 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_e2e/700584/14/check/heat-functional-convg-queens-py35/e2e3abc/logs/screen-g-api.txt.gz



[Yahoo-eng-team] [Bug 1847408] [NEW] Unexpected API Error with Virtual Interface creation failed

2019-10-08 Thread Rico Lin
Public bug reported:

While checking a heat job failure at
https://0192e1baed07113a07bc-364710e13fd987d48278cddc9e42329d.ssl.cf2.rackcdn.com/569582/2/gate/heat-functional-orig-mysql-lbaasv2/3a3b5c7/job-output.txt

an Unexpected API Error was found in
https://0192e1baed07113a07bc-364710e13fd987d48278cddc9e42329d.ssl.cf2.rackcdn.com/569582/2/gate/heat-functional-orig-mysql-lbaasv2/3a3b5c7/logs/screen-n-api.txt.gz


Oct 08 16:41:58.733573 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]: INFO nova.api.openstack.requestlog [None req-d795e47f-e8b5-4c7d-bb6a-ee9e9809eb3b demo demo] 158.69.70.147 "GET /compute/v2.1/servers/test_in_place_update" status: 404 len: 95 microversion: 2.79 time: 0.009925
Oct 08 16:41:58.734390 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]: [pid: 6610|app: 0|req: 120/245] 158.69.70.147 () {60 vars in 1245 bytes} [Tue Oct  8 16:41:58 2019] GET /compute/v2.1/servers/test_in_place_update => generated 95 bytes in 12 msecs (HTTP/1.1 404) 9 headers in 380 bytes (1 switches on core 0)
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]: ERROR nova.api.openstack.wsgi [None req-ec844008-6574-4600-9459-551bb8f3281a demo demo] Unexpected exception in API method: VirtualInterfaceCreateException_Remote: Virtual Interface creation failed
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]: Traceback (most recent call last):
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:   File "/opt/stack/new/nova/nova/conductor/manager.py", line 135, in _object_dispatch
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:     return getattr(target, method)(*args, **kwargs)
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:   File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 226, in wrapper
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:     return fn(self, *args, **kwargs)
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:   File "/opt/stack/new/nova/nova/objects/virtual_interface.py", line 103, in create
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:     db_vif = db.virtual_interface_create(self._context, updates)
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:   File "/opt/stack/new/nova/nova/db/api.py", line 706, in virtual_interface_create
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:     return IMPL.virtual_interface_create(context, values)
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:   File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 180, in wrapper
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:     return f(*args, **kwargs)
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:   File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 223, in wrapped
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:     return f(context, *args, **kwargs)
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:   File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 1565, in virtual_interface_create
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:     raise exception.VirtualInterfaceCreateException()
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]: VirtualInterfaceCreateException: Virtual Interface creation failed
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]: Traceback (most recent call last):
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:     res = self.dispatcher.dispatch(message)
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 274, in dispatch
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:     return self._do_dispatch(endpoint, method, ctxt, args)
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 194, in _do_dispatch
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 devstack@n-api.service[6609]:     result = func(ctxt, **new_args)
Oct 08 16:42:02.184054 ubuntu-bionic-ovh-bhs1-0012213338 

[Yahoo-eng-team] [Bug 1723856] Re: lbaasv2 tests fail with error

2017-10-25 Thread Rico Lin
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1723856

Title:
  lbaasv2 tests fail with error

Status in neutron:
  In Progress

Bug description:
  Noticed at:

  http://logs.openstack.org/52/511752/3/check/legacy-heat-dsvm-functional-convg-mysql-lbaasv2/dcd512d/job-output.txt.gz

  
  lbaasv2 agent log:

  http://logs.openstack.org/52/511752/3/check/legacy-heat-dsvm-functional-convg-mysql-lbaasv2/dcd512d/logs/screen-q-lbaasv2.txt.gz?#_Oct_16_02_26_51_171646

  
  May be due to https://review.openstack.org/#/c/505701/

  traceback:

  2017-10-16 02:45:43.838922 | primary | 2017-10-16 02:45:43.838 | ==
  2017-10-16 02:45:43.840365 | primary | 2017-10-16 02:45:43.840 | Failed 2 tests - output below:
  2017-10-16 02:45:43.842320 | primary | 2017-10-16 02:45:43.841 | ==
  2017-10-16 02:45:43.843926 | primary | 2017-10-16 02:45:43.843 |
  2017-10-16 02:45:43.845738 | primary | 2017-10-16 02:45:43.845 | heat_integrationtests.functional.test_lbaasv2.LoadBalancerv2Test.test_create_update_loadbalancer
  2017-10-16 02:45:43.847384 | primary | 2017-10-16 02:45:43.846 |
  2017-10-16 02:45:43.848836 | primary | 2017-10-16 02:45:43.848 |
  2017-10-16 02:45:43.850193 | primary | 2017-10-16 02:45:43.849 | Captured traceback:
  2017-10-16 02:45:43.851909 | primary | 2017-10-16 02:45:43.851 | ~~~
  2017-10-16 02:45:43.853340 | primary | 2017-10-16 02:45:43.852 | Traceback (most recent call last):
  2017-10-16 02:45:43.855053 | primary | 2017-10-16 02:45:43.854 |   File "/opt/stack/new/heat/heat_integrationtests/functional/test_lbaasv2.py", line 109, in test_create_update_loadbalancer
  2017-10-16 02:45:43.856727 | primary | 2017-10-16 02:45:43.856 |     parameters=parameters)
  2017-10-16 02:45:43.858396 | primary | 2017-10-16 02:45:43.857 |   File "/opt/stack/new/heat/heat_integrationtests/common/test.py", line 437, in update_stack
  2017-10-16 02:45:43.859969 | primary | 2017-10-16 02:45:43.859 |     self._wait_for_stack_status(**kwargs)
  2017-10-16 02:45:43.861455 | primary | 2017-10-16 02:45:43.861 |   File "/opt/stack/new/heat/heat_integrationtests/common/test.py", line 368, in _wait_for_stack_status
  2017-10-16 02:45:43.862957 | primary | 2017-10-16 02:45:43.862 |     fail_regexp):
  2017-10-16 02:45:43.864506 | primary | 2017-10-16 02:45:43.864 |   File "/opt/stack/new/heat/heat_integrationtests/common/test.py", line 327, in _verify_status
  2017-10-16 02:45:43.866142 | primary | 2017-10-16 02:45:43.865 |     stack_status_reason=stack.stack_status_reason)
  2017-10-16 02:45:43.867842 | primary | 2017-10-16 02:45:43.867 | heat_integrationtests.common.exceptions.StackBuildErrorException: Stack LoadBalancerv2Test-1022777367/f0a78a75-c1ed-4921-a7f7-c4028f3f60c3 is in UPDATE_FAILED status due to 'Resource UPDATE failed: ResourceInError: resources.loadbalancer: Went to status ERROR due to "Unknown"'
  2017-10-16 02:45:43.869183 | primary | 2017-10-16 02:45:43.868 |
  2017-10-16 02:45:43.870571 | primary | 2017-10-16 02:45:43.870 |
  2017-10-16 02:45:43.872501 | primary | 2017-10-16 02:45:43.872 | heat_integrationtests.scenario.test_autoscaling_lbv2.AutoscalingLoadBalancerv2Test.test_autoscaling_loadbalancer_neutron
  2017-10-16 02:45:43.874213 | primary | 2017-10-16 02:45:43.873 |
  2017-10-16 02:45:43.875784 | primary | 2017-10-16 02:45:43.875 |
  2017-10-16 02:45:43.877352 | primary | 2017-10-16 02:45:43.876 | Captured traceback:
  2017-10-16 02:45:43.878767 | primary | 2017-10-16 02:45:43.878 | ~~~
  2017-10-16 02:45:43.880302 | primary | 2017-10-16 02:45:43.879 | Traceback (most recent call last):
  2017-10-16 02:45:43.881941 | primary | 2017-10-16 02:45:43.881 |   File "/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", line 97, in test_autoscaling_loadbalancer_neutron
  2017-10-16 02:45:43.883543 | primary | 2017-10-16 02:45:43.883 |     self.check_num_responses(lb_url, 1)
  2017-10-16 02:45:43.884968 | primary | 2017-10-16 02:45:43.884 |   File "/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", line 51, in check_num_responses
  2017-10-16 02:45:43.886354 | primary | 2017-10-16 02:45:43.885 |     self.assertEqual(expected_num, len(resp))
  2017-10-16 02:45:43.887791 | primary | 2017-10-16 02:45:43.887 |   File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 411, in assertEqual
  2017-10-16 02:45:43.889172 | primary | 2017-10-16 02:45:43.888 |     self.assertThat(observed, matcher, message)
  

[Yahoo-eng-team] [Bug 1665851] Re: Newton: Heat not validating images

2017-09-28 Thread Rico Lin
Marking it as Invalid since it seems to be fixed already. Please let us know
otherwise. :)

** Changed in: heat
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1665851

Title:
  Newton: Heat not validating images

Status in Glance:
  New
Status in OpenStack Heat:
  Invalid

Bug description:
  I'm seeing this error when Heat validates the existence of an image:

  2017-02-18 00:44:03.777 7906 INFO heat.engine.resource [req-593e2bad-c87f-4308-8fe8-fe8652286201 - - - - -] Validating Server "server"
  2017-02-18 00:44:03.779 7906 DEBUG heat.engine.stack [req-593e2bad-c87f-4308-8fe8-fe8652286201 - - - - -] Property error: resources.server.properties.image: "cirros" does not validate glance.image (constraint not found) validate /usr/lib/python2.7/dist-packages/heat/engine/stack.py:825
  2017-02-18 00:44:03.783 7906 DEBUG oslo_messaging.rpc.server [req-593e2bad-c87f-4308-8fe8-fe8652286201 - - - - -] Expected exception during message handling () _process_incoming /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py:158

  The image, however, exists and is public:

  os@controller:~$ openstack image list
  +--------------------------------------+--------+--------+
  | ID                                   | Name   | Status |
  +--------------------------------------+--------+--------+
  | 7ab5d7aa-0d0d-4a38-bf05-03089f49d2d7 | cirros | active |
  +--------------------------------------+--------+--------+

  I have been updating some components related to Tacker and
  Openstackclient, so I think one of those updates triggered the bug.
  Please let me know what information to collect.

  Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1665851/+subscriptions



[Yahoo-eng-team] [Bug 1714017] Re: The 'Provide user data to instances' article was lost in the current Nova Documentation

2017-09-08 Thread Rico Lin
Not sure why heat was listed here, so marking it as Opinion for now. We
certainly welcome any documentation improvements, but it's not a bug IMO.

** Changed in: heat
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714017

Title:
  The 'Provide user data to instances' article was lost in the current
  Nova Documentation

Status in OpenStack Heat:
  Opinion
Status in OpenStack Compute (nova):
  Confirmed
Status in python-novaclient:
  New

Bug description:
  The 'Provide user data to instances' article was lost in the current
  Nova Documentation.

  There is one link to this article in Heat Documentation. We can see it
  on this page: https://docs.openstack.org/heat/latest/glossary.html in
  the 'User data' item. This link is called ' User data (OpenStack End
  User Guide)'.

  The article needs to be restored. I found the saved version on the
  Amazon website: https://ec2-54-66-129-240.ap-southeast-2.compute.amazonaws.com/httrack/docs/docs.openstack.org/user-guide/cli.html

  
  Thank you for your attention!

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1714017/+subscriptions



[Yahoo-eng-team] [Bug 1714416] Re: Incorrect response returned for invalid Accept header

2017-09-08 Thread Rico Lin
** Changed in: heat
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714416

Title:
  Incorrect response returned for invalid Accept header

Status in Cinder:
  Won't Fix
Status in Glance:
  Invalid
Status in OpenStack Heat:
  Won't Fix
Status in OpenStack Identity (keystone):
  New
Status in masakari:
  Won't Fix
Status in neutron:
  New
Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  As of now, when a user passes an 'Accept' header other than JSON or XML
  in a request using the curl command, it returns a 200 OK response with
  JSON-format data.

  The api-ref guide [1] also does not clearly mention what response should
  be returned if an invalid value for the 'Accept' header is specified.
  IMO, instead of 'HTTP 200 OK' it should return an 'HTTP 406 Not
  Acceptable' response.
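
  For illustration, a minimal WSGI sketch of the suggested behaviour
  (stdlib only; the accepted media types and the error message are
  assumptions, not the actual cinder implementation):

      import json

      JSON_OK = ("application/json", "application/*", "*/*")

      def app(environ, start_response):
          accept = environ.get("HTTP_ACCEPT", "*/*")
          offers = [t.split(";")[0].strip() for t in accept.split(",")]
          if not any(t in JSON_OK for t in offers):
              # Unsupported Accept value: refuse instead of ignoring it.
              start_response("406 Not Acceptable",
                             [("Content-Type", "application/json")])
              return [json.dumps({"error": "unsupported Accept"}).encode()]
          start_response("200 OK", [("Content-Type", "application/json")])
          return [json.dumps({"volumes": []}).encode()]

  With the request from the reproduction steps below ("Accept:
  application/abc"), this app would answer 406 instead of 200.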

  Steps to reproduce:
   
  Request:
  curl -g -i -X GET http://controller/volume/v2/c72e66cc4f1341f381e0c2eb7b28b443/volumes/detail -H "User-Agent: python-cinderclient" -H "Accept: application/abc" -H "X-Auth-Token: cd85aff745ce4dc0a04f686b52cf7e4f"
   
   
  Response:
  HTTP/1.1 200 OK
  Date: Thu, 31 Aug 2017 07:12:18 GMT
  Server: Apache/2.4.18 (Ubuntu)
  x-compute-request-id: req-ab48db9d-f869-4eb4-95f9-ef8e90a918df
  Content-Type: application/json
  Content-Length: 2681
  x-openstack-request-id: req-ab48db9d-f869-4eb4-95f9-ef8e90a918df
  Connection: close
   
  [1] https://developer.openstack.org/api-ref/block-storage/v2/#list-volumes-with-details

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1714416/+subscriptions



[Yahoo-eng-team] [Bug 1703856] Re: 502 Bad gateway error on image-create

2017-07-18 Thread Rico Lin
We still require this fix, but we have a patch to work around the issue.
Let's try to fix this in Glance.

** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1703856

Title:
  502 Bad gateway error on image-create

Status in Glance:
  Confirmed

Bug description:
  
  The glance code that I am using is from the upstream master branch (Pike).
  I just pulled down the latest code this morning and can still reproduce
  this problem.

  Up until about 2 weeks ago, I was able to upload my database image
  into glance using this command:

  glance image-create --name 'Db 12.1.0.2' --file Oracle12201DBRAC_x86_64-xvdb.qcow2 --container-format bare --disk-format qcow2

  However, now it fails as follows:

  glance --debug image-create --name 'Db 12.1.0.2' --file Oracle12201DBRAC_x86_64-xvdb.qcow2 --container-format bare --disk-format qcow2

  DEBUG:keystoneauth.session:REQ: curl -g -i -X GET http://172.16.35.10/identity -H "Accept: application/json" -H "User-Agent: glance keystoneauth1/2.21.0 python-requests/2.18.1 CPython/2.7.12"
  DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): 172.16.35.10
  DEBUG:urllib3.connectionpool:http://172.16.35.10:80 "GET /identity HTTP/1.1" 300 606
  DEBUG:keystoneauth.session:RESP: [300] Date: Wed, 12 Jul 2017 14:26:39 GMT Server: Apache/2.4.18 (Ubuntu) Vary: X-Auth-Token Content-Type: application/json Content-Length: 606 Connection: close
  RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2017-02-22T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.8", "links": [{"href": "http://172.16.35.10/identity/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://172.16.35.10/identity/v2.0/", "rel": "self"}, {"href": "https://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}}

  DEBUG:keystoneauth.identity.v3.base:Making authentication request to http://172.16.35.10/identity/v3/auth/tokens
  DEBUG:urllib3.connectionpool:Resetting dropped connection: 172.16.35.10
  DEBUG:urllib3.connectionpool:http://172.16.35.10:80 "POST /identity/v3/auth/tokens HTTP/1.1" 201 4893
  DEBUG:keystoneauth.identity.v3.base:{"token": {"is_domain": false, "methods": ["password"], "roles": [{"id": "325205c52aba4b31801e2d71ec95483b", "name": "admin"}], "expires_at": "2017-07-12T15:26:40.00Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "4aa1233111e140b2a1e4ba170881f092", "name": "demo"}, "catalog": [{"endpoints": [{"url": "http://172.16.35.10/image", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "0d10d85bc3ae4e13a49ed344fcf6f737"}], "type": "image", "id": "01c2acd1845d4dd28c5b69351fa0dbf3", "name": "glance"}, {"endpoints": [{"url": "http://172.16.35.10:8004/v1/4aa1233111e140b2a1e4ba170881f092", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "0fbba7f276e44921ba112edd1e157561"}, {"url": "http://172.16.35.10:8004/v1/4aa1233111e140b2a1e4ba170881f092", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "72abdff47e2940f09db32720b709d01f"}, {"url": "http://172.16.35.10:8004/v1/4aa1233111e140b2a1e4ba170881f092", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "d2789811c71342d69d69e45c09268ebc"}], "type": "orchestration", "id": "343101b65cba48afafb5b70fcbae5c3d", "name": "heat"}, {"endpoints": [{"url": "http://172.16.35.10/compute/v2/4aa1233111e140b2a1e4ba170881f092", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "d7fe183ce05d46d986c7ec7600b583a5"}], "type": "compute_legacy", "id": "3d75e8b88ed14f95b162b5398acfde82", "name": "nova_legacy"}, {"endpoints": [{"url": "http://172.16.35.10:8082", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "65e5e92c5646468583f033cfb05ae0cb"}, {"url": "http://172.16.35.10:8082", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "8cbae4cbce354314aa5f2b5e5c4e4592"}, {"url": "http://172.16.35.10:8082", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "d761a53278654ac690fb56b42752c1a4"}], "type": "application-catalog", "id": "42038a7b5c744771842615613d21f2ba", "name": "murano"}, {"endpoints": [{"url": "http://172.16.35.10/identity", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "4b5c6e820b9446f586be1f64da5ae2f6"}, {"url": "http://172.16.35.10/identity", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "f6c18a74f19a4b728eeb5f3916dde7c1"}], "type": "identity", "id":

[Yahoo-eng-team] [Bug 1692567] Re: can't create neutron port fixed_ip if subnet associated with segment

2017-07-05 Thread Rico Lin
Hi Harald Jensås, so I assume that we don't need this bug anymore?

** Changed in: heat
   Status: In Progress => Invalid

** Changed in: heat
   Status: Invalid => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1692567

Title:
  can't create neutron port fixed_ip if subnet associated with segment

Status in heat:
  Incomplete
Status in neutron:
  New

Bug description:
  There doesn't seem to be a way to create a fixed_ip for an
  OS::Neutron::Port if the subnet is associated with a Neutron segment.

  For example, using this:
  resources:
    instance_port:
      type: OS::Neutron::Port
      properties:
        network: ctlplane
        fixed_ips: [{"subnet": ctlplane-subnet0, "ip_address": 10.8.146.8}]

    my_ironic_instance:
      type: OS::Nova::Server
      properties:
        key_name: default
        image: overcloud-full
        flavor: baremetal
        networks:
          - network: ctlplane
            port: {get_resource: instance_port}

  If the subnet is NOT associated with a segment, I am able to create a
  stack with a Neutron port with 10.8.146.8 as expected.

  However, in this case the subnet is associated with a neutron segment:
  [stack@host01 ~]$ neutron subnet-show ctlplane-subnet0
  +-------------------+-----------------------------------------------------------------+
  | Field             | Value                                                           |
  +-------------------+-----------------------------------------------------------------+
  | allocation_pools  | {"start": "10.8.146.5", "end": "10.8.146.20"}                   |
  | cidr              | 10.8.146.0/24                                                   |
  | created_at        | 2017-05-19T21:57:53Z                                            |
  | description       |                                                                 |
  | dns_nameservers   |                                                                 |
  | enable_dhcp       | True                                                            |
  | gateway_ip        | 10.8.146.1                                                      |
  | host_routes       | {"destination": "169.254.169.254/32", "nexthop": "10.8.146.1"}  |
  | id                | 2510cb92-e3f7-4ef3-98a8-ba409c33406b                            |
  | ip_version        | 4                                                               |
  | ipv6_address_mode |                                                                 |
  | ipv6_ra_mode      |                                                                 |
  | name              | ctlplane-subnet0                                                |
  | network_id        | 5f93540c-b00e-42c7-b1a1-0560906d9a8d                            |
  | project_id        | 08b43a05b88c4d4089355b3aba9dd8fb                                |
  | revision_number   | 2                                                               |
  | segment_id        | d5b2dc5d-ee11-4057-9481-fd28fab14b31                            |
  | service_types     |                                                                 |
  | subnetpool_id     |                                                                 |
  | tags              |                                                                 |
  | tenant_id         | 08b43a05b88c4d4089355b3aba9dd8fb                                |
  | updated_at        | 2017-05-19T21:57:53Z                                            |
  +-------------------+-----------------------------------------------------------------+

  [stack@host01 ~]$ openstack network segment show d5b2dc5d-ee11-4057-9481-fd28fab14b31
  +------------------+--------------------------------------+
  | Field            | Value                                |
  +------------------+--------------------------------------+
  | description      | None                                 |
  | id               | d5b2dc5d-ee11-4057-9481-fd28fab14b31 |
  | name             | subnet0                              |
  | network_id       | 5f93540c-b00e-42c7-b1a1-0560906d9a8d |
  | network_type     | flat                                 |
  | physical_network | ctlplane                             |
  | segmentation_id  | None                                 |
  +------------------+--------------------------------------+

  The stack is created successfully; however, the neutron port gets a
  fixed_ip from the allocation_pool (10.8.146.15, see below), not the
  fixed_ip defined in the template.
  [stack@host01 ~]$ heat stack-list
  +--------------------------------------+------------+--------------+----------------------+--------------+
  | id                                   | stack_name | stack_status | creation_time        | updated_time |
  

[Yahoo-eng-team] [Bug 1691885] Re: Updating Nova::Server with Neutron::Port resource fails

2017-07-05 Thread Rico Lin
** Changed in: heat
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1691885

Title:
  Updating Nova::Server with Neutron::Port resource fails

Status in heat:
  Won't Fix
Status in neutron:
  New

Bug description:
  A Nova::Server resource that was created with an implicit port cannot
  be updated.

  If I first create the following resource:
  # template1.yaml
  resources:
    my_ironic_instance:
      type: OS::Nova::Server
      properties:
        key_name: default
        image: overcloud-full
        flavor: baremetal
        networks:
          - network: ctlplane
            ip_address: "192.168.24.10"

  And then try to run a stack update with a different ip_address:
  # template2.yaml
  resources:
    my_ironic_instance:
      type: OS::Nova::Server
      properties:
        key_name: default
        image: overcloud-full
        flavor: baremetal
        networks:
          - network: ctlplane
            ip_address: "192.168.24.20"

  This fails with the following error:
  RetryError: resources.my_ironic_instance: RetryError[]

  I also tried assigning an external IP to the Nova::Server created in
  template1.yaml, but that gave me the same error.
  # template3.yaml
  resources:
    instance_port:
      type: OS::Neutron::Port
      properties:
        network: ctlplane
        fixed_ips:
          - subnet: "ctlplane-subnet"
            ip_address: "192.168.24.20"

    my_ironic_instance:
      type: OS::Nova::Server
      properties:
        key_name: default
        image: overcloud-full
        flavor: baremetal
        networks:
          - network: ctlplane
            port: {get_resource: instance_port}

  However, if I first create the Nova::Server resource with an external
  port specified (as in template3.yaml above), then I can update the
  port to a different IP address and Ironic/Neutron does the right thing
  (at least since the recent attach/detach VIF in Ironic code has
  merged). So it appears that you can update a port if the port was
  created externally, but not if the port was created as part of the
  Nova::Server resource.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1691885/+subscriptions



[Yahoo-eng-team] [Bug 1698290] [NEW] detaching network adapter failed

2017-06-16 Thread Rico Lin
Public bug reported:

In the heat gate job (gate-heat-dsvm-functional-convg-mysql-lbaasv2-ubuntu-xenial)
we found an error within n-cpu when it raised "detaching network adapter failed".
The error log can be found in [1].
This happens in test [2] while we try to update a server.


[1] http://logs.openstack.org/07/474707/3/check/gate-heat-dsvm-functional-convg-mysql-lbaasv2-ubuntu-xenial/55ca8fb/logs/screen-n-cpu.txt.gz?level=ERROR

[2] http://logs.openstack.org/07/474707/3/check/gate-heat-dsvm-functional-convg-mysql-lbaasv2-ubuntu-xenial/55ca8fb/console.html#_2017-06-16_01_38_47_871446

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1698290

Title:
  detaching network adapter failed

Status in OpenStack Compute (nova):
  New

Bug description:
  In the heat gate job (gate-heat-dsvm-functional-convg-mysql-lbaasv2-ubuntu-xenial)
  we found an error within n-cpu when it raised "detaching network adapter failed".
  The error log can be found in [1].
  This happens in test [2] while we try to update a server.


  [1] http://logs.openstack.org/07/474707/3/check/gate-heat-dsvm-functional-convg-mysql-lbaasv2-ubuntu-xenial/55ca8fb/logs/screen-n-cpu.txt.gz?level=ERROR

  [2] http://logs.openstack.org/07/474707/3/check/gate-heat-dsvm-functional-convg-mysql-lbaasv2-ubuntu-xenial/55ca8fb/console.html#_2017-06-16_01_38_47_871446

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1698290/+subscriptions



[Yahoo-eng-team] [Bug 1667259] Re: one more pool is created for a loadbalancer

2017-04-11 Thread Rico Lin
Let's plan something deeper on the heat side if neutron can't solve this
issue in the end (which will not happen).

** Changed in: heat
   Status: New => Won't Fix

** Changed in: heat
 Assignee: Rico Lin (rico-lin) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667259

Title:
  one more pool is created for a loadbalancer

Status in heat:
  Won't Fix
Status in neutron:
  New

Bug description:
  One more pool is created when creating a load balancer with two pools.
  That pool doesn't have complete information but is related to that
  load balancer, which causes a failure when deleting the load balancer.

  heat resource-list lbvd
  WARNING (shell) "heat resource-list" is deprecated, please use "openstack stack resource list" instead
  +---------------+--------------------------------------+-----------------------------------+-----------------+----------------------+
  | resource_name | physical_resource_id                 | resource_type                     | resource_status | updated_time         |
  +---------------+--------------------------------------+-----------------------------------+-----------------+----------------------+
  | listener      | 12dfe005-80e0-4439-a4f8-1333f688e73b | OS::Neutron::LBaaS::Listener      | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | listener2     | 26ba1151-3d4b-4732-826b-7f318800070d | OS::Neutron::LBaaS::Listener      | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | loadbalancer  | 3a5bfa24-220c-4316-9c3d-57dd9c13feb8 | OS::Neutron::LBaaS::LoadBalancer  | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | monitor       | 241bc328-4c9b-4f58-a34a-4e25ed7431ea | OS::Neutron::LBaaS::HealthMonitor | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | monitor2      | 6592b768-f3be-4ff9-bbf4-2c30b94f98e2 | OS::Neutron::LBaaS::HealthMonitor | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | pool          | 41652d9a-d0fe-4743-9e5f-2dfe98b19f3d | OS::Neutron::LBaaS::Pool          | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | pool2         | fae40172-7f16-4b1a-93f0-877d404fe466 | OS::Neutron::LBaaS::Pool          | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  +---------------+--------------------------------------+-----------------------------------+-----------------+----------------------+

  neutron lbaas-pool-list | grep lbvd
  | 095c94b8-8c18-443f-9ce9-3d34e94f0c81 | lbvd-pool-ujtp6ddt4g6o  | HTTP  | True |
  | 41652d9a-d0fe-4743-9e5f-2dfe98b19f3d | lbvd-pool-ujtp6ddt4g6o  | HTTP  | True |
  | fae40172-7f16-4b1a-93f0-877d404fe466 | lbvd-pool2-kn7rlwltbdxh | HTTPS | True |

  neutron lbaas-pool-show 095c94b8-8c18-443f-9ce9-3d34e94f0c81
  +---------------------+------------------------------------------------+
  | Field               | Value                                          |
  +---------------------+------------------------------------------------+
  | admin_state_up      | True                                           |
  | description         |                                                |
  | healthmonitor_id    |                                                |
  | id                  | 095c94b8-8c18-443f-9ce9-3d34e94f0c81           |
  | lb_algorithm        | ROUND_ROBIN                                    |
  | listeners           |                                                |
  | loadbalancers       | {"id": "3a5bfa24-220c-4316-9c3d-57dd9c13feb8"} |
  | members             |                                                |
  | name                | lbvd-pool-ujtp6ddt4g6o                         |
  | protocol            | HTTP                                           |
  | session_persistence |                                                |
  | tenant_id           | 3dcf8b12327c460a966c1c1d4a6e2887               |
  +---------------------+------------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1667259/+subscriptions



[Yahoo-eng-team] [Bug 1667259] Re: one more pool is created for a loadbalancer

2017-02-24 Thread Rico Lin
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667259

Title:
  one more pool is created for a loadbalancer

Status in heat:
  New
Status in neutron:
  New

Bug description:
  One more pool is created when creating a load balancer with two pools.
  That pool doesn't have complete information but is related to that
  load balancer, which causes a failure when deleting the load balancer.

  heat resource-list lbvd
  WARNING (shell) "heat resource-list" is deprecated, please use "openstack stack resource list" instead
  +---------------+--------------------------------------+-----------------------------------+-----------------+----------------------+
  | resource_name | physical_resource_id                 | resource_type                     | resource_status | updated_time         |
  +---------------+--------------------------------------+-----------------------------------+-----------------+----------------------+
  | listener      | 12dfe005-80e0-4439-a4f8-1333f688e73b | OS::Neutron::LBaaS::Listener      | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | listener2     | 26ba1151-3d4b-4732-826b-7f318800070d | OS::Neutron::LBaaS::Listener      | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | loadbalancer  | 3a5bfa24-220c-4316-9c3d-57dd9c13feb8 | OS::Neutron::LBaaS::LoadBalancer  | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | monitor       | 241bc328-4c9b-4f58-a34a-4e25ed7431ea | OS::Neutron::LBaaS::HealthMonitor | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | monitor2      | 6592b768-f3be-4ff9-bbf4-2c30b94f98e2 | OS::Neutron::LBaaS::HealthMonitor | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | pool          | 41652d9a-d0fe-4743-9e5f-2dfe98b19f3d | OS::Neutron::LBaaS::Pool          | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | pool2         | fae40172-7f16-4b1a-93f0-877d404fe466 | OS::Neutron::LBaaS::Pool          | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  +---------------+--------------------------------------+-----------------------------------+-----------------+----------------------+

  neutron lbaas-pool-list | grep lbvd
  | 095c94b8-8c18-443f-9ce9-3d34e94f0c81 | lbvd-pool-ujtp6ddt4g6o  | HTTP  | True |
  | 41652d9a-d0fe-4743-9e5f-2dfe98b19f3d | lbvd-pool-ujtp6ddt4g6o  | HTTP  | True |
  | fae40172-7f16-4b1a-93f0-877d404fe466 | lbvd-pool2-kn7rlwltbdxh | HTTPS | True |

  neutron lbaas-pool-show 095c94b8-8c18-443f-9ce9-3d34e94f0c81
  +---------------------+------------------------------------------------+
  | Field               | Value                                          |
  +---------------------+------------------------------------------------+
  | admin_state_up      | True                                           |
  | description         |                                                |
  | healthmonitor_id    |                                                |
  | id                  | 095c94b8-8c18-443f-9ce9-3d34e94f0c81           |
  | lb_algorithm        | ROUND_ROBIN                                    |
  | listeners           |                                                |
  | loadbalancers       | {"id": "3a5bfa24-220c-4316-9c3d-57dd9c13feb8"} |
  | members             |                                                |
  | name                | lbvd-pool-ujtp6ddt4g6o                         |
  | protocol            | HTTP                                           |
  | session_persistence |                                                |
  | tenant_id           | 3dcf8b12327c460a966c1c1d4a6e2887               |
  +---------------------+------------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1667259/+subscriptions



[Yahoo-eng-team] [Bug 1646309] [NEW] Image metadata will affect other images after reusing an image name

2016-11-30 Thread Rico Lin
Public bug reported:

Issue:
Not sure if this design belongs to glance or horizon. When you create two
images (img_1 and img_2) and decide that img_2 should use the name `img_1`
instead, you rename img_1 -> img_x and img_2 -> img_1.
Now the interesting part is that this new img_1 will reuse the metadata of
the old img_1 if it does not specify any.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1646309

Title:
  Image metadata will affect other images after reusing an image name

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Issue:
  Not sure if this design belongs to glance or horizon. When you create two
  images (img_1 and img_2) and decide that img_2 should use the name `img_1`
  instead, you rename img_1 -> img_x and img_2 -> img_1.
  Now the interesting part is that this new img_1 will reuse the metadata of
  the old img_1 if it does not specify any.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1646309/+subscriptions



[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-11-16 Thread Rico Lin
** Also affects: heat
   Importance: Undecided
   Status: New

** Changed in: heat
 Assignee: (unassigned) => Rico Lin (rico-lin)

** Also affects: python-heatclient
   Importance: Undecided
   Status: New

** Changed in: python-heatclient
 Assignee: (unassigned) => Rico Lin (rico-lin)

** Changed in: heat
   Importance: Undecided => Low

** Changed in: heat
Milestone: None => ocata-2

** Changed in: python-heatclient
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1082248

Title:
  Use uuidutils instead of uuid.uuid4()

Status in Cinder:
  In Progress
Status in heat:
  New
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in kuryr:
  In Progress
Status in kuryr-libnetwork:
  In Progress
Status in Magnum:
  In Progress
Status in Mistral:
  Fix Released
Status in Murano:
  In Progress
Status in networking-calico:
  In Progress
Status in networking-ovn:
  Fix Released
Status in networking-sfc:
  In Progress
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in python-heatclient:
  New
Status in python-muranoclient:
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in tacker:
  In Progress

Bug description:
  Openstack common has a wrapper for generating uuids.

  We should only use that function when generating uuids for
  consistency.
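
  For reference, the wrapper in question lives in oslo.utils:

      from oslo_utils import uuidutils

      uid = uuidutils.generate_uuid()  # canonical string form
      assert uuidutils.is_uuid_like(uid)

  Using generate_uuid() everywhere keeps projects producing the same
  string representation instead of mixing uuid.UUID objects with ad-hoc
  str(uuid.uuid4()) calls.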

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1082248/+subscriptions



[Yahoo-eng-team] [Bug 1620226] Re: Wrong cinder quota value been accepted

2016-09-05 Thread Rico Lin
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1620226

Title:
  Wrong cinder quota value been accepted

Status in heat:
  In Progress
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  While testing the cinder quota resource in heat,
  I found that we accept a wrong quota value and pass it to cinder.
  For example, if we already have 2 volumes in a project, we can still
  update the quota to accept only 1 volume.
  That means quotas in heat do not consider actual usage (and a quota that
  limits actual usage is the only good quota to provide).
  Considering that horizon already prechecks this kind of mistake, we
  should add a check to prevent it.

  On further testing with horizon, I also found that horizon should not
  use only the total volume size to validate the gigabytes quota.
  Gigabytes refers to the total size of volumes and snapshots. That's how
  cinder reacts as well (if you set gigabytes equal to the total volume
  size, it will raise an error when creating snapshots).
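
  A sketch of the kind of check being proposed (a minimal illustration with
  hypothetical helper names; heat and horizon would implement it against
  their own quota code paths):

      def validate_gigabytes_quota(new_quota_gb, volumes, snapshots):
          # Gigabytes usage must count volumes *and* snapshots, matching
          # how cinder accounts for this quota; -1 means unlimited.
          used_gb = (sum(v["size"] for v in volumes)
                     + sum(s["size"] for s in snapshots))
          if new_quota_gb != -1 and new_quota_gb < used_gb:
              raise ValueError("gigabytes quota %d is below current "
                               "usage %d" % (new_quota_gb, used_gb))
          return new_quota_gb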

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1620226/+subscriptions



[Yahoo-eng-team] [Bug 1587240] [NEW] Orchestration resources out of scope

2016-05-30 Thread Rico Lin
Public bug reported:

When we have a large number of resources, horizon does not appear to show
all of them.

A screenshot showing the case is attached; we need a way to present all
resources properly in the console.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: heat

** Attachment added: "resource_out_of_scope.jpg"
   https://bugs.launchpad.net/bugs/1587240/+attachment/4673266/+files/resource_out_of_scope.jpg

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1587240

Title:
  Orchestration resources out of scope

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When we have a large number of resources, horizon does not appear to
  show all of them.

  A screenshot showing the case is attached; we need a way to present all
  resources properly in the console.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1587240/+subscriptions



[Yahoo-eng-team] [Bug 1522307] [NEW] Disk usage not work for shared storage

2015-12-03 Thread Rico Lin
Public bug reported:

We use a 50 TB Ceph cluster as backend, but the hypervisor summary shows
double its size (100 TB).
The cause of this is that Horizon doesn't know the storage backend for
these two hypervisors is shared.

The screen capture is attached below.
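
A toy illustration of the double counting, using the numbers from this
report (a sketch only; the real fix needs Horizon/Nova to recognise shared
storage instead of summing per hypervisor):

    # Two compute nodes backed by the same 50 TB (51200 GB) Ceph pool.
    hypervisors = [
        {"host": "compute1", "local_gb": 51200},
        {"host": "compute2", "local_gb": 51200},
    ]

    naive_total = sum(h["local_gb"] for h in hypervisors)   # 102400 GB, what the summary shows
    shared_total = max(h["local_gb"] for h in hypervisors)  # 51200 GB, the actual capacity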

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "error-disk-show-in-hypervisor-summary.png"
   https://bugs.launchpad.net/bugs/1522307/+attachment/4528872/+files/error-disk-show-in-hypervisor-summary.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1522307

Title:
  Disk usage not work for shared storage

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We use a 50 TB Ceph cluster as backend, but the hypervisor summary shows
  double its size (100 TB).
  The cause of this is that Horizon doesn't know the storage backend for
  these two hypervisors is shared.

  The screen capture is attached below.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1522307/+subscriptions
