[Yahoo-eng-team] [Bug 1936667] Re: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3

2021-07-16 Thread Takashi Kajinami
** Also affects: taskflow
   Importance: Undecided
   Status: New

** Changed in: taskflow
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1936667

Title:
  Using or importing the ABCs from 'collections' instead of from
  'collections.abc' is deprecated since Python 3.3

Status in OpenStack Identity (keystone):
  New
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in Mistral:
  In Progress
Status in neutron:
  In Progress
Status in taskflow:
  In Progress
Status in tempest:
  In Progress
Status in zaqar:
  In Progress

Bug description:
  Using or importing the ABCs from 'collections' instead of from
  'collections.abc' is deprecated since Python 3.3.

  For example:

  >>> import collections
  >>> collections.Iterable
  <stdin>:1: DeprecationWarning: Using or importing the ABCs from 'collections' 
instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 
it will stop working
  <class 'collections.abc.Iterable'>

  >>> from collections import abc
  >>> abc.Iterable
  <class 'collections.abc.Iterable'>
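The fix is usually mechanical: import the ABCs from collections.abc, with a
guarded fallback for interpreters older than Python 3.3. A minimal sketch
(the flatten() helper is a hypothetical example, not code from any of the
affected projects):

```python
try:
    # Python >= 3.3: the ABCs live in collections.abc
    from collections.abc import Iterable
except ImportError:  # Python < 3.3 kept them directly in collections
    from collections import Iterable


def flatten(items):
    """Flatten one level of nesting, treating strings/bytes as scalars."""
    for item in items:
        if isinstance(item, Iterable) and not isinstance(item, (str, bytes)):
            yield from item
        else:
            yield item
```

On Python 3.10+ the fallback branch is never taken; the try/except exists
only for code that still straddles old interpreters.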

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1936667/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1936667] Re: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3

2021-07-16 Thread Takashi Kajinami
** Also affects: mistral
   Importance: Undecided
   Status: New

** Also affects: manila
   Importance: Undecided
   Status: New

** Also affects: zaqar
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1936667

Title:
  Using or importing the ABCs from 'collections' instead of from
  'collections.abc' is deprecated since Python 3.3

Status in OpenStack Shared File Systems Service (Manila):
  New
Status in Mistral:
  New
Status in neutron:
  In Progress
Status in zaqar:
  New

Bug description:
  Using or importing the ABCs from 'collections' instead of from
  'collections.abc' is deprecated since Python 3.3.

  For example:

  >>> import collections
  >>> collections.Iterable
  <stdin>:1: DeprecationWarning: Using or importing the ABCs from 'collections' 
instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 
it will stop working
  <class 'collections.abc.Iterable'>

  >>> from collections import abc
  >>> abc.Iterable
  <class 'collections.abc.Iterable'>

To manage notifications about this bug go to:
https://bugs.launchpad.net/manila/+bug/1936667/+subscriptions




[Yahoo-eng-team] [Bug 1936278] [NEW] nova-manage placement audit command always fails with --resource_provider option

2021-07-14 Thread Takashi Kajinami
Public bug reported:

Description
===
The nova-manage placement audit command always fails because the target
resource provider is not found:
~~~
# nova-manage placement audit --resource_provider 
dbba1bf7-6e97-4dcb-8d4c-37fe36633358 --verbose
Resource provider with UUID dbba1bf7-6e97-4dcb-8d4c-37fe36633358 does not exist.
Error: non zero exit code: 127: OCI runtime error
~~~

However the resource provider record actually exists in Placement.

~~~
(overcloud) [stack@undercloud-0 ~]$ openstack resource provider list --uuid dbba1bf7-6e97-4dcb-8d4c-37fe36633358 --os-placement-api-version 1.14
+--------------------------------------+------------------------+------------+--------------------------------------+----------------------+
| uuid                                 | name                   | generation | root_provider_uuid                   | parent_provider_uuid |
+--------------------------------------+------------------------+------------+--------------------------------------+----------------------+
| dbba1bf7-6e97-4dcb-8d4c-37fe36633358 | compute-0.redhat.local |         28 | dbba1bf7-6e97-4dcb-8d4c-37fe36633358 | None                 |
+--------------------------------------+------------------------+------------+--------------------------------------+----------------------+
~~~

Looking at the placement access log, the command appends the UUID to the
URL with '=' instead of passing it as a '?uuid=' query parameter.

~~~
2021-07-15 02:02:28.101 24 INFO placement.requestlog 
[req-a9d4942e-411a-4901-b05e-98ec1239ef70 31e1d736a759444f92346daf932243ff 
91063c94413548b695edc6a0ef1f1252 - default default] 172.17.1.24 "GET 
/placement/resource_providers=dbba1bf7-6e97-4dcb-8d4c-37fe36633358" 
status: 404 len: 162 microversion: 1.14
~~~
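For reference, the request the audit command should be building filters
resource providers with a uuid query parameter, i.e. GET
/resource_providers?uuid=<uuid>, rather than appending =<uuid> to the path.
A sketch of the correct construction (the base URL here is hypothetical):

```python
from urllib.parse import urlencode

# Hypothetical placement endpoint, used only for illustration.
PLACEMENT_BASE = "http://controller/placement"


def resource_providers_url(uuid):
    """Build the filtered resource-providers URL with a proper query string."""
    return "%s/resource_providers?%s" % (PLACEMENT_BASE,
                                         urlencode({"uuid": uuid}))
```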

Steps to reproduce
==
- Look up an existing resource provider uuid
 $ openstack resource provider list
- Run the audit command with one of the uuids specified
 $ nova-manage placement audit --resource_provider <uuid>

Expected result
===
The command succeeds without error

Actual result
=
The command fails with the following error

Resource provider with UUID <uuid> does not exist.

Environment
===
This issue was initially found in the following downstream bug.
 https://bugzilla.redhat.com/show_bug.cgi?id=1982485

Logs & Configs
==
N/A

** Affects: nova
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1936278

Title:
  nova-manage placement audit command always fails with
  --resource_provider option

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  The nova-manage placement audit command always fails because the target
resource provider is not found:
  ~~~
  # nova-manage placement audit --resource_provider 
dbba1bf7-6e97-4dcb-8d4c-37fe36633358 --verbose
  Resource provider with UUID dbba1bf7-6e97-4dcb-8d4c-37fe36633358 does not 
exist.
  Error: non zero exit code: 127: OCI runtime error
  ~~~

  However the resource provider record actually exists in Placement.

  ~~~
  (overcloud) [stack@undercloud-0 ~]$ openstack resource provider list --uuid dbba1bf7-6e97-4dcb-8d4c-37fe36633358 --os-placement-api-version 1.14
  +--------------------------------------+------------------------+------------+--------------------------------------+----------------------+
  | uuid                                 | name                   | generation | root_provider_uuid                   | parent_provider_uuid |
  +--------------------------------------+------------------------+------------+--------------------------------------+----------------------+
  | dbba1bf7-6e97-4dcb-8d4c-37fe36633358 | compute-0.redhat.local |         28 | dbba1bf7-6e97-4dcb-8d4c-37fe36633358 | None                 |
  +--------------------------------------+------------------------+------------+--------------------------------------+----------------------+
  ~~~

  Looking at the placement access log, the command appends the UUID to the
  URL with '=' instead of passing it as a '?uuid=' query parameter.

  ~~~
  2021-07-15 02:02:28.101 24 INFO placement.requestlog 
[req-a9d4942e-411a-4901-b05e-98ec1239ef70 31e1d736a759444f92346daf932243ff 
91063c94413548b695edc6a0ef1f1252 - default default] 172.17.1.24 "GET 
/placement/resource_providers=dbba1bf7-6e97-4dcb-8d4c-37fe36633358" 
status: 404 len: 162 microversion: 1.14
  ~~~

  Steps to reproduce
  ==
  - Look up an existing resource provider uuid
   $ openstack resource provider list
  - Run the audit command with one of the uuids specified
   $ nova-manage placement audit --resource_provider <uuid>

  Expected result
  ===
  The command succeeds without error

  Actual result
  =
  The command fails with the following error

  Resource provider with UUID <uuid> does not exist.

  Environment
  ===
  This issue was initially found in the following downstream bug.
   

[Yahoo-eng-team] [Bug 1930499] [NEW] Can't add a subnet without gateway ip to a router

2021-06-01 Thread Takashi Kajinami
Public bug reported:

Currently, when adding an interface to a router, Horizon only shows
subnets with a gateway IP.

On the other hand, Horizon supports attaching a subnet to a router with a
fixed interface IP.
In this case a gateway IP is not required; a new port is created with the
given IP in the subnet and the port is attached to the router.
This is especially useful when users want to connect an instance to
multiple subnets with a single default gateway (only one subnet has a
gateway IP while the other subnets don't).

However, because of the current logic that filters out subnets without a
gateway IP, we currently can't add a subnet without a gateway IP to a
router using a fixed IP.
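The filtering change needed is small; a hypothetical sketch of the
selection logic (names are illustrative, not Horizon's actual code):

```python
def selectable_subnets(subnets, fixed_ip=None):
    """Return the subnets to offer when adding a router interface.

    When the user supplies a fixed interface IP, a gateway IP is not
    required, so gateway-less subnets must not be filtered out.
    """
    if fixed_ip is not None:
        return list(subnets)
    return [s for s in subnets if s.get("gateway_ip")]
```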

** Affects: horizon
 Importance: Undecided
 Status: In Progress

** Description changed:

  Currently when adding an interface to a router, Horizon shows subnets
  with gateway ip.
  
  On the other hand, Horizon supports attaching a subnet to a router with fixed 
interface ip.
- In this case gateway ip is not required but a new port is created with a 
given ip in the subnet
- and the port is attached to the router.
+ In this case gateway ip is not required but a new port is created with a 
given ip in the subnet and the port is attached to the router.
  This is useful especially when users want to connect an instance to multiple 
subnets with a single default gateway (only one subnet has gateway ip while the 
other subnet don't).
  
- However because of the current logic to filter out subnets without gateway 
ip, we can't add
- a subnet without gateway ip to a router using fixed ip now.
+ However because of the current logic to filter out subnets without
+ gateway ip, we can't add a subnet without gateway ip to a router using
+ fixed ip now.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1930499

Title:
  Can't add a subnet without gateway ip to a router

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Currently, when adding an interface to a router, Horizon only shows
  subnets with a gateway IP.

  On the other hand, Horizon supports attaching a subnet to a router with a
  fixed interface IP.
  In this case a gateway IP is not required; a new port is created with the
  given IP in the subnet and the port is attached to the router.
  This is especially useful when users want to connect an instance to
  multiple subnets with a single default gateway (only one subnet has a
  gateway IP while the other subnets don't).

  However, because of the current logic that filters out subnets without a
  gateway IP, we currently can't add a subnet without a gateway IP to a
  router using a fixed IP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1930499/+subscriptions



[Yahoo-eng-team] [Bug 1927494] [NEW] [designate] admin_* options are based on v2 identity instead of v3

2021-05-07 Thread Takashi Kajinami
Public bug reported:

Currently neutron accepts the following parameters to set up the identity
used for connecting to designate in the admin context, but the list doesn't
include domain parameters, and these parameters appear to be based on
keystone v2 identity instead of keystone v3 identity.

- admin_username
- admin_password
- admin_tenant_id
- admin_tenant_name
- admin_auth_url
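For comparison, keystone v3 password authentication needs domain
information that the option list above cannot express. A sketch of the v3
request body (the "Default" domain names here are assumptions, not values
taken from neutron's configuration):

```python
def v3_auth_payload(username, password, project_name,
                    user_domain_name="Default", project_domain_name="Default"):
    """Build a keystone v3 password-auth request body (sketch)."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {"user": {
                    "name": username,
                    "domain": {"name": user_domain_name},
                    "password": password,
                }},
            },
            # Scoping to a project requires the project's domain as well,
            # which is exactly what the v2-style admin_* options lack.
            "scope": {"project": {
                "name": project_name,
                "domain": {"name": project_domain_name},
            }},
        }
    }
```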

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1927494

Title:
  [designate] admin_* options are based on v2 identity instead of v3

Status in neutron:
  New

Bug description:
  Currently neutron accepts the following parameters to set up the identity
  used for connecting to designate in the admin context, but the list doesn't
  include domain parameters, and these parameters appear to be based on
  keystone v2 identity instead of keystone v3 identity.

  - admin_username
  - admin_password
  - admin_tenant_id
  - admin_tenant_name
  - admin_auth_url

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1927494/+subscriptions



[Yahoo-eng-team] [Bug 1905276] Re: Overriding hypervisor name for resource provider always requires a complete list of interfaces/bridges

2021-04-30 Thread Takashi Kajinami
Closing this because our expectation here is that neutron and libvirt should 
detect the same hostname.
I've reported another bug to fix the current incompatibility.

 https://bugs.launchpad.net/neutron/+bug/1926693

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1905276

Title:
  Overriding hypervisor name for resource provider always requires a
  complete list of interfaces/bridges

Status in neutron:
  Invalid

Bug description:
  In some deployments, hostnames can be different between hosts and resource
provider records. For example, in deployments managed by TripleO, we use the
short host name (without the domain name) for the host while we use the FQDN
for resource provider records (this comes from the FQDN set in the host
  option in nova.conf).

  This causes an issue with the way neutron currently looks up the
  root resource provider, because the placement API requires the exact
  hostname and doesn't automatically translate between the short name
  and the FQDN.

  To fix the issue we currently need to set the
  resource_provider_hypervisors option[1], but it is very redundant to
  list all devices or bridges in this option just to override their
  hostnames with the same value.

  [1] https://review.opendev.org/#/c/696600/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1905276/+subscriptions



[Yahoo-eng-team] [Bug 1926693] [NEW] Logic to obtain hypervisor hostname is not completely compatible with libvirt

2021-04-29 Thread Takashi Kajinami
Public bug reported:

Currently we use socket.gethostname() to determine the hypervisor
hostname, assuming that is what is used in libvirt.

However, libvirt actually uses the following steps to generate the
hypervisor hostname, and socket.gethostname() doesn't match in some cases.

/* Who knew getting a hostname could be so delicate.  In Linux (and Unices
 * in general), many things depend on "hostname" returning a value that will
 * resolve one way or another.  In the modern world where networks frequently
 * come and go this is often being hard-coded to resolve to "localhost".  If
 * it *doesn't* resolve to localhost, then we would prefer to have the FQDN.
 * That leads us to 3 possibilities:
 *
 * 1)  gethostname() returns an FQDN (not localhost) - we return the string
 * as-is, it's all of the information we want
 * 2)  gethostname() returns "localhost" - we return localhost; doing further
 * work to try to resolve it is pointless
 * 3)  gethostname() returns a shortened hostname - in this case, we want to
 * try to resolve this to a fully-qualified name.  Therefore we pass it
 * to getaddrinfo().  There are two possible responses:
 * a)  getaddrinfo() resolves to a FQDN - return the FQDN
 * b)  getaddrinfo() fails or resolves to localhost - in this case, the
 * data we got from gethostname() is actually more useful than what
 * we got from getaddrinfo().  Return the value from gethostname()
 * and hope for the best.
 */

https://github.com/libvirt/libvirt/blob/ec2e3336b8c8df572600043976e1ab5feead656e/src/util/virutil.c#L454-L531

Actually, we do see in TripleO deployments that socket.gethostname() and
virsh hostname don't agree, and this causes an issue when we set up
resource_provider_bandwidths:
~~~
[heat-admin@compute-0 ~]$ python
Python 3.6.8 (default, Dec  5 2019, 15:45:45) 
[GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import socket
>>> socket.gethostname()
'compute-0'
>>> exit()
[heat-admin@compute-0 ~]$ sudo podman exec -it nova_libvirt virsh hostname
compute-0.redhat.local
~~~
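The three cases in the libvirt comment above can be sketched in Python;
this is an illustrative re-implementation of libvirt's logic, not code from
nova or neutron:

```python
import socket


def libvirt_style_hostname():
    """Resolve the hostname the way the quoted libvirt comment describes."""
    name = socket.gethostname()
    if name == "localhost":        # case 2: resolving further is pointless
        return name
    if "." in name:                # case 1: already an FQDN, return as-is
        return name
    try:                           # case 3: try to expand the short name
        infos = socket.getaddrinfo(name, None, 0, socket.SOCK_STREAM, 0,
                                   socket.AI_CANONNAME)
        canonical = infos[0][3]
    except socket.gaierror:
        return name                # case 3b: resolution failed
    if not canonical or canonical.startswith("localhost"):
        return name                # case 3b: resolved to localhost
    return canonical               # case 3a: resolved to an FQDN
```

On a TripleO compute node this returns the expanded FQDN only when DNS (or
/etc/hosts) can resolve the short name, which is exactly why it can diverge
from a bare socket.gethostname().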

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1926693

Title:
  Logic to obtain hypervisor hostname is not completely compatible with
  libvirt

Status in neutron:
  New

Bug description:
  Currently we use socket.gethostname() to determine the hypervisor
  hostname, assuming that is what is used in libvirt.

  However, libvirt actually uses the following steps to generate the
  hypervisor hostname, and socket.gethostname() doesn't match in some
  cases.

  /* Who knew getting a hostname could be so delicate.  In Linux (and Unices
   * in general), many things depend on "hostname" returning a value that will
   * resolve one way or another.  In the modern world where networks frequently
   * come and go this is often being hard-coded to resolve to "localhost".  If
   * it *doesn't* resolve to localhost, then we would prefer to have the FQDN.
   * That leads us to 3 possibilities:
   *
   * 1)  gethostname() returns an FQDN (not localhost) - we return the string
   * as-is, it's all of the information we want
   * 2)  gethostname() returns "localhost" - we return localhost; doing further
   * work to try to resolve it is pointless
   * 3)  gethostname() returns a shortened hostname - in this case, we want to
   * try to resolve this to a fully-qualified name.  Therefore we pass it
   * to getaddrinfo().  There are two possible responses:
   * a)  getaddrinfo() resolves to a FQDN - return the FQDN
   * b)  getaddrinfo() fails or resolves to localhost - in this case, the
   * data we got from gethostname() is actually more useful than what
   * we got from getaddrinfo().  Return the value from gethostname()
   * and hope for the best.
   */

  
https://github.com/libvirt/libvirt/blob/ec2e3336b8c8df572600043976e1ab5feead656e/src/util/virutil.c#L454-L531

  Actually, we do see in TripleO deployments that socket.gethostname() and
virsh hostname don't agree, and this causes an issue when we set up
resource_provider_bandwidths:
  ~~~
  [heat-admin@compute-0 ~]$ python
  Python 3.6.8 (default, Dec  5 2019, 15:45:45) 
  [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] on linux
  Type "help", "copyright", "credits" or "license" for more information.
  >>> import socket
  >>> socket.gethostname()
  'compute-0'
  >>> exit()
  [heat-admin@compute-0 ~]$ sudo podman exec -it nova_libvirt virsh hostname
  compute-0.redhat.local
  ~~~

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1926693/+subscriptions


[Yahoo-eng-team] [Bug 1920842] [NEW] "rpc_response_max_timeout" configuration variable not present in metadata-agent

2021-03-22 Thread Takashi Kajinami
Public bug reported:

We have observed that the following traceback is logged in
metadata-agent.log when the metadata agent encounters a MessagingTimeout.

~~~
Traceback (most recent call last):
  File 
"/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
397, in get
return self._queues[msg_id].get(block=True, timeout=timeout)
  File "/usr/lib/python3.6/site-packages/eventlet/queue.py", line 322, in get
return waiter.wait()
  File "/usr/lib/python3.6/site-packages/eventlet/queue.py", line 141, in wait
return get_hub().switch()
  File "/usr/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 298, in 
switch
return self.greenlet.switch()
queue.Empty
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/neutron_lib/rpc.py", line 157, in call
return self._original_context.call(ctxt, method, **kwargs)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 
181, in call
transport_options=self.transport_options)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 
129, in _send
transport_options=transport_options)
  File 
"/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
646, in send
transport_options=transport_options)
  File 
"/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
634, in _send
call_monitor_timeout)
  File 
"/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
523, in wait
message = self.waiters.get(msg_id, timeout=timeout)
  File 
"/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
401, in get
'to message ID %s' % msg_id)
oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to 
message ID 
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/oslo_config/cfg.py", line 2197, in 
__getattr__
return self._get(name)
  File "/usr/lib/python3.6/site-packages/oslo_config/cfg.py", line 2631, in _get
value, loc = self._do_get(name, group, namespace)
  File "/usr/lib/python3.6/site-packages/oslo_config/cfg.py", line 2649, in 
_do_get
info = self._get_opt_info(name, group)
  File "/usr/lib/python3.6/site-packages/oslo_config/cfg.py", line 2849, in 
_get_opt_info
raise NoSuchOptError(opt_name, group)
oslo_config.cfg.NoSuchOptError: no such option rpc_response_max_timeout in 
group [DEFAULT]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/neutron/agent/metadata/agent.py", line 
88, in __call__
instance_id, tenant_id = self._get_instance_and_tenant_id(req)
  File "/usr/lib/python3.6/site-packages/neutron/agent/metadata/agent.py", line 
184, in _get_instance_and_tenant_id
skip_cache=skip_cache)
  File "/usr/lib/python3.6/site-packages/neutron/agent/metadata/agent.py", line 
169, in _get_ports
skip_cache=skip_cache)
  File "/usr/lib/python3.6/site-packages/neutron/common/cache_utils.py", line 
122, in __call__
return self.func(target_self, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/neutron/agent/metadata/agent.py", line 
146, in _get_ports_for_remote_address
ip_address=remote_address)
  File "/usr/lib/python3.6/site-packages/neutron/agent/metadata/agent.py", line 
113, in _get_ports_from_server
return self.plugin_rpc.get_ports(self.context, filters)
  File "/usr/lib/python3.6/site-packages/neutron/agent/metadata/agent.py", line 
71, in get_ports
return cctxt.call(context, 'get_ports', filters=filters)
  File "/usr/lib/python3.6/site-packages/neutron_lib/rpc.py", line 173, in call
self._original_context.timeout * 2, self.get_max_timeout())
  File "/usr/lib/python3.6/site-packages/neutron_lib/rpc.py", line 133, in 
get_max_timeout
return cls._max_timeout or _get_rpc_response_max_timeout()
  File "/usr/lib/python3.6/site-packages/neutron_lib/rpc.py", line 95, in 
_get_rpc_response_max_timeout
return TRANSPORT.conf.rpc_response_max_timeout
  File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/common.py", 
line 508, in __getattr__
value = getattr(self._conf, name)
  File "/usr/lib/python3.6/site-packages/oslo_config/cfg.py", line 2201, in 
__getattr__
raise NoSuchOptError(name)
oslo_config.cfg.NoSuchOptError: no such option rpc_response_max_timeout in 
group [DEFAULT]
~~~

A similar issue was reported in
https://bugs.launchpad.net/neutron/+bug/1880934 and has already been fixed.
We need the same fix for the metadata agent.

Note: This issue was found in stable/train deployment
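The fix for the earlier bug registered the option before it is read; until
the same is done for the metadata agent, the failure can be guarded as in
this sketch (a plain object stands in for the oslo.config object; 600 is
the assumed default for rpc_response_max_timeout):

```python
def get_rpc_response_max_timeout(conf, default=600):
    """Read conf.rpc_response_max_timeout, falling back to a default when
    the option was never registered.

    oslo.config raises NoSuchOptError (not AttributeError) from attribute
    access on unregistered options, so catch broadly here.
    """
    try:
        return conf.rpc_response_max_timeout
    except Exception:
        return default
```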

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1920842

Title:
  

[Yahoo-eng-team] [Bug 1917483] [NEW] Horizon loads policy files partially when one of policy files are written in a wrong format

2021-03-02 Thread Takashi Kajinami
Public bug reported:

Currently horizon stops loading policy rules when it detects any file in a
wrong format.
This results in a situation where horizon works with very incomplete policy
rules because of a single invalid policy file.
For example, when horizon first succeeds in loading the keystone policy and
then fails to load the nova policy, it recognizes policy rules for only
keystone and doesn't recognize policy rules for nova or any of the remaining
services such as neutron and cinder.
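A more robust loader would treat each policy file independently, skipping
(and logging) a malformed file instead of aborting the whole load. A
minimal sketch of that behaviour, not Horizon's actual loader:

```python
import json


def load_policy_files(paths):
    """Merge policy rules from JSON files, skipping unreadable or
    malformed files instead of aborting the whole load."""
    rules = {}
    for path in paths:
        try:
            with open(path) as handle:
                rules.update(json.load(handle))
        except (OSError, ValueError):
            # A real implementation would log the skipped file here.
            continue
    return rules
```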

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1917483

Title:
  Horizon loads policy files partially when one of policy files are
  written in a wrong format

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently horizon stops loading policy rules when it detects any file in a
  wrong format.
  This results in a situation where horizon works with very incomplete policy
  rules because of a single invalid policy file.
  For example, when horizon first succeeds in loading the keystone policy and
  then fails to load the nova policy, it recognizes policy rules for only
  keystone and doesn't recognize policy rules for nova or any of the
  remaining services such as neutron and cinder.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1917483/+subscriptions



[Yahoo-eng-team] [Bug 1905276] [NEW] Overriding hypervisor name for resource provider always requires a complete list of interfaces/bridges

2020-11-23 Thread Takashi Kajinami
Public bug reported:

In some deployments, hostnames can be different between hosts and resource
provider records. For example, in deployments managed by TripleO, we use the
short host name (without the domain name) for the host while we use the FQDN
for resource provider records (this comes from the FQDN set in the host
option in nova.conf).

This causes an issue with the way neutron currently looks up the
root resource provider, because the placement API requires the exact
hostname and doesn't automatically translate between the short name and
the FQDN.

To fix the issue we currently need to set the resource_provider_hypervisors
option[1], but it is very redundant to list all devices or bridges in this
option just to override their hostnames with the same value.

[1] https://review.opendev.org/#/c/696600/

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- Overriding defualt hypervisor name always requires a complete list of 
interfaces/bridges
+ Overriding hypervisor name for resource provider management always requires a 
complete list of interfaces/bridges

** Summary changed:

- Overriding hypervisor name for resource provider management always requires a 
complete list of interfaces/bridges
+ Overriding hypervisor name for resource provider always requires a complete 
list of interfaces/bridges

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1905276

Title:
  Overriding hypervisor name for resource provider always requires a
  complete list of interfaces/bridges

Status in neutron:
  New

Bug description:
  In some deployments, hostnames can be different between hosts and resource
provider records. For example, in deployments managed by TripleO, we use the
short host name (without the domain name) for the host while we use the FQDN
for resource provider records (this comes from the FQDN set in the host
  option in nova.conf).

  This causes an issue with the way neutron currently looks up the
  root resource provider, because the placement API requires the exact
  hostname and doesn't automatically translate between the short name
  and the FQDN.

  To fix the issue we currently need to set the
  resource_provider_hypervisors option[1], but it is very redundant to
  list all devices or bridges in this option just to override their
  hostnames with the same value.

  [1] https://review.opendev.org/#/c/696600/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1905276/+subscriptions



[Yahoo-eng-team] [Bug 1898553] Re: nova-compute doesn't start due to libvirtd --listen restriction

2020-10-05 Thread Takashi Kajinami
I think this is not an issue with nova but one with puppet-nova.

Actually, we fixed the libvirt-related configuration on CentOS while we
fixed the mentioned bug, but we still need the same fix for Ubuntu, which
recently moved to libvirt 6.0.

We already have a proposed fix for the issue,
https://review.opendev.org/#/c/749603, so I think we can move this forward
to address the issue.

** Also affects: puppet-nova
   Importance: Undecided
   Status: New

** Changed in: puppet-nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1898553

Title:
  nova-compute doesn't start due to libvirtd --listen restriction

Status in OpenStack Compute (nova):
  New
Status in puppet-nova:
  In Progress

Bug description:
  hello,

  after upgrading from OpenStack Train to Ussuri, running on Ubuntu
  18.04, the nova-compute service doesn't start because the libvirtd
  service fails to start with the --listen option enabled in the
  /etc/default/libvirtd file, and so the nova-compute service fails to
  reach a socket that doesn't exist.

  i have found another bug report and patch that claim to have fixed this
  issue, but it is not fixed, at least not in the packaged version of
  openstack on ubuntu 18.04
  (https://bugs.launchpad.net/puppet-nova/+bug/1880619)

  is nova-compute able to use another mechanism to connect to libvirtd,
  and thus not need the --listen option?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1898553/+subscriptions



[Yahoo-eng-team] [Bug 1860913] [NEW] Instance uses base image file when it is rebooted after snapshot creation if cinder nfs backend is used

2020-01-26 Thread Takashi Kajinami
Public bug reported:

Description
===
When we use the nfs backend in cinder and attach a cinder volume to an
instance, the instance accesses a file in the nfs share named like
volume-<volume-id>.

When the volume is attached to an instance and we take a snapshot with
"openstack volume snapshot create <volume> --force", it will create the
following 3 files in the nfs share.

 (1) volume-<volume-id>
   base image, frozen when taking the snapshot

 (2) volume-<volume-id>-<snapshot-id>
   diff image the instance should write into after taking the snapshot

 (3) volume-<volume-id>.info
   json file to manage the active snapshot

As described above, after taking a snapshot the instance should write into
(2) volume-<volume-id>-<snapshot-id>.
It works just after taking the snapshot, but if we stop and start the
instance, the instance starts to write into (1) volume-<volume-id>, which it
should not modify.
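The expected behaviour can be illustrated with the .info file: it is a
small JSON document whose "active" key names the top of the snapshot chain,
and that is the file the instance should reopen after a stop/start. A
sketch (the "active" key name follows cinder's remotefs-based drivers;
treat it as an assumption):

```python
import json


def active_volume_file(info_path):
    """Return the snapshot-chain file the instance should write to,
    as recorded in the volume-<volume-id>.info JSON file."""
    with open(info_path) as handle:
        return json.load(handle)["active"]
```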

Steps to reproduce
==
1. Create a volume in cinder nfs backend
2. Create a bfv instance with the volume
3. Take snapshot of the volume
4. Stop and Start the instance

Expected result
===
The instance keeps writing into volume-<volume-id>-<snapshot-id>

Actual result
=
The instance writes into volume-<volume-id>

Environment
===
I reproduced the issue on the Queens release with:
 nova: libvirt driver
 cinder: nfs backend, with nfs_snapshot_support=True

As far as I can see from the implementation of file path handling,
there have been no changes in how the disk file path is handled for
the nfs backend, so the problem should be reproducible on master.

Logs & Configs
==
N/A

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1860913

Title:
  Instance uses base image file when it is rebooted after snapshot
  creation if cinder nfs backend is used

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  When we use the nfs backend in cinder and attach a cinder volume to an
  instance, the instance accesses a file in the nfs share named like
  volume-<volume-id>.

  When the volume is attached to an instance and we take a snapshot with
  "openstack volume snapshot create <volume> --force", the following 3
  files are created in the nfs share.

   (1) volume-<volume-id>
     base image, frozen when the snapshot is taken

   (2) volume-<volume-id>-<snapshot-id>
     diff image the instance should write into after the snapshot is taken

   (3) volume-<volume-id>.info
     json file used to track the active snapshot

  As described above, after taking the snapshot the instance should
  write into (2) volume-<volume-id>-<snapshot-id>.
  This works just after the snapshot is taken, but if we stop and start
  the instance, it starts writing into (1) volume-<volume-id>, which it
  should not modify.

  Steps to reproduce
  ==
  1. Create a volume in cinder nfs backend
  2. Create a bfv instance with the volume
  3. Take snapshot of the volume
  4. Stop and Start the instance

  Expected result
  ===
  The instance keeps writing into volume-<volume-id>-<snapshot-id>

  Actual result
  =
  The instance writes into volume-<volume-id>

  Environment
  ===
  I reproduced the issue on the Queens release with:
   nova: libvirt driver
   cinder: nfs backend, with nfs_snapshot_support=True

  As far as I can see from the implementation of file path handling,
  there have been no changes in how the disk file path is handled for
  the nfs backend, so the problem should be reproducible on master.

  Logs & Configs
  ==
  N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1860913/+subscriptions



[Yahoo-eng-team] [Bug 1843883] [NEW] Can't configure extra parameters for live migration uri in libvirt

2019-09-13 Thread Takashi Kajinami
Public bug reported:

In libvirt we can add extra parameters to the migration target uri in
query parameter format.[1]
 [1] https://libvirt.org/remote.html
This is still valid for some use cases. For example, in TripleO we use
the keyfile parameter to define the path to the key file used to
authenticate to the remote node.

However, since live_migration_uri was deprecated and replaced by
live_migration_scheme, there is no way to configure this without using
the deprecated live_migration_uri parameter.
We need an additional option, such as live_migration_extraparams, which
can be used to define these parameters; otherwise existing deployments
which require extra parameters configured can break.
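
The kind of uri being discussed can be sketched as follows. This is not
nova's actual code; migration_uri, its extra_params argument (standing
in for the proposed live_migration_extraparams), and the keyfile value
are all hypothetical:

```python
from urllib.parse import urlencode

def migration_uri(host, scheme="ssh", extra_params=None):
    """Build a libvirt remote URI of the shape nova derives from
    live_migration_scheme, optionally appending extra query
    parameters as described at https://libvirt.org/remote.html."""
    uri = "qemu+%s://%s/system" % (scheme, host)
    if extra_params:
        # urlencode percent-escapes values, which libvirt accepts.
        uri += "?" + urlencode(extra_params)
    return uri

print(migration_uri("dest-node",
                    extra_params={"keyfile": "/etc/nova/migration/identity"}))
# → qemu+ssh://dest-node/system?keyfile=%2Fetc%2Fnova%2Fmigration%2Fidentity
```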

** Affects: nova
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1843883

Title:
  Can't configure extra parameters for live migration uri in libvirt

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  In libvirt we can add extra parameters to the migration target uri in
  query parameter format.[1]
   [1] https://libvirt.org/remote.html
  This is still valid for some use cases. For example, in TripleO we use
  the keyfile parameter to define the path to the key file used to
  authenticate to the remote node.

  However, since live_migration_uri was deprecated and replaced by
  live_migration_scheme, there is no way to configure this without
  using the deprecated live_migration_uri parameter.
  We need an additional option, such as live_migration_extraparams,
  which can be used to define these parameters; otherwise existing
  deployments which require extra parameters configured can break.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1843883/+subscriptions


