[Yahoo-eng-team] [Bug 1910504] [NEW] get_host_uptime missing for bare metal

2021-01-07 Thread gtaga
Public bug reported:

I added a bare metal node, but the hypervisor panel does not show its
resources.

Installed with Kolla.
Ubuntu: 18.04
Docker pkg: Source

Checking the logs, I can see this in nova/nova-compute-ironic.log:

ERROR oslo_messaging.rpc.server [req-74438713-f718-4c8f-a274-d792f17bea09 e9cec66166ea406eac19b3aa528f5669 ffab07a846cd4ca78845ff7c51b31f46 - default default] Exception during message handling: NotImplementedError
Traceback (most recent call last):
  File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
    res = self.dispatcher.dispatch(message)
  File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 276, in dispatch
    return self._do_dispatch(endpoint, method, ctxt, args)
  File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 196, in _do_dispatch
    result = func(ctxt, **new_args)
  File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/exception_wrapper.py", line 79, in wrapped
    function_name, call_dict, binary, tb)
  File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/local/lib/python3.6/dist-packages/six.py", line 703, in reraise
    raise value
  File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/exception_wrapper.py", line 69, in wrapped
    return f(self, context, *args, **kw)
  File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py", line 6245, in get_host_uptime
    return self.driver.get_host_uptime()
  File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/driver.py", line 1460, in get_host_uptime
    raise NotImplementedError()
NotImplementedError
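
The call chain ends in the base virt driver (nova/virt/driver.py), whose
default get_host_uptime() raises NotImplementedError for any driver that
does not override it, and the Ironic driver does not. As a purely
hypothetical illustration (not the upstream fix, which might instead
report the operation as unsupported at the API layer), an out-of-tree
override could look like this:

```python
# Hypothetical sketch only: silence the NotImplementedError by overriding
# get_host_uptime() in a subclass of the Ironic driver. "Uptime" has no
# real meaning for a bare metal virt driver, so a placeholder is returned.
from nova.virt.ironic import driver as ironic_driver


class UptimeSafeIronicDriver(ironic_driver.IronicDriver):
    def get_host_uptime(self):
        # Matches the base driver's signature; see the traceback above.
        return "uptime is not meaningful for bare metal nodes"
```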

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1910504

Title:
  get_host_uptime missing for bare metal

Status in OpenStack Compute (nova):
  New

Bug description:
  I added a bare metal node, but the hypervisor panel does not show its
  resources.

  Installed with Kolla.
  Ubuntu: 18.04
  Docker pkg: Source

  Checking the logs, I can see this in nova/nova-compute-ironic.log:

  ERROR oslo_messaging.rpc.server [req-74438713-f718-4c8f-a274-d792f17bea09 e9cec66166ea406eac19b3aa528f5669 ffab07a846cd4ca78845ff7c51b31f46 - default default] Exception during message handling: NotImplementedError
  Traceback (most recent call last):
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
      res = self.dispatcher.dispatch(message)
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 276, in dispatch
      return self._do_dispatch(endpoint, method, ctxt, args)
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 196, in _do_dispatch
      result = func(ctxt, **new_args)
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/exception_wrapper.py", line 79, in wrapped
      function_name, call_dict, binary, tb)
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
      self.force_reraise()
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
      six.reraise(self.type_, self.value, self.tb)
    File "/usr/local/lib/python3.6/dist-packages/six.py", line 703, in reraise
      raise value
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/exception_wrapper.py", line 69, in wrapped
      return f(self, context, *args, **kw)
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py", line 6245, in get_host_uptime
      return self.driver.get_host_uptime()
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/driver.py", line 1460, in get_host_uptime
      raise NotImplementedError()
  NotImplementedError

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1910504/+subscriptions



[Yahoo-eng-team] [Bug 1910533] [NEW] [RFE] New dhcp agents scheduler - use all agents for each network

2021-01-07 Thread Slawek Kaplonski
Public bug reported:

Sometimes, in multi-stack spine-and-leaf topologies where DHCP agents run
on the compute nodes, each network needs to be hosted by all DHCP agents.
Otherwise, VMs spawned on some of the sites may not get properly
configured IP addresses.

Of course, this can be done by setting a high enough value for the
"dhcp_agents_per_network" config option, but that solution may not scale
well, as the option needs to be updated every time new nodes/sites are
added.

So a better option might be to propose a new DHCP scheduler class that can
be used in such cases. This new scheduler would simply always schedule
every network to all of the DHCP agents it finds in the deployment, as in
the sketch below.
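
A minimal sketch of what such a scheduler class could look like, reusing
the base-scheduler interfaces from neutron/scheduler (the select() hook
and the DhcpFilter mixin follow the existing WeightScheduler pattern, but
treat the exact wiring as an assumption, not the final design):

```python
# Hypothetical "all agents" DHCP scheduler; assumes neutron's
# BaseScheduler.select() contract and the existing DhcpFilter mixin.
from neutron.scheduler import base_scheduler
from neutron.scheduler import dhcp_agent_scheduler


class AllAgentsScheduler(base_scheduler.BaseScheduler,
                         dhcp_agent_scheduler.DhcpFilter):
    """Schedule every network to every DHCP agent in the deployment."""

    def select(self, plugin, context, resource_hostable_agents,
               resource_hosted_agents, num_agents_needed):
        # Ignore num_agents_needed (derived from dhcp_agents_per_network)
        # and bind the network to every agent that can host it.
        return resource_hostable_agents
```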

** Affects: neutron
 Importance: Wishlist
 Assignee: Slawek Kaplonski (slaweq)
 Status: New


** Tags: l3-ipam-dhcp rfe-triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1910533

Title:
  [RFE] New dhcp agents scheduler - use all agents for each network

Status in neutron:
  New

Bug description:
  Sometimes, in multi-stack spine-and-leaf topologies where DHCP agents
  run on the compute nodes, each network needs to be hosted by all DHCP
  agents. Otherwise, VMs spawned on some of the sites may not get
  properly configured IP addresses.

  Of course, this can be done by setting a high enough value for the
  "dhcp_agents_per_network" config option, but that solution may not
  scale well, as the option needs to be updated every time new
  nodes/sites are added.

  So a better option might be to propose a new DHCP scheduler class that
  can be used in such cases. This new scheduler would simply always
  schedule every network to all of the DHCP agents it finds in the
  deployment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1910533/+subscriptions



[Yahoo-eng-team] [Bug 1910341] Re: BTP threshold didn't update

2021-01-07 Thread Balazs Gibizer
Sorry, this bug is not about OpenStack Nova. Marking it as Invalid.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1910341

Title:
  BTP threshold didn't update

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  sku: SP3-DVT2-C1,

  When AC power is plugged in, there are endless BAT0 udev change events.
  After investigation, the finding is that the _BTP threshold didn't update.

  sudo udevadm monitor

  Sample output is shown below:

  KERNEL[138.141119] change /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0 (power_supply)
  UDEV [138.196176] change /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0 (power_supply)
  KERNEL[139.620136] change /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0 (power_supply)
  UDEV [139.677495] change /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0 (power_supply)
  KERNEL[141.075309] change /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0 (power_supply)
  UDEV [141.131830] change /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0 (power_supply)
  KERNEL[142.541150] change /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0 (power_supply)

  ACPI SPEC

  10.2.2.8  _BTP (Battery Trip Point)
  This object is used to set a trip point to generate an SCI whenever the
  Battery Remaining Capacity reaches or crosses the value specified in the
  _BTP object. Specifically, if Battery Remaining Capacity is less than the
  last argument passed to _BTP, a notification must be issued when the value
  of Battery Remaining Capacity rises to be greater than or equal to this
  trip-point value. Similarly, if Battery Remaining Capacity is greater than
  the last argument passed to _BTP, a notification must be issued when the
  value of Battery Remaining Capacity falls to be less than or equal to this
  trip-point value. The last argument passed to _BTP will be kept by the
  system.
  If the battery does not support this function, the _BTP control method is
  not located in the namespace. In this case, the OS must poll the Battery
  Remaining Capacity value.
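
  The trip-point rule quoted above reduces to a simple crossing check.
  An illustrative sketch (not firmware code) of when a notification must
  be issued:

```python
# Illustrative sketch of the ACPI 10.2.2.8 _BTP rule quoted above.
def btp_must_notify(btp_arg, capacity_before, capacity_now):
    """True if Battery Remaining Capacity crossed the _BTP trip point."""
    if capacity_before < btp_arg:
        # Below the trip point: notify once capacity rises to >= it.
        return capacity_now >= btp_arg
    # Otherwise: notify once capacity falls to <= it.
    return capacity_now <= btp_arg
```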

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1910341/+subscriptions



[Yahoo-eng-team] [Bug 1910331] Re: Live migration CPU pre check failed

2021-01-07 Thread Balazs Gibizer
*** This bug is a duplicate of bug 1898715 ***
https://bugs.launchpad.net/bugs/1898715

It seems like a duplicate of
https://bugs.launchpad.net/nova/+bug/1898715, for which the patch merged
about a month ago: https://review.opendev.org/c/openstack/nova/+/758761

I'm marking this as a duplicate. Please remove the duplicate marking if
you think this is a separate issue.

** This bug has been marked a duplicate of bug 1898715
   Live migration fails despite matching CPUs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1910331

Title:
  Live migration CPU pre check failed

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  During live migration with --block-migrate enabled, the CPU check always
  fails, even though the source and destination hosts are identical in
  terms of hardware and software versions.

  Steps to reproduce
  ==
  # nova live-migration --block-migrate instance computenode2

  After a bit, checking on computenode2, I can see the error in the log
  shown below.

  Expected result
  ===
  The instance is moved to computenode2.

  Actual result
  =
  The instance is still on its old compute node.

  Environment
  ===
  1. Openstack version, OS version and libvirt version
  #  rpm -qa | egrep "nova|kvm|libvirt" | sort
  libvirt-bash-completion-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-client-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-daemon-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-daemon-driver-interface-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-daemon-driver-network-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-daemon-driver-nodedev-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-daemon-driver-nwfilter-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-daemon-driver-qemu-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-daemon-driver-secret-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-daemon-driver-storage-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-daemon-driver-storage-core-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-daemon-driver-storage-disk-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-daemon-driver-storage-gluster-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-daemon-driver-storage-iscsi-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-daemon-driver-storage-iscsi-direct-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-daemon-driver-storage-logical-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-daemon-driver-storage-mpath-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-daemon-driver-storage-rbd-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-daemon-driver-storage-scsi-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-daemon-kvm-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  libvirt-libs-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64
  openstack-nova-common-20.4.1-1.el8.noarch
  openstack-nova-compute-20.4.1-1.el8.noarch
  python3-libvirt-6.0.0-1.module_el8.3.0+555+a55c8938.x86_64
  python3-nova-20.4.1-1.el8.noarch
  python3-novaclient-15.1.1-1.el8.noarch
  qemu-kvm-4.2.0-34.module_el8.3.0+555+a55c8938.x86_64
  qemu-kvm-block-curl-4.2.0-34.module_el8.3.0+555+a55c8938.x86_64
  qemu-kvm-block-gluster-4.2.0-34.module_el8.3.0+555+a55c8938.x86_64
  qemu-kvm-block-iscsi-4.2.0-34.module_el8.3.0+555+a55c8938.x86_64
  qemu-kvm-block-rbd-4.2.0-34.module_el8.3.0+555+a55c8938.x86_64
  qemu-kvm-block-ssh-4.2.0-34.module_el8.3.0+555+a55c8938.x86_64
  qemu-kvm-common-4.2.0-34.module_el8.3.0+555+a55c8938.x86_64
  qemu-kvm-core-4.2.0-34.module_el8.3.0+555+a55c8938.x86_64

  # cat /etc/redhat-release
  CentOS Linux release 8.3.2011

  # uname -a
  Linux computenode2 4.18.0-240.1.1.el8_3.x86_64 #1 SMP Thu Nov 19 17:20:08 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

  
  2. Which hypervisor did you use?
  KVM

  3. Which networking type did you use?
  Neutron with OpenVSwitch

  
  Logs & Configs
  ==

  - On computenode2 I can see this error:

  2021-01-06 10:57:45.795 6992 INFO nova.virt.libvirt.driver [req-34f4dc34-dcb1-4632-813b-caf9f1a47439 7584ec4fa0fc45ddb25c2603357a7912 b5608fc845894e68abe34703c801b6e6 - default default] Instance launched has CPU info: {"arch": "x86_64", "model": "Cascadelake-Server-noTSX", "vendor": "Intel", "topology": {"cells": 2, "sockets": 1, "cores": 22, "threads": 2}, "features": ["lm", "pdpe1gb", "arch-capabilities", "smx", "lahf_lm", "rtm", "avx", "fma", "est", "ss", "vmx", "avx512dq", "fsgsbase", "ibrs-all", "rdtscp", "taa-no", "mmx", "pclmuldq", "xtpr", "clflushopt", "spec-ctrl", "mca", "dtes64", "mds-no", "apic", "pcid", "fpu", "monitor", "cx16", "stibp", "avx512f", "adx", "md-clear", "tm2", "pse", "hle", "invpcid", "sse2", "xsaveopt", "xsavec", "cx8", "popcnt", "pse36", "tsc-deadline", "avx512-bf16", "ht", "tm", "arat", "avx

[Yahoo-eng-team] [Bug 1842988] Re: OVN deployment with DVR environment incorrectly routes FIP traffic through Controller/Chassis-GW

2021-01-07 Thread Marios Andreou
Clearing out old bugs: there has been no update here in a while, so I am
going to move this to Fix Released for tripleo too. Please move it back
if you disagree. Thanks.

** Changed in: tripleo
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1842988

Title:
  OVN deployment with DVR environment incorrectly routes FIP traffic
  through Controller/Chassis-GW

Status in neutron:
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  TripleO Stein.  OVN deployment with DVR environment file incorrectly
  routes FIP traffic through Controller/Chassis-GW rather than locally.

  Steps to reproduce
  ===
  1. Deployed the overcloud with OVN and DVR enabled.

  The following neutron environment files were used (in addition to
  network isolation using bonded VLANs and other customizations):

   -e $TD/environments/services/neutron-ovn-dvr-ha.yaml \
   -e $TD/environments/services/neutron-ovn-dpdk.yaml \
   -e $TD/environments/services/neutron-ovn-sriov.yaml

  2. After overcloud deployment, confirmed that the neutron conf files
  and chassis settings are correct.

  neutron.conf   -> enable_dvr=True
  ml2_conf.ini   -> enable_distributed_floating_ip=True
  bridge_mapping on compute chassis  -> ovn-bridge-mappings="datacentre:br-ex"

  3. Deployed an instance on a Geneve tenant network with a floating IP
  on the VLAN external ‘datacentre’ network.

  Expected Result
  =
  FIP traffic is routed through the same compute node as the instance,
  via a local NAT rule.

  Actual Result
  
  The FIP is operational, but traffic is routed through the Controller/Chassis-GW.

  The matching NAT entry for the FIP shows that external_mac is null and
  logical_port was not set, so no local NAT routing occurs, as observed.

  
  Environment
  ===

  1. TripleO Stein using the latest current-tripleo-rdo container
  images with the standard Compute role plus OvsDpdk and SR-IOV roles.
  2. Ceph and Pure Storage
  3. OVN networking (default in Stein) with the following neutron
  environment files:

-e $TD/environments/services/neutron-ovn-dvr-ha.yaml \
-e $TD/environments/services/neutron-ovn-dpdk.yaml \
-e $TD/environments/services/neutron-ovn-sriov.yaml

  (in addition to network isolation using bonded VLANs and other
  customizations)

  Confirmed after deployment that:

  • neutron.conf -> enable_dvr=True
  • ml2_conf.ini -> enable_distributed_floating_ip=True
  • bridge_mapping on compute chassis -> ovn-bridge-mappings="datacentre:br-ex"

  
  Logs & Configs
  ===

  neutron.conf   -> enable_dvr=True
  ml2_conf.ini   -> enable_distributed_floating_ip=True
  bridge_mapping on compute chassis  -> ovn-bridge-mappings="datacentre:br-ex"

  ovn-nbctl lr-nat-list neutron-a53687de-ac06-400a-9104-748d2807c55a

  TYPE           EXTERNAL_IP    LOGICAL_IP      EXTERNAL_MAC   LOGICAL_PORT
  dnat_and_snat  10.3.27.20     192.168.0.18
  snat           10.3.25.207    192.168.0.0/24
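
  For a distributed FIP, that dnat_and_snat row is expected to carry both
  an external_mac and a logical_port; with them unset, OVN falls back to
  centralized NAT on the gateway chassis. A hypothetical sketch of writing
  such a row with ovsdbapp (the endpoint, port name and MAC below are
  placeholders, and in a healthy deployment the neutron OVN driver
  populates these fields itself when enable_distributed_floating_ip is
  honoured):

```python
# Illustration only: setting external_mac/logical_port on a dnat_and_snat
# NAT entry via ovsdbapp. Placeholders throughout; the real fix is the
# driver doing this automatically.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.ovn_northbound import impl_idl

idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6641', 'OVN_Northbound')
api = impl_idl.OvnNbApiIdlImpl(connection.Connection(idl, timeout=10))
api.lr_nat_add('neutron-a53687de-ac06-400a-9104-748d2807c55a',
               'dnat_and_snat', '10.3.27.20', '192.168.0.18',
               logical_port='instance-port-id',    # placeholder
               external_mac='fa:16:3e:00:00:01',   # placeholder
               may_exist=True).execute(check_error=True)
```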

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1842988/+subscriptions



[Yahoo-eng-team] [Bug 1906768] Re: Evacuation results in multipath residue when using FC

2021-01-07 Thread Lee Yarwood
** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1906768

Title:
  Evacuation results in multipath residue when using FC

Status in OpenStack Compute (nova):
  Won't Fix
Status in os-brick:
  Won't Fix

Bug description:
  My environment uses OpenStack Pike, with Fibre Channel for back-end
  storage. When we set 'volume_use_multipath=True' in nova.conf, we found
  that evacuation leads to multipath residue.
    Tracing the code with a debugger, we find that os-brick cannot find
  the volume_paths, so the residual multipath devices cannot be removed.
    Analyzing the code, the volume_path acquired through FC is not in a
  local directory, because the volume is now attached on the new node.
  Steps to reproduce:
  1. Configure the cinder back-end storage as an FC SAN with multipath;
  2. Boot a VM from a cinder volume;
  3. Evacuate the VM;
  4. Run commands on the old node to inspect multipath, and you can see
  the VM's multipath residue.

  The multipath residue will then cause errors during later VM
  migrations, e.g.:
  commandexecutionfailed: failed to execute command multipath -l
  /dev/sdbx.
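
  A small operator-side cleanup sketch (a workaround under assumptions,
  not the os-brick fix): after confirming from 'multipath -ll' output on
  the old node that a map is stale, flush it explicitly.

```python
# Hypothetical cleanup helper for multipath maps left on the source node
# after evacuation. Flushing an in-use map is destructive: verify the map
# name manually before calling this.
import subprocess

def flush_stale_multipath(map_name):
    # Show current maps so the operator can double-check the stale one.
    listing = subprocess.run(["multipath", "-ll"],
                             capture_output=True, text=True)
    print(listing.stdout)
    # 'multipath -f <map>' flushes (removes) an unused multipath map.
    subprocess.run(["multipath", "-f", map_name], check=True)
```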

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1906768/+subscriptions



[Yahoo-eng-team] [Bug 1910623] [NEW] neutron api worker processes cannot be described like "neutron-server: api worker"

2021-01-07 Thread ZhouHeng
Public bug reported:

According to the description of the
patch (https://review.opendev.org/c/openstack/neutron/+/637019), every
neutron-server process should be named according to its role, like:
25355 ?Ss 0:26 /usr/bin/python /usr/local/bin/neutron-server \
  --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
25368 ?S  0:00 neutron-server: api worker
25369 ?S  0:00 neutron-server: api worker
25370 ?S  0:00 neutron-server: api worker
25371 ?S  0:00 neutron-server: api worker
25372 ?S  0:02 neutron-server: rpc worker
25373 ?S  0:02 neutron-server: rpc worker
25374 ?S  0:02 neutron-server: services worker

but my devstack environment shows:
root@ubuntu:~# ps aux|grep neutron-server
stack 615922  0.0  0.3 160508 71640 ?   Ss   2020   4:32 /usr/bin/python3.8 /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
stack 615940  0.2  0.6 268616 126284 ?  Sl   2020  53:14 /usr/bin/python3.8 /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
stack 615941  0.2  0.5 262176 119792 ?  Sl   2020  45:56 /usr/bin/python3.8 /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
stack 615942  0.5  0.5 408828 114948 ?  Sl   2020 124:27 neutron-server: rpc worker (/usr/bin/python3.8 /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)
stack 615943  0.2  0.5 262628 106540 ?  Sl   2020  63:14 neutron-server: rpc worker (/usr/bin/python3.8 /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)

The rpc worker processes display their role in the process name normally,
but the api worker processes do not.
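
For reference, worker titles like those above are typically set with the
setproctitle library; a minimal sketch of the pattern (the helper below
is hypothetical, not neutron's actual hook):

```python
# Minimal, hypothetical sketch of how a worker retitles itself; neutron's
# real implementation lives in its worker/oslo.service plumbing.
import setproctitle

def retitle_worker(role):
    # Keep the original command line, e.g. "neutron-server: api worker
    # (/usr/bin/python3.8 /usr/local/bin/neutron-server ...)".
    original = setproctitle.getproctitle()
    setproctitle.setproctitle("neutron-server: %s (%s)" % (role, original))
```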

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1910623

Title:
  neutron api worker processes cannot be described like "neutron-server:
  api worker"

Status in neutron:
  New

Bug description:
  According to the description of the
  patch (https://review.opendev.org/c/openstack/neutron/+/637019), every
  neutron-server process should be named according to its role, like:

  25355 ?Ss 0:26 /usr/bin/python /usr/local/bin/neutron-server \
--config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini
  25368 ?S  0:00 neutron-server: api worker
  25369 ?S  0:00 neutron-server: api worker
  25370 ?S  0:00 neutron-server: api worker
  25371 ?S  0:00 neutron-server: api worker
  25372 ?S  0:02 neutron-server: rpc worker
  25373 ?S  0:02 neutron-server: rpc worker
  25374 ?S  0:02 neutron-server: services worker

  but my devstack environment shows:
  root@ubuntu:~# ps aux|grep neutron-server
  stack 615922  0.0  0.3 160508 71640 ?   Ss   2020   4:32 /usr/bin/python3.8 /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
  stack 615940  0.2  0.6 268616 126284 ?  Sl   2020  53:14 /usr/bin/python3.8 /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
  stack 615941  0.2  0.5 262176 119792 ?  Sl   2020  45:56 /usr/bin/python3.8 /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
  stack 615942  0.5  0.5 408828 114948 ?  Sl   2020 124:27 neutron-server: rpc worker (/usr/bin/python3.8 /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)
  stack 615943  0.2  0.5 262628 106540 ?  Sl   2020  63:14 neutron-server: rpc worker (/usr/bin/python3.8 /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)

  The rpc worker processes display their role in the process name
  normally, but the api worker processes do not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1910623/+subscriptions



[Yahoo-eng-team] [Bug 1910219] Re: Error: Unable to retrieve Ironic nodes. 'project_id'

2021-01-07 Thread Akihiro Motoki
This is not a horizon bug; it is a bug in ironic-ui.
Could you file it in the ironic-ui bug tracker at
https://storyboard.openstack.org/#!/project/952?

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1910219

Title:
  Error: Unable to retrieve Ironic nodes.  'project_id'

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I created an OpenStack deployment with kolla-ansible.

  I get this error from Horizon.

  
  ```
  Traceback (most recent call last):
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/openstack_dashboard/api/rest/utils.py", line 128, in _wrapped
      data = function(self, request, *args, **kw)
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/ironic_ui/api/ironic_rest_api.py", line 41, in get
      nodes = ironic.node_list(request)
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/ironic_ui/api/ironic.py", line 62, in node_list
      node_manager = ironicclient(request).node
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/horizon/utils/memoized.py", line 119, in wrapped
      value = func(*args, **kwargs)
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/ironic_ui/api/ironic.py", line 51, in ironicclient
      cacert=cacert)
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/ironicclient/client.py", line 68, in get_client
      **kwargs)
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/openstack/config/__init__.py", line 36, in get_cloud_region
      return config.get_one(options=parsed_options, **kwargs)
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/openstack/config/loader.py", line 1101, in get_one
      auth_plugin = loader.load_from_options(**config['auth'])
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/keystoneauth1/loading/base.py", line 164, in load_from_options
      return self.create_plugin(**kwargs)
    File "/var/lib/kolla/venv/lib/python3.6/site-packages/keystoneauth1/loading/base.py", line 125, in create_plugin
      return self.plugin_class(**kwargs)
  TypeError: __init__() got an unexpected keyword argument 'project_id'

  Internal Server Error: /api/ironic/nodes/
  ```

  I don't get any errors from Ironic itself. The cluster is new; I didn't
  import anything or create any bare metal nodes.
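
  The traceback suggests an auth-options dict containing project_id being
  forwarded to a keystoneauth plugin whose constructor does not accept it.
  A hypothetical minimal reproduction (assuming, as the traceback
  indicates, that load_from_options() passes unrecognized options straight
  through to the plugin constructor):

```python
# Hypothetical repro of the TypeError above: keystoneauth plugin classes
# reject auth options they do not declare, so a stray 'project_id'
# surviving until create_plugin() blows up in the plugin's __init__.
from keystoneauth1 import loading

loader = loading.get_plugin_loader('none')   # minimal no-auth plugin
loader.load_from_options(project_id='demo')  # raises TypeError
```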

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1910219/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp