[Yahoo-eng-team] [Bug 1850249] [NEW] change_password_upon_first_use does not take effect when creating a user

2019-10-28 Thread kuangpeiling
Public bug reported:

I set the change_password_upon_first_use config option to true, but when I
authenticate as a newly created user, authentication succeeds instead of
raising PasswordExpired.
I noticed that the code that sets the password expiry runs before the resource
options are set on the user, so it cannot see the new user's options. Is this
a bug?

user_ref = model.User.from_dict(user)
if self._change_password_required(user_ref):
    user_ref.password_ref.expires_at = datetime.datetime.utcnow()
user_ref.created_at = datetime.datetime.utcnow()
session.add(user_ref)
# Set resource options passed on creation
resource_options.resource_options_ref_to_mapper(
    user_ref, model.UserOption)
return base.filter_user(user_ref.to_dict())
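
A minimal sketch of the reordering the reporter seems to be suggesting
(illustration only, not an actual keystone patch; it assumes the same
surrounding context as the snippet above):

```
# Attach the resource options before the expiry check, so that
# _change_password_required can see the newly created user's options.
user_ref = model.User.from_dict(user)
user_ref.created_at = datetime.datetime.utcnow()
session.add(user_ref)
# Set resource options passed on creation
resource_options.resource_options_ref_to_mapper(
    user_ref, model.UserOption)
if self._change_password_required(user_ref):
    user_ref.password_ref.expires_at = datetime.datetime.utcnow()
return base.filter_user(user_ref.to_dict())
```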

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1850249

Title:
  change_password_upon_first_use does not take effect when creating a user

Status in OpenStack Identity (keystone):
  New

Bug description:
  I set the change_password_upon_first_use config option to true, but when I
  authenticate as a newly created user, authentication succeeds instead of
  raising PasswordExpired. I noticed that the code that sets the password
  expiry runs before the resource options are set on the user, so it cannot
  see the new user's options. Is this a bug?

  user_ref = model.User.from_dict(user)
  if self._change_password_required(user_ref):
      user_ref.password_ref.expires_at = datetime.datetime.utcnow()
  user_ref.created_at = datetime.datetime.utcnow()
  session.add(user_ref)
  # Set resource options passed on creation
  resource_options.resource_options_ref_to_mapper(
      user_ref, model.UserOption)
  return base.filter_user(user_ref.to_dict())

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1850249/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1849502] Re: [DHCP] Check the dnsmasq process status after enabling the process

2019-10-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/690700
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7c5ce50a0c9e09d2729ae1ce79d2623ccafca9ee
Submitter: Zuul
Branch: master

commit 7c5ce50a0c9e09d2729ae1ce79d2623ccafca9ee
Author: Rodolfo Alonso Hernandez 
Date:   Wed Oct 23 14:47:54 2019 +

Check dnsmasq process is active when spawned

After spawning the "dnsmasq" process in the method
"Dnsmasq._spawn_or_reload_process", we need to check that the "dnsmasq"
process is running and could be detected by the ProcessManager instance
controlling it.

ProcessManager determines if a process is "active":
- If the network ID is in the cmdline used to execute the process.
- If the process is detected by psutil.Process(pid), returning the
  cmdline needed in the first condition.
- If the PID file exists; this is written by the dnsmasq process
  once it is started and is needed for the second condition.

To make this feature available for any other process using
ProcessManager, the implementation is done in this class.

Change-Id: I51dc9d342c613afcbcfdc50a1d2811502748f170
Closes-Bug: #1849502
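
For illustration, a minimal sketch of the three-part "active" check the commit
message describes, assuming psutil is available; the helper name and the
PID-file handling are hypothetical and not Neutron's actual ProcessManager
code:

```
import psutil


def dnsmasq_is_active(pid_file, network_id):
    # Hedged sketch: treat the process as active only when the PID file
    # exists, a live process with that PID is found, and the network ID
    # appears in that process's command line.
    try:
        with open(pid_file) as f:
            pid = int(f.read().strip())
    except (OSError, ValueError):
        return False  # PID file not written yet (third condition)
    try:
        cmdline = " ".join(psutil.Process(pid).cmdline())
    except psutil.NoSuchProcess:
        return False  # process not detectable (second condition)
    return network_id in cmdline  # network ID in cmdline (first condition)
```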


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1849502

Title:
  [DHCP] Check the dnsmasq process status after enabling the process

Status in neutron:
  Fix Released

Bug description:
  Hello:

  Version: Train (but also reproducible in master)
  Environment: 1 network with 3 subnets, and one host for the DHCP agent.

  We found a race condition between the DHCP agent driver call and the
  dnsmasq process status.

  When a deployment process starts, a network is created with three subnets.
The subnet creation timestamps are:
  ```
  server.log.3:1828:2019-10-21 22:29:44.648 24 DEBUG 
neutron_lib.callbacks.manager [req-91242e64-648f-45e8-a5eb-de18c9301ea0 
ae0c1e62ea1c4a14952ac2a64ef54690 10198262da2c44f1a9ec9e63084111d2 - default 
default] Notify callbacks 
['neutron.services.segments.plugin.SegmentHostRoutes.host_routes_before_create-930335']
 for subnet, before_create _notify_loop 
/usr/lib/python3.6/site-packages/neutron_lib/callbacks/manager.py:193
  server.log.3:1875:2019-10-21 22:29:45.929 24 DEBUG 
neutron_lib.callbacks.manager [req-27ed89df-cbb0-4a3e-90db-208401560889 
ae0c1e62ea1c4a14952ac2a64ef54690 10198262da2c44f1a9ec9e63084111d2 - default 
default] Notify callbacks 
['neutron.services.segments.plugin.SegmentHostRoutes.host_routes_before_create-930335']
 for subnet, before_create _notify_loop 
/usr/lib/python3.6/site-packages/neutron_lib/callbacks/manager.py:193
  server.log.3:1974:2019-10-21 22:29:47.846 24 DEBUG 
neutron_lib.callbacks.manager [req-113792ac-9649-443e-9ee3-102ea02f5bd5 
ae0c1e62ea1c4a14952ac2a64ef54690 10198262da2c44f1a9ec9e63084111d2 - default 
default] Notify callbacks 
['neutron.services.segments.plugin.SegmentHostRoutes.host_routes_before_create-930335']
 for subnet, before_create _notify_loop 
/usr/lib/python3.6/site-packages/neutron_lib/callbacks/manager.py:193
  ```

  As we can see in [1], the DHCP agent is informed about the different changes
in the network and reacts depending on the current status. The first time, the
DHCP process is enabled (no other process is running at this point). This
happens at 22:29:45.340, between the first and second subnet creation. The
network information at this point has only one subnet, retrieved by the DHCP
agent in:
  ```
  2019-10-21 22:29:45.152 27 DEBUG neutron.api.rpc.handlers.dhcp_rpc 
[req-535e0231-b69e-4136-8ce4-aad14c6fc9ac - - - - -] Network 
8c9e6c68-86ef-4bb0-b3fa-9a36a71d0ccb requested from 
site-undercloud-0.localdomain get_network_info 
/usr/lib/python3.6/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py:200
  ```

  The next driver call is a "restart". This happens when the 2nd and 3rd
subnets are created. When the DHCP agent calls the server to retrieve the
network information, this network has all three subnets:
  ```
  2019-10-21 22:29:49.135 26 DEBUG neutron.api.rpc.handlers.dhcp_rpc 
[req-29554255-1adb-4ec7-8655-39581e7fa72f - - - - -] Network 
8c9e6c68-86ef-4bb0-b3fa-9a36a71d0ccb requested from 
site-undercloud-0.localdomain get_network_info 
/usr/lib/python3.6/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py:200
  ```

  What is happening in the "restart" process is a bit weird [2]. The process 
should be stopped first and then restarted with the new cmd line (including the 
new subnet dhcp-ranges --> the main problem detected in this bug). But the 
dnsmasq process is not running yet:
  ```
  dhcp-agent.log.2:442:2019-10-21 22:29:49.417 57623 DEBUG 
neutron.agent.linux.external_process [req-29554255-1adb-4ec7-8655-39581e7fa72f 
- - - - -] No dnsmasq process started for 8c9e6c68-86ef-4bb0-b3fa-9a36a71d0ccb 
disable 
/usr/lib/python3.6/site-packages/neutron/agent/linux/

[Yahoo-eng-team] [Bug 1850240] [NEW] _netdev mount prevented by late cloud-init-local startup

2019-10-28 Thread Bobby B
Public bug reported:

_netdev CIFS systemd automounts are sometimes prevented because such
mounts are attempted too soon after cloud-init reports the network settings
to the container.

The only thing that seems to prevent this most of the time is to have
cloud-init-local start up as early as possible:

1. Change "Wants" to "RequiredBy" for network-pre.target in cloud-init-
local.service.

2. Add "Requires=...basic.target" to remote-fs.target

EXPECTED BEHAVIOR

_netdev mounts will automount every time at startup.

ACTUAL BEHAVIOR

*sometimes* _netdev mounts will fail at startup if the networking is not
ready.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1850240

Title:
  _netdev mount prevented by late cloud-init-local startup

Status in cloud-init:
  New

Bug description:
  _netdev CIFS systemd automounts are sometimes prevented because such
  mounts are attempted too soon after cloud-init reports the network
  settings to the container.

  The only thing that seems to prevent this most of the time is to have
  cloud-init-local start up as early as possible:

  1. Change "Wants" to "RequiredBy" for network-pre.target in cloud-
  init-local.service.

  2. Add "Requires=...basic.target" to remote-fs.target

  EXPECTED BEHAVIOR

  _netdev mounts will automount every time at startup.

  ACTUAL BEHAVIOR

  *sometimes* _netdev mounts will fail at startup if the networking is
  not ready.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1850240/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1850199] [NEW] horizon French translations for suspend and shelve are the same

2019-10-28 Thread Marc GariƩpy
Public bug reported:

The French translations for the "suspend instance" and "shelve instance"
actions in Horizon are the same.

https://translate.openstack.org/webtrans/translate?project=horizon&iteration=master&localeId=fr&locale=en-CA#view:doc;doc:openstack_dashboard/locale/django;search:suspend;textflow:379129

https://translate.openstack.org/webtrans/translate?project=horizon&iteration=master&localeId=fr&locale=en-CA#view:doc;doc:openstack_dashboard/locale/django;search:suspend;textflow:379131


Thanks

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1850199

Title:
  horizon French translations for suspend and shelve are the same

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The French translations for the "suspend instance" and "shelve instance"
  actions in Horizon are the same.

  
  https://translate.openstack.org/webtrans/translate?project=horizon&iteration=master&localeId=fr&locale=en-CA#view:doc;doc:openstack_dashboard/locale/django;search:suspend;textflow:379129

  https://translate.openstack.org/webtrans/translate?project=horizon&iteration=master&localeId=fr&locale=en-CA#view:doc;doc:openstack_dashboard/locale/django;search:suspend;textflow:379131

  
  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1850199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1838449] Re: Router migrations failing in the gate

2019-10-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/691498
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=48ea7da6c52ee14f0e9cc244fbc834283a8e74a7
Submitter: Zuul
Branch: master

commit 48ea7da6c52ee14f0e9cc244fbc834283a8e74a7
Author: Miguel Lavalle 
Date:   Sat Oct 26 18:43:56 2019 -0500

Router synch shouldn't return unrelated routers

[0] introduced the concept of connected routers: routers that are
connected to the same subnets. When a L3 agent is synching a router
with connected routers, the data of the entire set should be returned
to the agent by the Neutron server.

However, if an agent tries to synch a router with
no connected routers when the same agent has other routers that are
connected among them, the Neutron server returns the former and the
latter. For details of how this bug can manifest itself, please see [1].

This change prevents this situation: only the synched router is
returned.

[0] https://review.opendev.org/#/c/597567
[1] https://bugs.launchpad.net/neutron/+bug/1838449/comments/15

Change-Id: Ibbf35d0f4a0bf9281f0bc8c411e8527eed75361d
Closes-Bug: #1838449
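
As an illustration of the filtering described above (hypothetical helper, not
Neutron's actual code): when an agent syncs specific routers, only those
routers plus the routers actually connected to them should be returned, never
unrelated routers that merely live on the same agent.

```
def routers_for_sync(requested_ids, connected_map):
    # connected_map maps a router ID to the set of router IDs that share
    # a subnet with it (the "connected routers" described above).
    result = set(requested_ids)
    for router_id in requested_ids:
        result.update(connected_map.get(router_id, set()))
    return result


# Syncing r1 (no connected routers) must not drag in r2 and r3, even
# though r2 and r3 are connected to each other on the same agent.
assert routers_for_sync({"r1"}, {"r2": {"r3"}, "r3": {"r2"}}) == {"r1"}
```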


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1838449

Title:
  Router migrations failing in the gate

Status in neutron:
  Fix Released

Bug description:
  As of the reporting of this bug, router migrations are the largest
  contributor to failures of the neutron-tempest-plugin-dvr-multinode-
  scenario job. Over a 7-day period these are the failures observed:

  test_qos_basic_and_update 48
  test_from_dvr_to_dvr_ha   39
  test_from_dvr_to_ha   38
  test_from_dvr_to_legacy   23
  test_connectivity_through_2_routers   17
  test_snat_external_ip 17
  test_vm_reachable_through_compute 10
  test_trunk_subport_lifecycle  9
  test_qos  8

  Specifically, the migrations from dvr to something else seem to be the
  problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1838449/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1850153] [NEW] nova resize operation doesn't support disk resize for ephemeral disk and swap disk

2019-10-28 Thread sunlei
Public bug reported:

The nova resize operation doesn't support disk resize for the ephemeral
disk and the swap disk. After a 'nova resize' operation, both disk sizes
remain unchanged.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1850153

Title:
  nova resize operation doesn't support disk resize for ephemeral disk
  and swap disk

Status in OpenStack Compute (nova):
  New

Bug description:
  The nova resize operation doesn't support disk resize for the ephemeral
  disk and the swap disk. After a 'nova resize' operation, both disk sizes
  remain unchanged.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1850153/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656276] Re: Error running nova-manage cell_v2 simple_cell_setup when configuring nova with puppet-nova

2019-10-28 Thread Sagi (Sergey) Shnaidman
** Changed in: packstack
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656276

Title:
  Error running nova-manage  cell_v2 simple_cell_setup when configuring
  nova with puppet-nova

Status in OpenStack Compute (nova):
  Invalid
Status in Packstack:
  Fix Released
Status in puppet-nova:
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  When installing and configuring nova with puppet-nova (with either
  tripleo, packstack or puppet-openstack-integration), we are getting
  following errors:

  Debug: Executing: '/usr/bin/nova-manage  cell_v2 simple_cell_setup 
--transport-url=rabbit://guest:guest@172.19.2.159:5672/?ssl=0'
  Debug: 
/Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns:
 Sleeping for 5 seconds between tries
  Notice: 
/Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns:
 Cell0 is already setup.
  Notice: 
/Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns:
 No hosts found to map to cell, exiting.

  The issue seems to be that it's running "nova-manage  cell_v2
  simple_cell_setup" as part of the nova database initialization when no
  compute nodes have been created but it returns 1 in that case [1].
  However, note that the previous steps (Cell0 mapping and schema
  migration) were successfully run.

  I think for nova bootstrap a reasonable orchestrated workflow would
  be:

  1. Create required databases (including the one for cell0).
  2. Nova db sync
  3. nova cell0 mapping and schema creation.
  4. Adding compute nodes
  5. mapping compute nodes (by running nova-manage cell_v2 discover_hosts)

  For step 3 we'd need to get simple_cell_setup to return 0 when there are
  no compute nodes, or we'd need a different command.

  With the current behavior of nova-manage, the only working workflow we
  can use is:

  1. Create required databases (including the one for cell0).
  2. Nova db sync
  3. Adding all compute nodes
  4. nova cell0 mapping and schema creation with "nova-manage cell_v2 
simple_cell_setup".

  Am I right? Is there any better alternative?

  
  [1] 
https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L1112-L1114

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1656276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1850137] [NEW] Hosts in a VPNaaS-VPNaaS VPN lose their interconnection.

2019-10-28 Thread Dmytro Kostinov
Public bug reported:

When I build an IPSec tunnel between two projects (VPNaaS-VPNaaS), everything
works fine. But after a random period of time (from 20 minutes to a week), the
connection between the end hosts in the opposite local networks disappears.
Ping from the end host to the gateways of both local networks still passes.

For example, there is the following topology:
host-loc-1(10.9.9.2/24) - (10.9.9.1/24)VPNaaS1 - VPNaaS2(192.168.10.1/24) - 
host-loc-2(192.168.10.8/24)

When a problem occurs, the address 10.9.9.2 stops pinging 192.168.10.8,
but continues to ping 192.168.10.1.

The VPN connection status is active, and the cause of the problem is the loss
of the iptables rules in the FORWARD chain for the project namespace.

Normal condition:
"""
ip netns exec qrouter-ID iptables -L -n | grep -A 5 "Chain FORWARD"
Chain FORWARD (policy ACCEPT)
target prot opt source   destination 
ACCEPT all  --  192.168.10.0/24 10.9.9.0/24  policy match dir 
in pol ipsec reqid 1 proto 50
ACCEPT all  --  10.9.9.0/24  192.168.10.0/24 policy match dir 
out pol ipsec reqid 1 proto 50
neutron-filter-top  all  --  0.0.0.0/00.0.0.0/0   
neutron-l3-agent-FORWARD  all  --  0.0.0.0/00.0.0.0/0
"""

Problem state:
"""
ip netns exec qrouter-ID iptables -L -n | grep -A 5 "Chain FORWARD"
Chain FORWARD (policy ACCEPT)
target prot opt source   destination 
neutron-filter-top  all  --  0.0.0.0/00.0.0.0/0   
neutron-l3-agent-FORWARD  all  --  0.0.0.0/00.0.0.0/0
"""


How can I find out why the FORWARD rules disappear?


Installed software version:

dpkg -l | grep neutron
ii  neutron-common2:12.0.6-0ubuntu3~cloud0  
 all  Neutron is a virtual network service for Openstack - common
ii  neutron-dhcp-agent2:12.0.6-0ubuntu3~cloud0  
 all  Neutron is a virtual network service for Openstack - DHCP 
agent
ii  neutron-l3-agent  2:12.0.6-0ubuntu3~cloud0  
 all  Neutron is a virtual network service for Openstack - l3 agent
ii  neutron-metadata-agent2:12.0.6-0ubuntu3~cloud0  
 all  Neutron is a virtual network service for Openstack - metadata 
agent
ii  neutron-openvswitch-agent 2:12.0.6-0ubuntu3~cloud0  
 all  Neutron is a virtual network service for Openstack - Open 
vSwitch plugin agent
ii  python-neutron2:12.0.6-0ubuntu3~cloud0  
 all  Neutron is a virtual network service for Openstack - Python 
library
ii  python-neutron-fwaas  1:12.0.1-0ubuntu1~cloud0  
 all  Firewall-as-a-Service driver for OpenStack Neutron
ii  python-neutron-lib1.13.0-0ubuntu1~cloud0
 all  Neutron shared routines and utilities - Python 2.7
ii  python-neutron-vpnaas 2:12.0.1-0ubuntu1~cloud0  
 all  VPN-as-a-Service driver for OpenStack Neutron
ii  python-neutronclient  1:6.7.0-0ubuntu1~cloud0   
 all  client API library for Neutron - Python 2.7

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron queens vpn vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1850137

Title:
  Hosts in a VPNaaS-VPNaaS VPN lose their interconnection.

Status in neutron:
  New

Bug description:
  When I build an IPSec tunnel between two projects (VPNaaS-VPNaaS),
everything works fine. But after a random period of time (from 20 minutes to
a week), the connection between the end hosts in the opposite local networks
disappears.
  Ping from the end host to the gateways of both local networks still passes.

  For example, there is the following topology:
  host-loc-1(10.9.9.2/24) - (10.9.9.1/24)VPNaaS1 - VPNaaS2(192.168.10.1/24) - 
host-loc-2(192.168.10.8/24)

  When a problem occurs, the address 10.9.9.2 stops pinging
  192.168.10.8, but continues to ping 192.168.10.1.

  The VPN connection status is active, and the cause of the problem is the
  loss of the iptables rules in the FORWARD chain for the project namespace.

  Normal condition:
  """
  ip netns exec qrouter-ID iptables -L -n | grep -A 5 "Chain FORWARD"
  Chain FORWARD (policy ACCEPT)
  target prot opt source   destination 
  ACCEPT all  --  192.168.10.0/24 10.9.9.0/24  policy match dir 
in pol ipsec reqid 1 proto 50
  ACCEPT all  --  10.9.9.0/24  192.168.10.0/24 policy match dir 
out pol ipsec reqid 1 proto 50
  neutron-filter-top  all  --  0.0.0.0/00.0.0.0/0   
  neutron-l3-agent-FORWARD  all  --  0.0.0.0/00.0.0.0/0
  """

  Problem state:
  """
  ip netns exec

[Yahoo-eng-team] [Bug 1787910] Re: OVB overcloud deploy fails on nova placement errors

2019-10-28 Thread Sagi (Sergey) Shnaidman
** Changed in: nova/rocky
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1787910

Title:
  OVB overcloud deploy fails on nova placement errors

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  https://logs.rdoproject.org/openstack-periodic/git.openstack.org/openstack-infra/tripleo-ci/master/legacy-periodic-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/1544941/logs/undercloud/var/log/extra/errors.txt.gz#_2018-08-20_01_49_09_830

  https://logs.rdoproject.org/openstack-periodic/git.openstack.org/openstack-infra/tripleo-ci/master/legacy-periodic-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/1544941/logs/undercloud/var/log/extra/docker/containers/nova_placement/log/nova/nova-compute.log.txt.gz?level=ERROR#_2018-08-20_01_49_09_830

  ERROR nova.scheduler.client.report
  [req-a8752223-5d75-4fa2-9668-7c024d166f09 - - - - -] [req-
  561538c7-b837-448b-b25e-38a3505ab2e5] Failed to update inventory to
  [{u'CUSTOM_BAREMETAL': {'allocation_ratio': 1.0, 'total': 1,
  'reserved': 1, 'step_size': 1, 'min_unit': 1, 'max_unit': 1}}] for
  resource provider with UUID 3ee26a05-944b-42ba-b74d-42aa2fda5d73.  Got
  400: {"errors": [{"status": 400, "request_id": "req-561538c7-b837
  -448b-b25e-38a3505ab2e5", "detail": "The server could not comply with
  the request since it is either malformed or otherwise incorrect.\n\n
  Unable to update inventory for resource provider 3ee26a05-944b-42ba-
  b74d-42aa2fda5d73: Invalid inventory for 'CUSTOM_BAREMETAL' on
  resource provider '3ee26a05-944b-42ba-b74d-42aa2fda5d73'. The reserved
  value is greater than or equal to total.  ", "title": "Bad Request"}]}

  ERROR nova.compute.manager [req-a8752223-5d75-4fa2-9668-7c024d166f09 -
  - - - -] Error updating resources for node 3ee26a05-944b-42ba-b74d-
  42aa2fda5d73.: ResourceProviderSyncFailed: Failed to synchronize the
  placement service with resource provider information supplied by the
  compute host.

  Traceback (most recent call last):
  botkaERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7722, in 
_update_available_resource_for_node
  botkaERROR nova.compute.manager rt.update_available_resource(context, 
nodename)
  botkaERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 703, 
in update_available_resource
  botkaERROR nova.compute.manager self._update_available_resource(context, 
resources)
  botkaERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in 
inner
  botkaERROR nova.compute.manager return f(*args, **kwargs)
  botkaERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 726, 
in _update_available_resource
  botkaERROR nova.compute.manager self._init_compute_node(context, 
resources)
  botkaERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 593, 
in _init_compute_node
  botkaERROR nova.compute.manager self._update(context, cn)
  botkaERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/retrying.py", line 68, in wrapped_f
  botkaERROR nova.compute.manager return Retrying(*dargs, **dkw).call(f, 
*args, **kw)
  botkaERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/retrying.py", line 223, in call
  botkaERROR nova.compute.manager return attempt.get(self._wrap_exception)
  botkaERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
  botkaERROR nova.compute.manager six.reraise(self.value[0], self.value[1], 
self.value[2])
  botkaERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
  botkaERROR nova.compute.manager attempt = Attempt(fn(*args, **kwargs), 
attempt_number, False)
  botkaERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 938, 
in _update
  botkaERROR nova.compute.manager self._update_to_placement(context, 
compute_node)
  botkaERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 907, 
in _update_to_placement
  botkaERROR nova.compute.manager 
reportclient.update_from_provider_tree(context, prov_tree)
  botkaERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, 
in __run_method
  botkaERROR nova.compute.manager return getattr(self.instance, 
__name)(*args, **kwargs)
  botkaERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/scheduler/client/repor