[Yahoo-eng-team] [Bug 1750829] Re: RFE: libvirt: Add ability to configure extra CPU flags for named CPU models

2018-07-06 Thread Frode Nordahl
** Also affects: charm-nova-compute
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1750829

Title:
  RFE: libvirt: Add ability to configure extra CPU flags for named CPU
  models

Status in OpenStack nova-compute charm:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  Fix Released
Status in OpenStack Compute (nova) pike series:
  Fix Committed
Status in OpenStack Compute (nova) queens series:
  Fix Committed

Bug description:
  Motivation
  ----------

  The recent "Meltdown" CVE fixes resulted in a critical performance
  penalty. From [*]:

  [...] However, in examining both the various fixes rolled out in
  actual Linux distros over the past few days and doing some very
  informal surveying of environments I have access to, I discovered
  that the PCID ["process-context identifiers"] processor feature,
  which used to be a virtual no-op, is now a performance AND security
  critical item.[...]

  So if a Nova user has applied all the "Meltdown" CVE fixes and is using
  a named CPU model (like "IvyBridge" or "Westmere", which specifically
  lack the said obscure "PCID" feature), they will incur severe
  performance degradation[*].

  Note that some Intel *physical* CPUs themselves include the 'pcid'
  CPU feature flag; but the named CPU models provided by libvirt & QEMU
  lack that flag, hence we explicitly specify it for virtual CPUs via the
  following proposed config attribute.

  [*] https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU

  Proposed change
  ---------------

  Modify Nova's libvirt driver such that it will be possible to set
  granular CPU feature flags for named CPU models.  E.g. to explicitly
  specify the 'pcid' feature flag with Intel IvyBridge CPU model, set the
  following in /etc/nova.conf:

  ...
  [libvirt]
  cpu_model=IvyBridge
  cpu_model_extra_flags="pcid"
  ...

  The list of known CPU feature flags ('vmx', 'xtpr', 'pcid', et cetera)
  can be found in /usr/share/libvirt/cpu_map.xml.

  Note that before specifying extra CPU feature flags, one should check
  whether the named CPU models (provided by libvirt) already include the
  said flags.  E.g. the 'Broadwell' and 'Haswell-noTSX' named CPU models
  provided by libvirt already provide the 'pcid' CPU feature flag.
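
  A quick way to see which flags a given named model carries is to parse
  that file directly.  A minimal sketch (not part of the proposal; the
  path and XML layout are those of libvirt releases of this era, and
  inherited parent models are not followed):

  import xml.etree.ElementTree as ET

  def model_flags(name, path="/usr/share/libvirt/cpu_map.xml"):
      # List the feature flags declared directly on the named CPU model.
      root = ET.parse(path).getroot()
      for model in root.iter("model"):
          if model.get("name") == name:
              return sorted(f.get("name") for f in model.findall("feature"))
      return []

  print(model_flags("Broadwell"))  # should include 'pcid'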

  Other use cases
  ---------------

    - Nested Virtualization — an operator can specify the Intel 'vmx' or
  AMD 'svm' flags in the level-1 guest (i.e. the guest hypervisor)

    - Ability to use 1GB huge pages with Haswell model as one use case for
  extra flags (thanks: Daniel Berrangé, for mentioning this scenario):

  cpu_model=Haswell
  cpu_model_extra_flags="pdpe1gb"

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-compute/+bug/1750829/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1780503] [NEW] identity.authenticate CADF initiator id is random

2018-07-06 Thread Gage Hugo
Public bug reported:

When enabling CADF notifications and clearing the notification_opt_out
setting[0] (which causes keystone to be more chatty with notifications)
in order to audit identity.authenticate events, keystone sometimes
emits a notification for the identity.authenticate event where the
initiator's ID is a random UUID that doesn't match up to any user.

An example of this is shown below, where keystone only has one user
(admin). The config values for enabling CADF notifications were set
here:

DEFAULT:
  notification_format: cadf
  notification_opt_out: ""
oslo_messaging_notifications:
  driver: messagingv2
 

ubuntu@zbook:~$ openstack --os-cloud openstack_helm token issue
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2018-07-07T10:55:00+0000         |
| id         | gABbP_NE7uqaSEN6dDR4sEDB5N0EvOA085lp82_puZmDxeVV16ulJ_4wCp_FR7suulqGyOf078kXWabvbL8jn45pBS95qRHfJeHDYZtf-mDsjFWm22YaiwqYnSUImz3Y2HsCD9ps_oJgwc2BHQUHHIYCiQeWQ-XmkzEvlc6tqQwflWFhHoM |
| project_id | f9e2428b6863443f85bcbb11ac6c300e |
| user_id    | 37d3c436d45347529926a4887607d01b |
+------------+----------------------------------+

ubuntu@zbook:~$ python rabbitmqadmin --host=[redacted] --port=15672 --vhost="keystone" --username=superuser --password=123456 get queue=notifications.info ackmode=ack_requeue_false | tail -n +4 | head -n +1

| notifications.info | keystone | 0 | {"oslo.message":
"{\"priority\": \"INFO\", \"_unique_id\":
\"c4180ddc9500419898d6dd89086c1a0a\", \"event_type\":
\"identity.authenticate\", \"timestamp\": \"2018-07-06
22:55:00.205671\", \"publisher_id\": \"identity.keystone-api-
7d5c6cff4-g9dvd\", \"payload\": {\"typeURI\":
\"http://schemas.dmtf.org/cloud/audit/1.0/event\";, \"initiator\":
{\"typeURI\": \"service/security/account/user\", \"host\": {\"agent\":
\"osc-lib/1.10.0 keystoneauth1/3.7.0 python-requests/2.18.4
CPython/2.7.12\", \"address\": \"[redacted]\"}, \"id\":
\"936c1487-eff3-59cc-b424-096cff3cd6e9\"}, \"target\": {\"typeURI\":
\"service/security/account/user\", \"id\": \"932768de-4bf4-5c83-88cc-
11f33f39cba9\"}, \"observer\": {\"typeURI\": \"service/security\",
\"id\": \"9e53891b98b84bb898c0419e16426eca\"}, \"eventType\":
\"activity\", \"eventTime\": \"2018-07-06T22:55:00.205401+\",
\"action\": \"authenticate\", \"outcome\": \"success\", \"id\":
\"bf658c41-24b5-5075-9aee-64e6b3db92cc\"}, \"message_id\":
\"b1026bd5-c0d2-48af-adec-dc44c2e1a46b\"}", "oslo.version": "2.0"} |
1054  | string   | False   |
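
The initiator id in that payload (936c1487-eff3-59cc-b424-096cff3cd6e9)
matches none of keystone's users. A quick way to pull it out for
comparison (a sketch; assumes the rabbitmqadmin row above was saved to
msg.txt, a hypothetical filename):

import json
import re

raw = open("msg.txt").read()
outer = re.search(r'\{"oslo\.message".*\}', raw, re.DOTALL).group(0)
# The oslo.message envelope carries the CADF payload as a JSON string.
payload = json.loads(json.loads(outer)["oslo.message"])["payload"]
print(payload["initiator"]["id"])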

ubuntu@zbook:~$ openstack --os-cloud openstack_helm user list
+--+---+
| ID   | Name  |
+--+---+
| 37d3c436d45347529926a4887607d01b | admin |
+--+---+

ubuntu@zbook:~$ python rabbitmqadmin --host=[redacted] --port=15672 --vhost="keystone" --username=superuser --password=123456 get queue=notifications.info ackmode=ack_requeue_false | tail -n +4 | head -n +1

| notifications.info | keystone | 1 | {"oslo.message":
"{\"priority\": \"INFO\", \"_unique_id\":
\"c0fa7577c07a4de39013f41b33185489\", \"event_type\":
\"identity.authenticate\", \"timestamp\": \"2018-07-06
22:56:45.534129\", \"publisher_id\": \"identity.keystone-api-
7d5c6cff4-g9dvd\", \"payload\": {\"typeURI\":
\"http://schemas.dmtf.org/cloud/audit/1.0/event\";, \"initiator\":
{\"typeURI\": \"service/security/account/user\", \"host\": {\"agent\":
\"osc-lib/1.10.0 keystoneauth1/3.7.0 python-requests/2.18.4
CPython/2.7.12\", \"address\": \"[redacted]\"}, \"id\":
\"129bfaf0-a8e3-579b-9030-0a5917547b46\"}, \"target\": {\"typeURI\":
\"service/security/account/user\", \"id\": \"f67acddd-78df-
58f1-be93-dcb196e44a9e\"}, \"observer\": {\"typeURI\":
\"service/security\", \"id\": \"9e53891b98b84bb898c0419e16426e

[Yahoo-eng-team] [Bug 1757207] Re: compute resource providers not equal to compute nodes in deployment

2018-07-06 Thread iain MacDonnell
Shouldn't the upgrade check simply discount deleted compute_nodes?

+------------------------------------------------------------------------+
| Check: Resource Providers                                              |
| Result: Warning                                                        |
| Details: There are 34 compute resource providers and 40 compute nodes  |
|          in the deployment. Ideally the number of compute resource     |
|          providers should equal the number of enabled compute nodes    |
|          otherwise the cloud may be underutilized. See                 |
|          https://docs.openstack.org/nova/latest/user/placement.html    |
|          for more details.                                             |
+------------------------------------------------------------------------+


mysql> select count(*) from compute_nodes;
+----------+
| count(*) |
+----------+
|       40 |
+----------+
1 row in set (0.00 sec)


mysql> select count(*) from compute_nodes where deleted=0;
+----------+
| count(*) |
+----------+
|       34 |
+----------+
1 row in set (0.00 sec)
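
For comparison, the provider count that the check reports comes from the
resource_providers table in the API database (a sketch; in Pike the
placement tables live in the nova_api schema, though this varies by
deployment):

mysql> select count(*) from nova_api.resource_providers;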


** Changed in: nova
   Status: Expired => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1757207

Title:
  compute resource providers not equal to compute nodes in deployment

Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  Description
  ===========
  When I execute the command `nova-status upgrade check`,
  output:

  nova-status upgrade check
  +----------------------------------------------------------------------+
  | Upgrade Check Results                                                |
  +----------------------------------------------------------------------+
  | Check: Cells v2                                                      |
  | Result: Success                                                      |
  | Details: None                                                        |
  +----------------------------------------------------------------------+
  | Check: Placement API                                                 |
  | Result: Success                                                      |
  | Details: None                                                        |
  +----------------------------------------------------------------------+
  | Check: Resource Providers                                            |
  | Result: Warning                                                      |
  | Details: There are 4 compute resource providers and 15 compute nodes |
  |          in the deployment. Ideally the number of compute resource   |
  |          providers should equal the number of enabled compute nodes  |
  |          otherwise the cloud may be underutilized. See               |
  |          http://docs.openstack.org/developer/nova/placement.html     |
  |          for more details.                                           |
  +----------------------------------------------------------------------+

  Steps to reproduce
  ==================
  How to replicate this?
  Remove the hosts from the openstack controller:
  nova hypervisor-list
  nova service-delete {id}

  Then run:
  su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
  The deleted compute node will be added again as a new node.
  run:
  nova-status upgrade check

  Expected result
  ===============
  No warning when you run:
  nova-status upgrade check

  Actual result
  =============
  You can find the warning.
  This causes issues with the placement of new VMs: the compute host which
  was deleted and added again will not be considered during VM scheduling
  and placement.

  Environment
  ===========
  OpenStack Pike release
  Neutron Networking which is default.

  Logs and Configs
  ================
  Config as per the OpenStack documentation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1757207/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1678056] Re: RequestSpec records are never deleted when destroying an instance

2018-07-06 Thread Matt Riedemann
** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova/pike
   Status: New => In Progress

** Changed in: nova/pike
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova/pike
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1678056

Title:
  RequestSpec records are never deleted when destroying an instance

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  New
Status in OpenStack Compute (nova) pike series:
  In Progress

Bug description:
  When an instance is created, Nova adds a record in the API DB
  'request_specs' table by providing the user request. That's used by
  the scheduler for knowing the boot request, also when the instance is
  moved afterwards.

  That said, when destroying the instance, we don't delete the related 
RequestSpec record.
  Of course, operators could run a script for deleting them by checking the 
instance UUIDs, but it would be better if when an instance is hard-deleted (ie. 
when operators don't use [DEFAULT]/reclaim_instance_interval for restoring 
deleted instances), we could then delete the RequestSpec too.
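
A sketch of such a cleanup script (assumptions: SQLAlchemy, placeholder
DB URLs, and the Pike-era split where request_specs lives in the API
database while instances lives in the cell database):

from sqlalchemy import create_engine, text

api_db = create_engine("mysql+pymysql://nova:secret@dbhost/nova_api")
cell_db = create_engine("mysql+pymysql://nova:secret@dbhost/nova")

# Instances that were archived/purged are gone from this table entirely,
# so any spec whose UUID is absent here is an orphan.
with cell_db.connect() as conn:
    known = {row[0] for row in
             conn.execute(text("SELECT uuid FROM instances"))}

with api_db.begin() as conn:
    specs = [row[0] for row in
             conn.execute(text("SELECT instance_uuid FROM request_specs"))]
    for uuid in specs:
        if uuid not in known:
            conn.execute(text("DELETE FROM request_specs"
                              " WHERE instance_uuid = :u"), {"u": uuid})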

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1678056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1724621] Re: nova-manage cell_v2 verify_instance returns a valid instance mapping even after the instance is deleted/archived

2018-07-06 Thread Matt Riedemann
** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova/pike
   Status: New => Fix Released

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova/pike
   Importance: Undecided => Medium

** Changed in: nova/pike
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova/pike
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1724621

Title:
  nova-manage cell_v2 verify_instance returns a valid instance mapping
  even after the instance is deleted/archived

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  In Progress

Bug description:
  Although nova-manage cell_v2 verify_instance is used to check if the
  provided instance is correctly mapped to a cell or not, this should
  not be returning a valid mapping message if the instance itself is
  deleted. It should return an error message saying 'The instance does
  not exist'.

  Steps to reproduce :

  1. Create an instance :

  -> nova boot --image 831bb8a0-9305-4cd7-b985-cbdadfb5d3db --flavor m1.nano test
  -> nova list
  +--------------------------------------+------+--------+------------+-------------+--------------------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks                       |
  +--------------------------------------+------+--------+------------+-------------+--------------------------------+
  | aec6eb34-6aaf-4883-8285-348d40fdac87 | test | ACTIVE | -          | Running     | public=2001:db8::4, 172.24.4.9 |
  +--------------------------------------+------+--------+------------+-------------+--------------------------------+

  
  2. Delete the instance :

  -> nova delete test
  Request to delete server test has been accepted.
  -> nova list
  +----+------+--------+------------+-------------+----------+
  | ID | Name | Status | Task State | Power State | Networks |
  +----+------+--------+------------+-------------+----------+
  +----+------+--------+------------+-------------+----------+

  
  3. Verify Instance :

  -> nova-manage cell_v2 verify_instance --uuid aec6eb34-6aaf-4883-8285-348d40fdac87
  Instance aec6eb34-6aaf-4883-8285-348d40fdac87 is in cell: cell5 (c5ccba5d-1a45-4739-a5dd-d665a1b19301)

  Basically the message that we get is misleading for a deleted
  instance. This is because verify_instance queries the
  instance_mappings table which maintains a mapping of the deleted
  instances as well.
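
  The stale row can be seen directly in the API database (a sketch; the
  instance_mappings columns are from the Pike-era schema):

  mysql> select instance_uuid, cell_id from instance_mappings
      -> where instance_uuid='aec6eb34-6aaf-4883-8285-348d40fdac87';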

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1724621/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1780481] [NEW] ubuntu/centos/debian: get_linux_distro has different behavior than platform.dist

2018-07-06 Thread Chad Smith
Public bug reported:

Recently cloud-init added get_linux_distro to replace platform.dist()
functionality. On ubuntu it returns a tuple that doesn't match what the
platform.dist module reports.

python3 -c 'from platform import dist; from cloudinit.util import 
get_linux_distro; print(dist()); print(get_linux_distro())'
('Ubuntu', '16.04', 'xenial')
('ubuntu', '16.04', 'x86_64')


This breaks ntp configuration in cloud-init on xenial, which checks
(_name, _version, codename) = util.system_info()['dist']
if codename == "xenial" and not util.system_is_snappy():
...


Also CentOS 7 behavior:
[root@c71 ~]# python -c 'from platform import dist; from cloudinit.util import 
get_linux_distro; print(dist()); print(get_linux_distro())'
('centos', '7.5.1804', 'Core')
('centos', '7', 'x86_64')

Debian stretch:
root@debStretch:~# python -c 'from platform import dist; print(dist());'
('debian', '9.4', '')


openSuse 42.3:
sles42:~ # python3 -c 'from platform import dist; print(dist());'
('SuSE', '42.3', 'x86_64')
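
One possible normalization is to read the codename from /etc/os-release
rather than returning the architecture. A sketch (the helper name and
fallback parsing are mine, not cloud-init's):

def codename_from_os_release(path="/etc/os-release"):
    data = {}
    with open(path) as f:
        for line in f:
            if "=" in line:
                key, _, val = line.strip().partition("=")
                data[key] = val.strip('"')
    if "VERSION_CODENAME" in data:        # e.g. 'xenial' on ubuntu
        return data["VERSION_CODENAME"]
    version = data.get("VERSION", "")     # e.g. '16.04.4 LTS (Xenial Xerus)'
    if "(" in version:
        return version.split("(")[-1].rstrip(")").split()[0].lower()
    return ""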

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Summary changed:

- ubuntu/centos: get_linux_distro has differnet behavior than  platform.dist
+ ubuntu/centos/debian: get_linux_distro has different behavior than 
platform.dist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1780481

Title:
  ubuntu/centos/debian: get_linux_distro has different behavior than
  platform.dist

Status in cloud-init:
  New

Bug description:
  Recently cloud-init added get_linux_distro to replace platform.dist()
  functionality. On ubuntu it returns a tuple that doesn't match what the
  platform.dist module reports.

  python3 -c 'from platform import dist; from cloudinit.util import 
get_linux_distro; print(dist()); print(get_linux_distro())'
  ('Ubuntu', '16.04', 'xenial')
  ('ubuntu', '16.04', 'x86_64')


  This breaks ntp configuration in cloud-init on xenial, which checks
  (_name, _version, codename) = util.system_info()['dist']
  if codename == "xenial" and not util.system_is_snappy():
  ...

  
  Also CentOS 7 behavior:
  [root@c71 ~]# python -c 'from platform import dist; from cloudinit.util 
import get_linux_distro; print(dist()); print(get_linux_distro())'
  ('centos', '7.5.1804', 'Core')
  ('centos', '7', 'x86_64')

  Debian stretch:
  root@debStretch:~# python -c 'from platform import dist; print(dist());'
  ('debian', '9.4', '')

  
  openSuse 42.3:
  sles42:~ # python3 -c 'from platform import dist; print(dist());'
  ('SuSE', '42.3', 'x86_64')

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1780481/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1780373] Re: server_group_members quota check failure with multi-create

2018-07-06 Thread Matt Riedemann
** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1780373

Title:
  server_group_members quota check failure with multi-create

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) pike series:
  New
Status in OpenStack Compute (nova) queens series:
  New

Bug description:
  Circumstance:
  When multi-creating a quota-exceeding number of instances in a server
  group, the request passes the server_group_members quota check.

  Actual result:
  Servers successfully created.

  Expected result:
  Raising QuotaExceeded API exception.

  Reproduce steps (Queens):
  1 nova server-group-create sg affinity (policy shouldn't matter)
  2 set server_group_members=2 in nova.conf (so we don't need to create
    too many servers to test)
  3 nova boot --flavor flavor --net net-name=netname --image image
    --max-count 3 server-name
  Then we will see all 3 servers created successfully, violating the
  server_group_members quota policy.
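
  The check the API should be making has to count every instance in the
  multi-create request against the group's member quota. A minimal
  sketch (names are mine, not nova's):

  def check_server_group_quota(current_members, requested_count, limit):
      # A request for N instances must count all N members, not just 1.
      if current_members + requested_count > limit:
          raise Exception("server_group_members quota exceeded")

  check_server_group_quota(0, 3, 2)   # the repro above: should raise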

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1780373/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1780460] [NEW] Help text for encrypted volume type should be updated (LuksEncryptor)

2018-07-06 Thread Eric Harney
Public bug reported:

The help text in the dialog to edit encrypted volume types describes the
Provider field as:

"The Provider is the class providing encryption support (e.g.,
LuksEncryptor). "


This should reference "luks" instead of "LuksEncryptor" now.  Both work, but 
"luks" is simpler and should be the recommended value.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1780460

Title:
  Help text for encrypted volume type should be updated (LuksEncryptor)

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The help text in the dialog to edit encrypted volume types describes
  the Provider field as:

  "The Provider is the class providing encryption support (e.g.,
  LuksEncryptor). "

  
  This should reference "luks" instead of "LuksEncryptor" now.  Both work, but 
"luks" is simpler and should be the recommended value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1780460/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1780457] [NEW] Help text for encrypted volume type has incorrect description

2018-07-06 Thread Eric Harney
Public bug reported:

The key size field has this description:

 The Key Size is the size of the encryption key, in bits (e.g., 128,
256). If the field is left empty, the provider default will be used.

This is not really correct, because 128 bit keys do not actually work
with LUKS as used by Cinder volumes.  It should only recommend 256 bit
keys.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1780457

Title:
  Help text for encrypted volume type has incorrect description

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The key size field has this description:

   The Key Size is the size of the encryption key, in bits (e.g., 128,
  256). If the field is left empty, the provider default will be used.

  This is not really correct, because 128 bit keys do not actually work
  with LUKS as used by Cinder volumes.  It should only recommend 256 bit
  keys.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1780457/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1731072] Re: AllocationCandidates.get_by_filters returns garbage with multiple aggregates

2018-07-06 Thread Chris Dent
As the main issue here has been mostly addressed and there are many
other related issues, it's better to close this and deal with the more
granular issues as they come up.

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1731072

Title:
  AllocationCandidates.get_by_filters returns garbage with multiple
  aggregates

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I set up a test scenario with multiple providers (sharing and non),
  across multiple aggregates.  Requesting allocation candidates gives
  some candidates as expected, but some are garbled.  Bad behaviors
  include:

  (1) When inventory in a given RC is provided both by a non-sharing and a 
sharing RP in an aggregate, the sharing RP is ignored in the results (this is 
tracked via https://bugs.launchpad.net/nova/+bug/1724613)
  (2) When inventory in a given RC is provided solely by a sharing RP, I don't 
get the expected candidate where that sharing RP provides that inventory and 
the rest comes from the non-sharing RP.
  (3) The above applies when there are multiple sharing RPs in the same 
aggregate providing that same shared resource.
  (4) ...and also when the sharing RPs provide different resources.

  And we get a couple of unexpected candidates that are really garbled:

  (5) Where there are multiple sharing RPs with different resources, one 
candidate has the expected resources from the non-sharing RP and one of the 
sharing RPs, but is missing the third requested resource entirely.
  (6) With that same setup, we get another candidate that has the expected 
resource from the non-sharing RP; but duplicated resources spread across 
multiple sharing RPs from *different* *aggregates*.  This one is also missing 
one of the requested resources entirely.

  I will post a commit shortly that demonstrates this behavior.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1731072/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1765376] Re: nova scheduler log contains html

2018-07-06 Thread Chris Dent
Marking as fix released. If it becomes an issue we can consider
backporting it.

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1765376

Title:
  nova scheduler log contains html

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===========
  The nova scheduler log contains some HTML (see actual result). Log files
  should not contain any HTML. I suspect some error-message parsing has
  gone wrong here.

  
  Actual result
  =============
  Log Message:

  2018-04-19 13:12:53.109 12125 WARNING nova.scheduler.client.report
  [req-a35b4d7e-9914-48f1-b912-27cedb6eebdd cd9715e9b4714bc6b4d77f15f12ba5a9
  fa976f761aad4d378706dfc26ddf6004 - default default] Unable to submit
  allocation for instance 6dc8e703-1174-499d-aa9b-4d05f83b7784 (409
  <html>
   <head>
    <title>409 Conflict</title>
   </head>
   <body>
    <h1>409 Conflict</h1>
    There was a conflict when trying to complete your request.
  Unable to allocate inventory: Unable to create allocation for 'VCPU' on
  resource provider '322b4b21-f3ff-4d59-b6c8-8c1a9fe2b530'. The requested
  amount would violate inventory constraints.
   </body>
  </html>
  )
  

  Steps to reproduce
  ==================
  This log line occurred after trying to live-migrate a VM from one node
  to another; obviously the request failed to do so.

  Expected result
  ===============
  The log line should be better formatted, for example like this:

  
  2018-04-19 13:12:53.109 12125 WARNING nova.scheduler.client.report 
[req-a35b4d7e-9914-48f1-b912-27cedb6eebdd cd9715e9b4714bc6b4d77f15f12ba5a9 
fa976f761aad4d378706dfc26ddf6004 - default default] Unable to submit allocation 
for instance 6dc8e703-1174-499d-aa9b-4d05f83b7784 - HTTP Error 409 - There was 
a conflict when trying to complete your request. Unable to allocate inventory: 
Unable to create allocation for 'VCPU' on resource provider 
'322b4b21-f3ff-4d59-b6c8-8c1a9fe2b530'. The requested amount would violate 
inventory constraints.
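
  A hypothetical cleanup helper that would produce such a line (a sketch
  of the idea only, not nova's actual fix):

  import re

  def strip_html(body):
      # Drop tags and collapse whitespace so the error reads as one line.
      text = re.sub(r"<[^>]+>", " ", body)
      return " ".join(text.split())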
  

  Environment
  ===========
  Ubuntu 16.04 with the following packages from Ubuntu Cloud Archive:

  nova-api   2:16.0.4-0ubuntu1~cloud0
  nova-common2:16.0.4-0ubuntu1~cloud0
  nova-conductor 2:16.0.4-0ubuntu1~cloud0
  nova-consoleauth   2:16.0.4-0ubuntu1~cloud0
  nova-novncproxy2:16.0.4-0ubuntu1~cloud0
  nova-placement-api 2:16.0.4-0ubuntu1~cloud0
  nova-scheduler 2:16.0.4-0ubuntu1~cloud0
  python-nova2:16.0.4-0ubuntu1~cloud0
  python-novaclient  2:9.1.0-0ubuntu1~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1765376/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1780453] [NEW] openvswitch-agent doesn't try to rebind "binding_failed" ports on startup anymore

2018-07-06 Thread Ralf Haferkamp
Public bug reported:

When ports are created while the openvswitch agent is down, they end up
with "binding:vif_type=binding_failed". In the past, upon the next start
of the agent, it picked up those ports and would retry to bind them at
least once.

However, currently that is not happening anymore. The agent just logs a
message about the port not being bound. This is from the ovs-agent's log
directly after startup (9cdbb5cb-6e93-4043-bca9-b063cd43626d being the
port id of the "binding_failed" port):


Jul 06 14:54:50 net1 neutron-openvswitch-agent[24189]: DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [None 
req-21d4d33a-43f9-4100-a452-3ca6aa62b9e3 None None] Agent rpc_loop - 
iteration:0 - port information retrieved. Elapsed:0.045 {{(pid=24189) rpc_loop 
/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:2086}}
Jul 06 14:54:50 net1 neutron-openvswitch-agent[24189]: DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [None 
req-21d4d33a-43f9-4100-a452-3ca6aa62b9e3 None None] Starting to process devices 
in:{'current': set([u'9cdbb5cb-6e93-4043-bca9-b063cd43626d']), 'removed': 
set([]), 'added': set([u'9cdbb5cb-6e93-4043-bca9-b063cd43626d'])} {{(pid=24189) 
rpc_loop 
/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:2093}}
Jul 06 14:54:50 net1 neutron-openvswitch-agent[24189]: DEBUG 
neutron.api.rpc.handlers.resources_rpc [None 
req-21d4d33a-43f9-4100-a452-3ca6aa62b9e3 None None] 
neutron.api.rpc.handlers.resources_rpc.ResourcesPullRpcApi method bulk_pull 
called with arguments (, 
'Port') {'filter_kwargs': {'id': (u'9cdbb5cb-6e93-4043-bca9-b063cd43626d',)}} 
{{(pid=24189) wrapper /usr/lib/python2.7/site-packages/oslo_log/helpers.py:66}}
Jul 06 14:54:50 net1 neutron-openvswitch-agent[24189]: DEBUG 
neutron.agent.resource_cache [None req-21d4d33a-43f9-4100-a452-3ca6aa62b9e3 
None None] Received new resource Port: 
Port(admin_state_up=True,allowed_address_pairs=[],binding=PortBinding,binding_levels=[],created_at=2018-07-06T14:54:09Z,data_plane_status=,description='',device_id='dhcp04771cbe-437e-5f2c-860d-465c2ee070fb-90f6f6f9-8fe5-4865-9550-39189780de7f',device_owner='network:dhcp',dhcp_options=[],distributed_bindings=[],dns=None,fixed_ips=[IPAllocation,IPAllocation],id=9cdbb5cb-6e93-4043-bca9-b063cd43626d,mac_address=fa:16:3e:b4:65:ba,name='',network_id=90f6f6f9-8fe5-4865-9550-39189780de7f,project_id='b5f8567fc10d4bebaf633e24afb353af',qos_policy_id=None,revision_number=2,security=PortSecurity(9cdbb5cb-6e93-4043-bca9-b063cd43626d),security_group_ids=set([]),status='DOWN',updated_at=2018-07-06T14:54:11Z)
 {{(pid=24189) record_resource_update 
/opt/stack/neutron/neutron/agent/resource_cache.py:187}}
Jul 06 14:54:50 net1 neutron-openvswitch-agent[24189]: DEBUG 
neutron_lib.callbacks.manager [None req-21d4d33a-43f9-4100-a452-3ca6aa62b9e3 
None None] Notify callbacks 
['neutron.api.rpc.handlers.securitygroups_rpc.SecurityGroupServerAPIShim._handle_sg_member_update-8322652',
 
'neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSPluginApi._legacy_notifier-8205701']
 for Port, after_update {{(pid=24189) _notify_loop 
/usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:193}}
Jul 06 14:54:50 net1 neutron-openvswitch-agent[24189]: DEBUG 
neutron.agent.resource_cache [None req-21d4d33a-43f9-4100-a452-3ca6aa62b9e3 
None None] 1 resources returned for queries set([('Port', ('id', 
(u'9cdbb5cb-6e93-4043-bca9-b063cd43626d',)))]) {{(pid=24189) 
_flood_cache_for_query /opt/stack/neutron/neutron/agent/resource_cache.py:85}}
Jul 06 14:54:50 net1 neutron-openvswitch-agent[24189]: WARNING 
neutron.agent.rpc [None req-21d4d33a-43f9-4100-a452-3ca6aa62b9e3 None None] 
Device 
Port(admin_state_up=True,allowed_address_pairs=[],binding=PortBinding,binding_levels=[],created_at=2018-07-06T14:54:09Z,data_plane_status=,description='',device_id='dhcp04771cbe-437e-5f2c-860d-465c2ee070fb-90f6f6f9-8fe5-4865-9550-39189780de7f',device_owner='network:dhcp',dhcp_options=[],distributed_bindings=[],dns=None,fixed_ips=[IPAllocation,IPAllocation],id=9cdbb5cb-6e93-4043-bca9-b063cd43626d,mac_address=fa:16:3e:b4:65:ba,name='',network_id=90f6f6f9-8fe5-4865-9550-39189780de7f,project_id='b5f8567fc10d4bebaf633e24afb353af',qos_policy_id=None,revision_number=2,security=PortSecurity(9cdbb5cb-6e93-4043-bca9-b063cd43626d),security_group_ids=set([]),status='DOWN',updated_at=2018-07-06T14:54:11Z)
 is not bound.
Jul 06 14:54:50 net1 neutron-openvswitch-agent[24189]: DEBUG 
ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): 
ListPortsCommand(bridge=br-int) {{(pid=24189) do_commit 
/usr/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:84}}
Jul 06 14:54:50 net1 neutron-openvswitch-agent[24189]: DEBUG 
ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change 
{{(pid=24189) do_commit 
/usr/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:121}}
Jul 06 14:54:50 

[Yahoo-eng-team] [Bug 1780441] [NEW] Rebuild does not respect number of PCIe devices

2018-07-06 Thread Oscar Kraemer
Public bug reported:

Description
===========
When rebuilding an instance with GPUs attached, it may get additional GPUs
if free ones are available. The number can vary between rebuilds; in most
rebuilds the instance receives the same number of GPUs as before the
latest rebuild.

Steps to reproduce
==================
$ openstack flavor show 5fd13401-7daa-464d-acf1-432d29a3dd92
+----------------------------+-----------------------------------------------+
| Field                      | Value                                         |
+----------------------------+-----------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                         |
| OS-FLV-EXT-DATA:ephemeral  | 0                                             |
| access_project_ids         | None                                          |
| disk                       | 80                                            |
| id                         | 5fd13401-7daa-464d-acf1-432d29a3dd92          |
| name                       | gpu.2.1gpu                                    |
| os-flavor-access:is_public | True                                          |
| properties                 | gpu_m10='true', pci_passthrough:alias='M10:1' |
| ram                        | 5000                                          |
| rxtx_factor                | 1.0                                           |
| swap                       |                                               |
| vcpus                      | 2                                             |
+----------------------------+-----------------------------------------------+

$ openstack server create my-gpu-instace --image CentOS-7 --network my-
project-network --flavor 5fd13401-7daa-464d-acf1-432d29a3dd92 --key-name
my-key --security-group default

On the gpu node:
[root@g1 ~]# virsh dumpxml instance-0001b22e |grep vfio
      <driver name='vfio'/>

$ openstack server rebuild 29d5a9ba-0829-4e33-9d1c-4ee66b55a940

On the gpu node:
[root@g1 ~]# virsh dumpxml instance-0001b22e |grep vfio
      <driver name='vfio'/>
      <driver name='vfio'/>
      <driver name='vfio'/>

* The database:
MariaDB [nova]> select * from pci_devices where instance_uuid='29d5a9ba-0829-4e33-9d1c-4ee66b55a940';
| created_at          | updated_at          | deleted_at | deleted | id | compute_node_id | address      | product_id | vendor_id | dev_type | dev_id           | label           | status    | extra_info | instance_uuid                        | request_id | numa_node | parent_addr |
| 2018-06-27 10:54:44 | 2018-07-04 12:12:57 | NULL       |       0 |  6 |              36 | 0000:3e:00.0 | 13bd       | 10de      | type-PCI | pci_0000_3e_00_0 | label_10de_13bd | allocated | {}         | 29d5a9ba-0829-4e33-9d1c-4ee66b55a940 | NULL       |         0 | NULL        |
| 2018-06-27 10:54:44 | 2018-07-06 11:00:21 | NULL       |       0 |  9 |              36 | 0000:3f:00.0 | 13bd       | 10de      | type-PCI | pci_0000_3f_00_0 | label_10de_13bd | allocated | {}         | 29d5a9ba-0829-4e33-9d1c-4ee66b55a940 | NULL       |         0 | NULL        |
| 2018-06-27 10:54:44 | 2018-07-04 12:16:31 | NULL       |       0 | 12 |              36 | 0000:40:00.0 | 13bd       | 10de      | type-PCI | pci_0000_40_00_0 | label_10de_13bd | allocated | {}         | 29d5a9ba-0829-4e33-9d1c-4ee66b55a940 | NULL       |         0 | NULL        |
3 rows in set (0.01 sec)
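
A quick way to spot instances holding more devices than their flavor
allows (a sketch against the same table):

MariaDB [nova]> select instance_uuid, count(*) from pci_devices
    -> where status='allocated' group by instance_uuid;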


* After some additional rebuilds (5-10), there are 4 GPUs in the database
but only one visible from virsh


MariaDB [nova]> select * from pci_devices where instance_uuid='29d5a9ba-0829-4e33-9d1c-4ee66b55a940';
| created_at | updated_at | deleted_at | deleted | id | compute_node_id | address | product_id | vendor_id | dev_type | dev_id | label | status | extra_info | instance_uuid | request_id | numa_node | parent_addr |

[Yahoo-eng-team] [Bug 1748280] Re: 'nova reboot' on arm64 is just 'nova reboot --hard' but slower

2018-07-06 Thread Corey Bryant
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1748280

Title:
  'nova reboot' on arm64 is just 'nova reboot --hard' but slower

Status in OpenStack Compute (nova):
  New
Status in nova package in Ubuntu:
  Confirmed
Status in systemd package in Ubuntu:
  New

Bug description:
  Within Canonical's multi-architecture compute cloud, I observed that
  arm64 instances were taking an unreasonable amount of time to return
  from a 'nova reboot' (2 minutes +, vs. ~16s for POWER in the same
  cloud region).

  Digging into this, I find that a nova soft reboot request is doing
  nothing at all on arm64 (no events visible within the qemu guest, and
  no reboot is triggered within the guest), and then after some sort of
  timeout (~2m), 'nova reboot' falls back to a hard reset of the guest.

  Since this is entirely predictable on the arm64 platform (because
  there's no implementation of ACPI plumbed through), 'nova reboot' on
  arm64 should just skip straight to the hard reboot and save everyone
  some clock time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1748280/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1780428] [NEW] Mitigate OSSN-0075

2018-07-06 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/579507
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/glance" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 5cc9d999352c21d3f4b5c39d3ea9d4378ca86544
Author: Abhishek Kekane 
Date:   Mon Jul 2 10:03:48 2018 +

Mitigate OSSN-0075

Modified the current ``glance-manage db purge`` command to eliminate images
table from purging the records. Added new command
``glance-manage db purge_images_table`` to purge the records from images
table.

DocImpact
SecurityImpact

Change-Id: Ie6641659b54543ed9f96c393d664e52a26bfaf6a
Implements: blueprint mitigate-ossn-0075
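
The new command should be invoked like the existing purge (a sketch;
the option names are assumed to match ``db purge``, e.g. --age_in_days
and --max_rows, and may differ in the final implementation):

glance-manage db purge_images_table --age_in_days 30 --max_rows 100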

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: doc glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1780428

Title:
  Mitigate OSSN-0075

Status in Glance:
  New

Bug description:
  https://review.openstack.org/579507
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/glance" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 5cc9d999352c21d3f4b5c39d3ea9d4378ca86544
  Author: Abhishek Kekane 
  Date:   Mon Jul 2 10:03:48 2018 +

  Mitigate OSSN-0075
  
  Modified the current ``glance-manage db purge`` command to eliminate 
images
  table from purging the records. Added new command
  ``glance-manage db purge_images_table`` to purge the records from images
  table.
  
  DocImpact
  SecurityImpact
  
  Change-Id: Ie6641659b54543ed9f96c393d664e52a26bfaf6a
  Implements: blueprint mitigate-ossn-0075

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1780428/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1780407] [NEW] There are some errors in neutron_l3_agent and neutron_dhcp_agent after restarting open vswitch with dpdk

2018-07-06 Thread Nanfei Chen
Public bug reported:

Environment: all in one.
OpenStack: Queens
Open vSwitch: 2.9.0
DPDK: 17.11.2
QEMU: 2.11
Private network: client-net, 10.10.10.0/24, gateway 10.10.10.1, dhcp server 
10.10.10.2.

After restarting Open vSwitch with DPDK, there are some errors in
neutron-l3-agent.log and neutron-dhcp-agent.log. The detailed content is
as follows.

neutron-l3-agent.log:
2018-07-06 16:43:35.212 34 WARNING ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: send error: Connection refused
2018-07-06 16:43:35.214 34 WARNING ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: connection dropped (Connection refused)
2018-07-06 16:43:37.216 34 WARNING ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: send error: Connection refused
2018-07-06 16:43:37.217 34 WARNING ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: connection dropped (Connection refused)

neutron-dhcp-agent.log:
2018-07-06 16:43:35.211 7 WARNING ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: send error: Connection refused
2018-07-06 16:43:35.212 7 WARNING ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: connection dropped (Connection refused)
2018-07-06 16:43:37.215 7 WARNING ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: send error: Connection refused
2018-07-06 16:43:37.216 7 WARNING ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: connection dropped (Connection refused)

In the qrouter namespace, the status of the gateway interface for a private 
network is down. So the gateway interface can not communicate with the dhcp 
server interface and virtual machines any more.
root@B-6-7:~# ip netns
qrouter-45a13539-c98c-45e9-aab6-2db6dfd687bf
qdhcp-c9b5cf2d-4df9-47fb-865c-cdb6e7bacd07
root@B-6-7:~#
root@B-6-7:~# ip netns exec qrouter-45a13539-c98c-45e9-aab6-2db6dfd687bf ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
20: qr-ac3740b8-b4: <BROADCAST,MULTICAST> mtu 1450 qdisc pfifo_fast state DOWN group default qlen 1000
link/ether fa:16:3e:88:0b:cf brd ff:ff:ff:ff:ff:ff
inet 10.10.10.1/24 brd 10.10.10.255 scope global qr-ac3740b8-b4
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe88:bcf/64 scope link
   valid_lft forever preferred_lft forever
root@B-6-7:~#
root@B-6-7:~# ip netns exec qrouter-45a13539-c98c-45e9-aab6-2db6dfd687bf ping 
10.10.10.2
PING 10.10.10.2 (10.10.10.2) 56(84) bytes of data.
From 10.10.10.1 icmp_seq=1 Destination Host Unreachable
From 10.10.10.1 icmp_seq=2 Destination Host Unreachable
From 10.10.10.1 icmp_seq=3 Destination Host Unreachable
^C
--- 10.10.10.2 ping statistics ---
6 packets transmitted, 0 received, +3 errors, 100% packet loss, time 4999ms
pipe 4

Likewise, in the qdhcp namespace, the status of the DHCP server
interface is down too. The DHCP server cannot work any more.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1780407

Title:
  There are some errors in neutron_l3_agent and neutron_dhcp_agent after
  restarting open vswitch with dpdk

Status in neutron:
  New

Bug description:
  Environment: all in one.
  OpenStack: Queens
  Open vSwitch: 2.9.0
  DPDK: 17.11.2
  QEMU: 2.11
  Private network: client-net, 10.10.10.0/24, gateway 10.10.10.1, dhcp server 
10.10.10.2.

  After restarting Open vSwitch with DPDK, there are some errors in
  neutron-l3-agent.log and neutron-dhcp-agent.log. The detailed content
  is as follows.

  neutron-l3-agent.log:
  2018-07-06 16:43:35.212 34 WARNING ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: send error: Connection refused
  2018-07-06 16:43:35.214 34 WARNING ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: connection dropped (Connection refused)
  2018-07-06 16:43:37.216 34 WARNING ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: send error: Connection refused
  2018-07-06 16:43:37.217 34 WARNING ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: connection dropped (Connection refused)

  neutron-dhcp-agent.log:
  2018-07-06 16:43:35.211 7 WARNING ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: send error: Connection refused
  2018-07-06 16:43:35.212 7 WARNING ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: connection dropped (Connection refused)
  2018-07-06 16:43:37.215 7 WARNING ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: send error: Connection refused
  2018-07-06 16:43:37.216 7 WARNING ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: connection dropped (Connection refused)

  In the qrouter namespace, the status of the gateway interface for a private 
network is down. So the gateway interface can not communicate with the dhcp 
server interfac

[Yahoo-eng-team] [Bug 1779300] Re: Libvirt LM rollback fails due to the destination domain being transient and unable to detach devices.

2018-07-06 Thread Lee Yarwood
*** This bug is a duplicate of bug 1669857 ***
https://bugs.launchpad.net/bugs/1669857

** This bug has been marked a duplicate of bug 1669857
   libvirt.VIR_DOMAIN_AFFECT_CONFIG provided when detaching volumes during LM 
rollback

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1779300

Title:
  Libvirt LM rollback fails due to the destination domain being
  transient and unable to detach devices.

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===========
  Libvirt LM rollback fails due to the destination domain being transient and 
unable to detach devices.

  Steps to reproduce
  ==================
  - Attempt to LM an instance with volumes attached
  - Cause the LM to fail

  Expected result
  ===============
  Volumes disconnected from the destination host.

  Actual result
  =============
  Volumes still connected to the destination host.

  Environment
  ===========
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 origin/master

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 Libvirt + kvm

  2. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 N/A

  3. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1779300/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1779801] Re: L3 AttributeError in doc job

2018-07-06 Thread YAMAMOTO Takashi
I added neutron as neutron-vpnaas has the same issue.
https://review.openstack.org/#/c/579742/

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1779801

Title:
  L3 AttributeError in doc job

Status in networking-midonet:
  In Progress
Status in networking-odl:
  Confirmed
Status in neutron:
  New

Bug description:
  e.g http://logs.openstack.org/87/199387/116/check/build-openstack-
  sphinx-docs/a509328/job-output.txt.gz

  2018-07-03 01:03:26.111244 | ubuntu-xenial | Warning, treated as error:
  2018-07-03 01:03:26.111548 | ubuntu-xenial | autodoc: failed to import module 
u'midonet.neutron.db.bgp_db_midonet'; the following exception was raised:
  2018-07-03 01:03:26.111691 | ubuntu-xenial | Traceback (most recent call 
last):
  2018-07-03 01:03:26.112024 | ubuntu-xenial |   File 
"/home/zuul/.venv/local/lib/python2.7/site-packages/sphinx/ext/autodoc/importer.py",
 line 152, in import_module
  2018-07-03 01:03:26.112140 | ubuntu-xenial | __import__(modname)
  2018-07-03 01:03:26.112514 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/midonet/neutron/db/bgp_db_midonet.py",
 line 22, in 
  2018-07-03 01:03:26.112701 | ubuntu-xenial | from 
neutron_dynamic_routing.db import bgp_db
  2018-07-03 01:03:26.113035 | ubuntu-xenial |   File 
"/home/zuul/.venv/local/lib/python2.7/site-packages/neutron_dynamic_routing/db/bgp_db.py",
 line 21, in 
  2018-07-03 01:03:26.113165 | ubuntu-xenial | from neutron.db import 
l3_dvr_db
  2018-07-03 01:03:26.113413 | ubuntu-xenial |   File 
"/home/zuul/.venv/local/lib/python2.7/site-packages/neutron/db/l3_dvr_db.py", 
line 41, in 
  2018-07-03 01:03:26.113524 | ubuntu-xenial | from neutron.db import l3_db
  2018-07-03 01:03:26.113819 | ubuntu-xenial |   File 
"/home/zuul/.venv/local/lib/python2.7/site-packages/neutron/db/l3_db.py", line 
51, in 
  2018-07-03 01:03:26.113955 | ubuntu-xenial | from neutron.extensions 
import l3
  2018-07-03 01:03:26.114260 | ubuntu-xenial |   File 
"/home/zuul/.venv/local/lib/python2.7/site-packages/neutron/extensions/l3.py", 
line 23, in 
  2018-07-03 01:03:26.114405 | ubuntu-xenial | from neutron.api.v2 import 
resource_helper
  2018-07-03 01:03:26.114730 | ubuntu-xenial |   File 
"/home/zuul/.venv/local/lib/python2.7/site-packages/neutron/api/v2/resource_helper.py",
 line 20, in 
  2018-07-03 01:03:26.114877 | ubuntu-xenial | from neutron.api import 
extensions
  2018-07-03 01:03:26.115169 | ubuntu-xenial |   File 
"/home/zuul/.venv/local/lib/python2.7/site-packages/neutron/api/extensions.py", 
line 32, in 
  2018-07-03 01:03:26.115336 | ubuntu-xenial | from neutron.plugins.common 
import constants as const
  2018-07-03 01:03:26.115669 | ubuntu-xenial |   File 
"/home/zuul/.venv/local/lib/python2.7/site-packages/neutron/plugins/common/constants.py",
 line 28, in 
  2018-07-03 01:03:26.115768 | ubuntu-xenial | 'router': constants.L3,
  2018-07-03 01:03:26.115894 | ubuntu-xenial | AttributeError: 'module' object 
has no attribute 'L3'

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1779801/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp