[Yahoo-eng-team] [Bug 1718125] [NEW] Missing some contents for glance install prerequisites

2017-09-19 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Description
===
Installed glance following
https://docs.openstack.org/glance/pike/install/install-rdo.html.
But the page is missing the content for creating the glance database.

'To create the database, complete these steps:
Use the database access client to connect to the database server as the root 
user:

$ mysql -u root -p'

Environment
===
$ git log
commit f8426378f892f250391b3d1004e27725d462481f
Author: OpenStack Proposal Bot 
Date: Fri Sep 15 07:16:27 2017 +

Imported Translations from Zanata

For more information about this automatic import see:
https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

Change-Id: Ie31a9ea996d8e42530a37ed9a9616cc44ebe65c8

** Affects: glance
 Importance: Undecided
 Assignee: Eric Xie (eric-xie)
 Status: New

-- 
Missing some contents for glance install prerequisites
https://bugs.launchpad.net/bugs/1718125
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to Glance.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718125] Re: Missing some contents for glance install prerequisites

2017-09-19 Thread Eric Xie
** Project changed: openstack-manuals => glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1718125

Title:
  Missing some contents for glance install prerequisites

Status in Glance:
  New

Bug description:
  Description
  ===
  Installed glance following
  https://docs.openstack.org/glance/pike/install/install-rdo.html.
  But the page is missing the content for creating the glance database.

  'To create the database, complete these steps:
  Use the database access client to connect to the database server as the root 
user:

  $ mysql -u root -p'

  Environment
  ===
  $ git log
  commit f8426378f892f250391b3d1004e27725d462481f
  Author: OpenStack Proposal Bot 
  Date: Fri Sep 15 07:16:27 2017 +

  Imported Translations from Zanata

  For more information about this automatic import see:
  https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

  Change-Id: Ie31a9ea996d8e42530a37ed9a9616cc44ebe65c8

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1718125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1717892] Re: hyperv: Driver does not report disk_available_least

2017-09-19 Thread Lucian Petrut
** Also affects: compute-hyperv
   Importance: Undecided
   Status: New

** Changed in: compute-hyperv
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1717892

Title:
  hyperv: Driver does not report disk_available_least

Status in compute-hyperv:
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===

  The Hyper-V driver does not report the disk_available_least field.
  Reporting the mentioned field can help mitigate the scheduling issue
  regarding hosts using the same shared storage.
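
  As a rough illustration of what reporting disk_available_least buys the
  scheduler, here is a minimal sketch (names and numbers are ours, not the
  driver's actual API):

  def build_resource_dict(total_gb, free_gb, allocated_gb):
      # local_gb_used only counts flavor allocations, which over-reports
      # free space on shared storage; disk_available_least exposes what
      # is actually still available on the volume.
      return {
          'local_gb': total_gb,
          'local_gb_used': allocated_gb,
          'disk_available_least': free_gb,
      }

  # With X=100 GB total, 1 GB really free and 40 GB allocated to flavors:
  # build_resource_dict(100, 1, 40)['disk_available_least'] == 1,
  # so a 2 GB flavor no longer fits.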

  Steps to reproduce
  ==

  Have a compute node having X GB total storage (reported as local_gb),
  out of which, just 1 GB is actually free (unreported). The compute
  node also reports local_gb_used, which is the sum of the allocated
  nova instances' flavor disk sizes. (local_gb > local_gb_used).

  Try to spawn an instance with a flavor disk size higher than 1 GB, e.g.
  a disk size of 2 GB, on the host.

  Expected result
  ===

  The instance should end up in the ERROR state; it should not be
  scheduled to the compute node.

  Actual result
  =

  The instance is active.

  Environment
  ===

  OpenStack Pike.
  Hyper-V 2012 R2 compute node.

  Logs & Configs
  ==

  [1] compute node's resource view VS. actual reported resource view: 
http://paste.openstack.org/show/621318/
  [2] compute node's resources (nova hypervisor-show), spawning an instance: 
http://paste.openstack.org/show/621319/

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1717892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718125] Re: Missing some contents for glance install prerequisites

2017-09-19 Thread Eric Xie
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Eric Xie (eric-xie)

** Summary changed:

- Missing some contents for glance install prerequisites
+ Missing some contents for install prerequisites

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718125

Title:
  Missing some contents for install prerequisites

Status in Glance:
  In Progress
Status in neutron:
  New

Bug description:
  Description
  ===
  Installed glance following
  https://docs.openstack.org/glance/pike/install/install-rdo.html.
  But the page is missing the content for creating the glance database.

  'To create the database, complete these steps:
  Use the database access client to connect to the database server as the root 
user:

  $ mysql -u root -p'

  Environment
  ===
  $ git log
  commit f8426378f892f250391b3d1004e27725d462481f
  Author: OpenStack Proposal Bot 
  Date: Fri Sep 15 07:16:27 2017 +

  Imported Translations from Zanata

  For more information about this automatic import see:
  https://docs.openstack.org/i18n/latest/reviewing-translation-import.html

  Change-Id: Ie31a9ea996d8e42530a37ed9a9616cc44ebe65c8

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1718125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718063] Re: logging change busted subport RPC push

2017-09-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/504524
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=458d38e2c73f49d2b137a8092b72c0ad9c69756b
Submitter: Jenkins
Branch:master

commit 458d38e2c73f49d2b137a8092b72c0ad9c69756b
Author: Kevin Benton 
Date:   Fri Sep 15 14:10:34 2017 -0700

Don't assume RPC push object has an ID

Test coverage is provided by scenario job. We just
need to get it voting. :)

Closes-Bug: #1718063
Change-Id: I85f638d22206e5621366475009519dfbc476


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718063

Title:
  logging change busted subport RPC push

Status in neutron:
  Fix Released

Bug description:
  Change I4499bb328f0aeb58fe583b83fb42cd2d26c1c4c1 incorrectly assumed
  that each resource would have an ID attribute. This is not true for
  subports so it was breaking push notifications for subports.
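
  A hedged sketch of the defensive pattern the fix implies (purely
  illustrative, not the neutron code): never assume a pushed resource
  object exposes an 'id' attribute, because subports do not.

  def describe_resource(resource):
      rid = getattr(resource, 'id', None)
      return "%s(id=%s)" % (type(resource).__name__,
                            rid if rid is not None else '<no id>')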

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1718063/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718190] [NEW] Instance could not be created/launched

2017-09-19 Thread hassan
Public bug reported:

While creating an instance through the CLI or the dashboard, I am getting
an Unexpected API Error in the logs. Please find the command and other
related openstack command outputs below.


Command and error


#openstack server create --flavor 73c42e28-0d38-4bc4-ad32-61b08474cdfa
--image Roka-New --nic net-id=a973a522-3893-4f0a-a643-95cae029744a
--security-group default --key-name mykey instance-1 --debug

POST call to compute for 
http://172.168.1.129:8774/v2/e6da88076d2847aca65e41b1d2093396/servers used 
request id req-c809d780-5615-4521-8647-bcbfe24b3f26
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-c809d780-5615-4521-8647-bcbfe24b3f26)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cliff/app.py", line 400, in 
run_subcommand
result = cmd.run(parsed_args)
  File "/usr/lib/python2.7/site-packages/osc_lib/command/command.py", line 41, 
in run
return super(Command, self).run(parsed_args)
  File "/usr/lib/python2.7/site-packages/cliff/display.py", line 112, in run
column_names, data = self.take_action(parsed_args)
  File "/usr/lib/python2.7/site-packages/openstackclient/compute/v2/server.py", 
line 627, in take_action
server = compute_client.servers.create(*boot_args, **boot_kwargs)
  File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 1416, 
in create
**boot_kwargs)
  File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 779, 
in _boot
return_raw=return_raw, **kwargs)
  File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 361, in 
_create
resp, body = self.api.client.post(url, body=body)
  File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 223, 
in post
return self.request(url, 'POST', **kwargs)
  File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 80, in 
request
raise exceptions.from_response(resp, body, url, method)


other related command results
_

[root@controller ~]# . /home/admin-openrc

[root@controller ~]# openstack flavor list
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name    | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0  | m1.nano |  64 |    1 |         0 |     1 | True      |
+----+---------+-----+------+-----------+-------+-----------+

[root@controller log]# openstack project list
+----------------------------------+----------+
| ID                               | Name     |
+----------------------------------+----------+
| 2cf19b21164e432fb84cfed281481737 | service  |
| 82df21eecb184130b15fbf337e3fcfbf | Roka2017 |
| e6da88076d2847aca65e41b1d2093396 | demo     |
| f045d9ebb6f64ef0bfa580b3bc7a312e | admin    |
+----------------------------------+----------+

[root@controller log]# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 3cb5b804-f585-4455-92a4-360cefbb96f8 | outside  | d014432c-ec96-48c8-9a83-adb69d18d943 |
| a973a522-3893-4f0a-a643-95cae029744a | demo-net | ec178c3e-8f99-428c-871c-3bb02c572d9d |
+--------------------------------------+----------+--------------------------------------+

[root@controller log]# openstack keypair list
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | e9:74:b4:0b:bb:7f:68:5f:8d:ab:9e:ed:18:f5:7e:58 |
+-------+-------------------------------------------------+

[root@controller log]# openstack image list
+--------------------------------------+----------+--------+
| ID                                   | Name     | Status |
+--------------------------------------+----------+--------+
| c7f1f1b3-4ad8-4eb1-ac31-5e1171fa4593 | New-one  | active |
| 3195c8b9-cddb-40cd-be63-e09117308a84 | RHEL7    | active |
| fdbb6f71-e22f-4fb0-bdf6-3826a0a729c8 | Roka-New | active |
+--------------------------------------+----------+--------+


[root@controller ~]# . /home/demo-openrc
[root@controller ~]# openstack flavor list

+--------------------------------------+-----------+------+------+-----------+-------+-----------+
| ID                                   | Name      |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+-----------+------+------+-----------+-------+-----------+
| 0                                    | m1.nano   |   64 |    1 |         0 |     1 | True      |
| 73c42e28-0d38-4bc4-ad32-61b08474cdfa | flavor-1G | 1024 |    1 |         0 |     1 | False     |
+--------------------------------------+-----------+------+------+-----------+-------+-----------+

[Yahoo-eng-team] [Bug 1715545] Re: outdated URLs in docs

2017-09-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/502017
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=2fce8a1396816a3917606b8d736f3d5b57ae7f56
Submitter: Jenkins
Branch:master

commit 2fce8a1396816a3917606b8d736f3d5b57ae7f56
Author: Takashi NATSUME 
Date:   Fri Sep 8 17:57:38 2017 +0900

Fix the ocata config-reference URLs

Replace the ocata config-reference URLs with
URLs in each project repo.

Change-Id: I48d7c77a6e0eaaf0efe66f848f45ae99007577e1
Closes-Bug: #1715545


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715545

Title:
  outdated URLs in docs

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The nova repo contains links to the ocata config-reference like:

  https://docs.openstack.org/ocata/config-
  reference/compute/hypervisors.html

  For pike, the config-reference has moved to the project repos as part
  of the large reorg. All these links should be changed to the new
  location.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715545/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1717892] Re: hyperv: Driver does not report disk_available_least

2017-09-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/504905
Committed: 
https://git.openstack.org/cgit/openstack/compute-hyperv/commit/?id=6479f5539d5dc9e60467abc6bafdc6915320ee13
Submitter: Jenkins
Branch:master

commit 6479f5539d5dc9e60467abc6bafdc6915320ee13
Author: Claudiu Belu 
Date:   Mon Sep 18 13:19:59 2017 +0300

hyperv: report disk_available_least field

Reporting the disk_available_least field can help in making sure
the scheduler doesn't pick a host that cannot fit a specific flavor's
disk.

The reported local_gb_used is calculated based on the instances spawned
by nova on a certain compute node, and might not reflect the actual
reality, especially on shared storage scenarios.

Change-Id: I20992acef119f11f6584094438043a760fc4a287
Closes-Bug: #1717892


** Changed in: compute-hyperv
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1717892

Title:
  hyperv: Driver does not report disk_available_least

Status in compute-hyperv:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===

  The Hyper-V driver does not report the disk_available_least field.
  Reporting the mentioned field can help mitigate the scheduling issue
  regarding hosts using the same shared storage.

  Steps to reproduce
  ==

  Have a compute node having X GB total storage (reported as local_gb),
  out of which, just 1 GB is actually free (unreported). The compute
  node also reports local_gb_used, which is the sum of the allocated
  nova instances' flavor disk sizes. (local_gb > local_gb_used).

  Try to spawn an instance with a flavor disk size higher than 1 GB, e.g.
  a disk size of 2 GB, on the host.

  Expected result
  ===

  The instance should end up in the ERROR state; it should not be
  scheduled to the compute node.

  Actual result
  =

  The instance is active.

  Environment
  ===

  OpenStack Pike.
  Hyper-V 2012 R2 compute node.

  Logs & Configs
  ==

  [1] compute node's resource view VS. actual reported resource view: 
http://paste.openstack.org/show/621318/
  [2] compute node's resources (nova hypervisor-show), spawning an instance: 
http://paste.openstack.org/show/621319/

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1717892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718212] [NEW] Compute resource tracker does not report correct information for drivers such as vSphere

2017-09-19 Thread Jay Jahns
Public bug reported:

The compute resource tracker is ignoring values coming from compute
drivers such as vSphere. Specifically, the resource tracker is accepting
disk_total from the driver, but not disk_available and disk_used.

We've checked the vSphere driver, and the driver is indeed reporting the
correct values for this.


data["vcpus"] = stats['vcpus']
data["disk_total"] = capacity / units.Gi
data["disk_available"] = freespace / units.Gi
data["disk_used"] = data["disk_total"] - data["disk_available"]
data["host_memory_total"] = stats['mem']['total']
data["host_memory_free"] = stats['mem']['free']
data["hypervisor_type"] = about_info.name
data["hypervisor_version"] = versionutils.convert_version_to_int(
    str(about_info.version))
data["hypervisor_hostname"] = self._host_name
data["supported_instances"] = [
    (arch.I686, hv_type.VMWARE, vm_mode.HVM),
    (arch.X86_64, hv_type.VMWARE, vm_mode.HVM)]


It looks like a patch was made to the resource tracker @
https://review.openstack.org/#/c/126237/ but the change was voted down
and not approved.

Since vSphere is reporting the correct data, maybe someone can tell us
why Nova is ignoring these values and provide us with some guidance on
correcting the problem in the driver, if that's where the change is
needed.
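
A rough sketch of the behaviour being asked for (ours, not nova's resource
tracker code): prefer the usage figures the driver reports and only fall
back to flavor-based accounting when the driver does not supply them.

def pick_disk_usage(driver_data, allocated_gb):
    total = driver_data['disk_total']
    used = driver_data.get('disk_used')
    if used is None:
        used = allocated_gb  # fall back to summing flavor disk sizes
    return {'local_gb': total,
            'local_gb_used': used,
            'disk_available_least': total - used}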

This is affecting our production environment because I use upstream code
and I have clusters that get overcommitted as a direct result of the
scheduler seeing that no disk is "used".

This should get extreme priority since it affects the ability to launch
things!!!

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1718212

Title:
  Compute resource tracker does not report correct information for
  drivers such as vSphere

Status in OpenStack Compute (nova):
  New

Bug description:
  The compute resource tracker is ignoring values coming from compute
  drivers such as vSphere. Specifically, the resource tracker is
  accepting disk_total from the driver, but not disk_available and
  disk_used.

  We've checked the vSphere driver, and the driver is indeed reporting
  the correct values for this.

  
  data["vcpus"] = stats['vcpus']
  data["disk_total"] = capacity / units.Gi
  data["disk_available"] = freespace / units.Gi
  data["disk_used"] = data["disk_total"] - data["disk_available"]
  data["host_memory_total"] = stats['mem']['total']
  data["host_memory_free"] = stats['mem']['free']
  data["hypervisor_type"] = about_info.name
  data["hypervisor_version"] = versionutils.convert_version_to_int(
      str(about_info.version))
  data["hypervisor_hostname"] = self._host_name
  data["supported_instances"] = [
      (arch.I686, hv_type.VMWARE, vm_mode.HVM),
      (arch.X86_64, hv_type.VMWARE, vm_mode.HVM)]
  

  It looks like a patch was made to the resource tracker @
  https://review.openstack.org/#/c/126237/ but the change was voted down
  and not approved.

  Since vSphere is reporting the correct data, maybe someone can tell us
  why Nova is ignoring these values and provide us with some guidance on
  correcting the problem in the driver, if that's where the change is
  needed.

  This is affecting our production environment because I use upstream
  code and I have clusters that get overcommitted as a direct result of
  the scheduler seeing that no disk is "used".

  This should get extreme priority since it affects the ability to
  launch things!!!

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1718212/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1668410] Re: [SRU] Infinite loop trying to delete deleted HA router

2017-09-19 Thread Ryan Beisner
This bug was fixed in the package neutron - 2:8.4.0-0ubuntu5~cloud0
---

 neutron (2:8.4.0-0ubuntu5~cloud0) trusty-mitaka; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 neutron (2:8.4.0-0ubuntu5) xenial; urgency=medium
 .
   * d/p/l3-ha-don-t-send-routers-without-_ha_interface.patch: Backport fix for
 l3 ha: don't send routers without '_ha_interface' (LP: #1668410)


** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1668410

Title:
  [SRU] Infinite loop trying to delete deleted HA router

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in neutron:
  In Progress
Status in OpenStack Security Advisory:
  Won't Fix
Status in neutron package in Ubuntu:
  Invalid
Status in neutron source package in Xenial:
  Fix Released

Bug description:
  [Description]

  When deleting a router the logfile is filled up. See full log -
  http://paste.ubuntu.com/25429257/

  I can see the error 'Error while deleting router
  c0dab368-5ac8-4996-88c9-f5d345a774a6' occured 3343386 times from
  _safe_router_removed() [1]:

  $ grep -r 'Error while deleting router c0dab368-5ac8-4996-88c9-f5d345a774a6' 
|wc -l
  3343386

  This _safe_router_removed() is invoked at L488 [2]. If
  _safe_router_removed() fails, it returns False, and then
  self._resync_router(update) [3] causes _safe_router_removed() to be run
  again and again. That is why we saw so many 'Error while deleting
  router X' errors.

  [1] 
https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L361
  [2] 
https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L488
  [3] 
https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L457
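
  A minimal sketch of that retry loop (callables stand in for the agent
  methods; this is not the actual agent code):

  def process_router_update(update, safe_router_removed, resync_router):
      if update.get('removed'):
          if not safe_router_removed(update['router_id']):
              # The failed update is re-queued, so the same removal is
              # attempted again and again, flooding the log.
              resync_router(update)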

  [Test Case]

  This is caused by a race condition between the neutron server and the
  L3 agent: after the neutron server deletes the HA interfaces, the L3
  agent may sync an HA router without HA interface info (one just needs
  to trigger L708 [1] after the HA interfaces are deleted and before the
  HA router is deleted). If we delete the HA router at that moment, the
  problem occurs. So the test case we designed is as below:

  1, First update fixed package, and restart neutron-server by 'sudo
  service neutron-server restart'

  2, Create ha_router

  neutron router-create harouter --ha=True

  3, Delete ports associated with ha_router before deleting ha_router

  neutron router-port-list harouter |grep 'HA port' |awk '{print $2}' |xargs -l 
neutron port-delete
  neutron router-port-list harouter

  4, Update ha_router to trigger l3-agent to update ha_router info
  without ha_port into self.router_info

  neutron router-update harouter --description=test

  5, Delete ha_router this time

  neutron router-delete harouter

  [1] https://github.com/openstack/neutron/blob/mitaka-
  eol/neutron/db/l3_hamode_db.py#L708

  [Regression Potential]

  With the fix [1], neutron-server no longer returns an HA router that is
  missing its ha_ports, so L488 no longer has a chance to call
  _safe_router_removed() for such a router. The problem is therefore fixed
  at its root and the regression potential is minimal.

  Besides, the fix is already in the mitaka-eol branch, and the mitaka
  neutron-server package is based on neutron 8.4.0, so it needs to be
  backported to xenial and mitaka.

  $ git tag --contains 8c77ee6b20dd38cc0246e854711cb91cffe3a069
  mitaka-eol

  [1] https://review.openstack.org/#/c/440799/2/neutron/db/l3_hamode_db.py
  [2] 
https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L488

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1668410/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718226] [NEW] bdm is wastefully loaded for versioned instance notifications

2017-09-19 Thread Balazs Gibizer
Public bug reported:

The block device mapping is included in the instance versioned
notifications with I18e7483ec9a484a660e1d306fdc0986e1d5f952b. This means
that, if so configured, nova will load the BDM from the DB for every
instance notification. This is wasteful when the BDM is already loaded
in the context of the call from where the notification is emitted. In
this case that BDM should be reused to remove the unnecessary DB load.
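
A minimal sketch of the intended reuse (function and argument names are
hypothetical, not nova's API): accept an already-loaded BDM list and hit
the DB only when the caller has nothing to pass.

def emit_instance_notification(load_bdms, instance_uuid, bdms=None):
    if bdms is None:
        bdms = load_bdms(instance_uuid)  # the DB round trip we want to avoid
    return {'instance_uuid': instance_uuid, 'bdm_count': len(bdms)}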

** Affects: nova
 Importance: Undecided
 Assignee: Balazs Gibizer (balazs-gibizer)
 Status: In Progress


** Tags: notifications

** Changed in: nova
 Assignee: (unassigned) => Balazs Gibizer (balazs-gibizer)

** Tags added: notifications

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1718226

Title:
  bdm is wastefully loaded for versioned instance notifications

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The block device mapping is included in the instance versioned
  notifications with I18e7483ec9a484a660e1d306fdc0986e1d5f952b. This
  means that, if so configured, nova will load the BDM from the DB for
  every instance notification. This is wasteful when the BDM is already
  loaded in the context of the call from where the notification is
  emitted. In this case that BDM should be reused to remove the
  unnecessary DB load.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1718226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718235] [NEW] ovs_bridge native doesn't reraise RuntimeError if _get_dp_by_dpid fails

2017-09-19 Thread Jakub Libosvar
Public bug reported:

Typically the DPID starts with zeros, while the code trims the leading
zeros; the comparison then fails, masking the real OVS failure.
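
A minimal illustration of the comparison pitfall (values are made up):
OVS reports zero-padded hex datapath IDs, so a naive string comparison
against a trimmed value fails even when the IDs actually match.

reported = "0000f2a3b4c5d6e7"
trimmed = "f2a3b4c5d6e7"

assert reported != trimmed                    # string compare: mismatch
assert int(reported, 16) == int(trimmed, 16)  # numeric compare: match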

** Affects: neutron
 Importance: Undecided
 Assignee: Jakub Libosvar (libosvar)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718235

Title:
  ovs_bridge native doesn't reraise RuntimeError if _get_dp_by_dpid
  fails

Status in neutron:
  In Progress

Bug description:
  Typically the DPID starts with zeros, while the code trims the leading
  zeros; the comparison then fails, masking the real OVS failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1718235/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1704914] Re: Extend Quota API to report usage statistics

2017-09-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/502174
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7adb297fed6865a4e0bfb498d7e8d019c588c522
Submitter: Jenkins
Branch:master

commit 7adb297fed6865a4e0bfb498d7e8d019c588c522
Author: Boden R 
Date:   Fri Sep 8 14:30:36 2017 -0600

doc for quota details extension

This patch adds a blurb about the quota detail API extension to wrap
up the doc aspect of this feature.

Change-Id: I3de3a686d70d74e10d01880c1ec0f4dd5b047038
Closes-Bug: #1704914


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1704914

Title:
  Extend Quota API to report usage statistics

Status in neutron:
  Fix Released

Bug description:
  https://review.openstack.org/383673
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit a8109af65f275ec1b2e725695bf3bb9976f22ae3
  Author: Sergey Belous 
  Date:   Fri Oct 7 14:29:07 2016 +0300

  Extend Quota API to report usage statistics
  
  Extend existing quota api to report a quota set. The quota set
  will contain a set of resources and its corresponding reservation,
  limits and in_use count for each tenant.
  
  DocImpact:Documentation describing the new API as well as the new
  information that it exposes.
  APIImpact
  
  Co-Authored-By: Prince Boateng
  Change-Id: Ief2a6a4d2d7085e2a9dcd901123bc4fe6ac7ca22
  Related-bug: #1599488
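
  For orientation, the detailed quota payload has roughly this shape; the
  resource names and numbers below are placeholders, and the exact key
  names are defined in the api-ref added by this patch:

  detailed_quota = {
      "quota": {
          "network": {"limit": 10, "used": 3, "reserved": 0},
          "port": {"limit": 50, "used": 7, "reserved": 1},
      }
  }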

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1704914/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714068] Re: AttributeError in get_device_details when segment=None

2017-09-19 Thread Brian Haley
Don't know why the Closes-bug in the commit message didn't trigger an
update here, but this merged to master:

https://review.openstack.org/#/c/501852/

And this is proposed to Pike:

https://review.openstack.org/#/c/504515/

** Changed in: neutron
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714068

Title:
  AttributeError in get_device_details when segment=None

Status in neutron:
  In Progress

Bug description:
  Found an AttributeError in this tempest log file:

  http://logs.openstack.org/67/347867/52/check/gate-tempest-dsvm-
  neutron-dvr-ubuntu-xenial/64d8d9a/logs/screen-q-agt.txt.gz

  Aug 29 18:57:12.585412 ubuntu-xenial-rax-iad-10687932 
neutron-openvswitch-agent[28658]: ERROR neutron.agent.rpc [None 
req-0de9bdcb-9659-4971-9e03-4e2849af0169 None None] Failed to get details for 
device 4fe2030b-4af8-47e7-8dd3-52b544918e16: AttributeError: 'NoneType' object 
has no attribute 'network_type'
  Aug 29 18:57:12.585578 ubuntu-xenial-rax-iad-10687932 
neutron-openvswitch-agent[28658]: ERROR neutron.agent.rpc Traceback (most 
recent call last):
  Aug 29 18:57:12.585712 ubuntu-xenial-rax-iad-10687932 
neutron-openvswitch-agent[28658]: ERROR neutron.agent.rpc   File 
"/opt/stack/new/neutron/neutron/agent/rpc.py", line 216, in 
get_devices_details_list_and_failed_devices
  Aug 29 18:57:12.585858 ubuntu-xenial-rax-iad-10687932 
neutron-openvswitch-agent[28658]: ERROR neutron.agent.rpc 
self.get_device_details(context, device, agent_id, host))
  Aug 29 18:57:12.585987 ubuntu-xenial-rax-iad-10687932 
neutron-openvswitch-agent[28658]: ERROR neutron.agent.rpc   File 
"/opt/stack/new/neutron/neutron/agent/rpc.py", line 244, in get_device_details
  Aug 29 18:57:12.586162 ubuntu-xenial-rax-iad-10687932 
neutron-openvswitch-agent[28658]: ERROR neutron.agent.rpc 'network_type': 
segment.network_type,
  Aug 29 18:57:12.586289 ubuntu-xenial-rax-iad-10687932 
neutron-openvswitch-agent[28658]: ERROR neutron.agent.rpc AttributeError: 
'NoneType' object has no attribute 'network_type'
  Aug 29 18:57:12.586414 ubuntu-xenial-rax-iad-10687932 
neutron-openvswitch-agent[28658]: ERROR neutron.agent.rpc

  From the same log just a second earlier, we can see that the port was
  updated, and the 'segment' field of the port binding_level was
  cleared:

  Aug 29 18:57:11.569684 ubuntu-xenial-rax-iad-10687932 neutron-
  openvswitch-agent[28658]: DEBUG neutron.agent.resource_cache [None
  req-1d914886-cbd7-4568-9290-bcbc98bd7ba6 None None] Resource Port
  4fe2030b-4af8-47e7-8dd3-52b544918e16 updated (revision_number 4->5).
  Old fields: {'status': u'DOWN', 'binding_levels':
  [PortBindingLevel(driver='openvswitch',host='ubuntu-xenial-rax-
  iad-10687932',level=0,port_id=4fe2030b-
  4af8-47e7-8dd3-52b544918e16,segment=NetworkSegment(ef53b2b6-f51e-495b-
  9dda-b174fbed1122))]} New fields: {'status': u'ACTIVE',
  'binding_levels': [PortBindingLevel(driver='openvswitch',host='ubuntu-
  xenial-rax-iad-10687932',level=0,port_id=4fe2030b-
  4af8-47e7-8dd3-52b544918e16,segment=None)]} {{(pid=28658)
  record_resource_update
  /opt/stack/new/neutron/neutron/agent/resource_cache.py:185}}

  The code in question doesn't check if 'segment' is valid before using
  it, and I'm not sure if such a simple change as that is appropriate to
  fix it.
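
  A hedged sketch of that simple guard (ours, not the merged fix): treat a
  binding level with segment=None as a device without an active binding
  instead of dereferencing segment.network_type.

  def device_details(device_id, segment):
      if segment is None:
          return {'device': device_id, 'no_active_binding': True}
      return {'device': device_id, 'network_type': segment.network_type}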

  This code was added in the push notifications changes:

  commit c3db9d6b0b990da6664955e4ce1c72758dc600e1
  Author: Kevin Benton 
  Date:   Sun Jan 22 17:01:47 2017 -0800

  Use push-notificates for OVSPluginAPI
  
  Replace the calls to the OVSPluginAPI info retrieval functions
  with reads directly from the push notification cache.
  
  Since we now depend on the cache for the source of truth, the
  'port_update'/'port_delete'/'network_update' handlers are configured
  to be called whenever the cache receives a corresponding resource update.
  The OVS agent will no longer subscribe to topic notifications for ports
  or networks from the legacy notification API.
  
  Partially-Implements: blueprint push-notifications
  Change-Id: Ib2234ec1f5d328649c6bb1c3fe07799d3e351f48

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1564887] Re: [api-ref]Neutron v2 Layer-3 extension - `routes` attribute not helpful

2017-09-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/503052
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=1b4ce7a7e3a5b1d0c46e629cc9f939ee318b1769
Submitter: Jenkins
Branch:master

commit 1b4ce7a7e3a5b1d0c46e629cc9f939ee318b1769
Author: Boden R 
Date:   Tue Sep 12 09:54:33 2017 -0600

finish routes api ref

Our API ref for 'routes' was missing some sample route objects in
request/response JSON samples, as well as documenting the fact that
routes can be updated with PUT requests on routers.

Change-Id: I50f6d8193163586ab6f4f3bf61830fc9407ff7a2
Closes-Bug: #1564887


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1564887

Title:
  [api-ref]Neutron v2 Layer-3 extension - `routes` attribute not helpful

Status in neutron:
  Fix Released

Bug description:
  In all the router responses, you see a `routes` attribute. This is the
  explanation for it:

  routes   plain   xsd:list   The extra routes configuration for L3
  router.

  1. What exactly is the structure of this value? "xsd:list" means nothing to 
me. Are there nested objects, nested strings?
  2. The description doesn't explain how a route functions.

  Also, how do routes get set on routers? It's not included as an input
  param for POST or PUT operations.
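
  For what it's worth, the extra routes value is a list of
  destination/nexthop pairs set on the router via a PUT update; a minimal
  illustration with placeholder addresses:

  update_body = {
      "router": {
          "routes": [
              {"destination": "10.0.3.0/24", "nexthop": "10.0.0.13"},
          ]
      }
  }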

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1564887/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641250] Re: NG details view route should have different name

2017-09-19 Thread Graham Hayes
** Also affects: designate-dashboard
   Importance: Undecided
   Status: New

** Changed in: designate-dashboard
   Importance: Undecided => Critical

** Changed in: designate-dashboard
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1641250

Title:
  NG details view route should have different name

Status in Designate Dashboard:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Magnum UI:
  Fix Released
Status in senlin-dashboard:
  Fix Released
Status in UI Cookiecutter:
  Fix Released
Status in Zun UI:
  Fix Released

Bug description:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/static/app/core/core.module.js#L56

  path includes the name "project" but detail views can also come from
  "admin" and "identity". Change the name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate-dashboard/+bug/1641250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608351] Re: Volume Type Extra Spec key-value creation doesn't validate blank spaces

2017-09-19 Thread Gary W. Smith
When I try this, horizon rejects the creation, indicating that the field
is required (for both the extra spec key and value). Closing as this is
not reproducible.

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1608351

Title:
  Volume Type Extra Spec key-value creation doesn't validate blank
  spaces

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Volume Type Extra Spec key-value creation doesn't validate blank spaces.
  Blank spaces should not be allowed as key-value pair values. There must
  be validation to prompt the user when invalid values are entered.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1608351/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606123] Re: Connection Error in Resource Usage Panel

2017-09-19 Thread Gary W. Smith
Ceilometer is no longer part of horizon. Closing.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1606123

Title:
  Connection Error in Resource Usage Panel

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The Resource-Usage panel some times throws the following error

  ConnectionError: HTTPSConnectionPool(host='public.fuel.local',
  port=8777): Max retries exceeded with url:
  
/v2/meters/disk.write.bytes/statistics?q.field=project_id&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.field=timestamp&q.op=eq&q.op=ge&q.op=le&q.op=ge&q.op=le&q.op=ge&q.op=le&q.op=ge&q.op=le&q.op=ge&q.op=le&q.op=ge&q.op=le&q.op=ge&q.op=le&q.op=ge&q.op=le&q.op=ge&q.op=le&q.op=ge&q.op=le&q.op=ge&q.op=le&q.op=ge&q.op=le&q.op=ge&q.op=le&q.op=ge&q.op=le&q.op=ge&q.op=le&q.op=ge&q.op=le&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.
 
type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.type=&q.value=ddf0b064e5b045c4b064ec7344100f81&q.value=2016-07-21+05%3A46%3A26.965056%2B00%3A00&q.value=2016-07-22+05%3A46%3A26.965056%2B00%3A00&q.value=2016-07-21+05%3A56%3A08.129089%2B00%3A00&q.value=2016-07-22+05%3A56%3A08.129089%2B00%3A00&q.value=2016-07-21+05%3A57%3A02.799849%2B00%3A00&q.value=2016-07-22+05%3A57%3A02.799849%2B00%3A00&q.value=2016-07-15+05%3A57%3A30.502598%2B00%3A00&q.value=2016-07-22+05%3A57%3A30.502598%2B00%3A00&q.value=2016-07-15+05%3A58%3A12.655724%2B00%3A00&q.value=2016-07-22+05%3A58%3A12.655724%2B00%3A00&q.value=2016-07-15+05%3A58%3A26.069241%2B00%3A00&q.value=2016-07-22+05%3A58%3A26.069241%2B00%3A00&q.value=2016-07-15+05%3A58%3A57.687257%2B00%3A00&q.value=2016-07-22+05%3A58%3A57.687257%2B00%3A00&q.value=2016-07-15+05%3A59%3A11.241494%2B00%3A00&q.value=2016-07-22+05%3A59%3A11.241494%2B00%3A00&q.valu
 
e=2016-07-15+05%3A59%3A41.254137%2B00%3A00&q.value=2016-07-22+05%3A59%3A41.254137%2B00%3A00&q.value=2016-07-15+06%3A02%3A30.960500%2B00%3A00&q.value=2016-07-22+06%3A02%3A30.960500%2B00%3A00&q.value=2016-07-21+06%3A24%3A35.926004%2B00%3A00&q.value=2016-07-22+06%3A24%3A35.926004%2B00%3A00&q.value=2016-07-15+06%3A24%3A55.701449%2B00%3A00&q.value=2016-07-22+06%3A24%3A55.701449%2B00%3A00&q.value=2016-07-24+05%3A13%3A02.620998%2B00%3A00&q.value=2016-07-25+05%3A13%3A02.620998%2B00%3A00&q.value=2016-07-18+05%3A16%3A35.441166%2B00%3A00&q.value=2016-07-25+05%3A16%3A35.441166%2B00%3A00&q.value=2016-07-18+05%3A17%3A05.966066%2B00%3A00&q.value=2016-07-25+05%3A17%3A05.966066%2B00%3A00&q.value=2016-07-24+06%3A13%3A42.435985%2B00%3A00&q.value=2016-07-25+06%3A13%3A42.435985%2B00%3A00&period=86400
  (Caused by : [Errno 110] Connection timed out)

  
  Although the CLI command ceilometer meter-list works fine, most of the
  time this error leads to the "something went wrong" page. I think this
  error will most commonly be seen when there is a lot of data to be
  fetched.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1606123/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553904] Re: cloud_admin not work in Horizon

2017-09-19 Thread Gary W. Smith
Since this is old and appears to be a configuration issue, marking as
invalid. If you can still reproduce this on a current version when
following the procedures outlined in the comments, feel free to reopen
the bug.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1553904

Title:
  cloud_admin not work in Horizon

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  1) 
  I downloaded and built a Horizon environment from GitHub:
  https://github.com/openstack/horizon

  2) 
  I configured my Horizon following the URL below:
  https://wiki.openstack.org/wiki/Horizon/DomainWorkFlow

  If my keystone uses the original policy, it's okay.
  But when I change the policy to policy.v3cloudsample.json,
  the domain/project/user/group functions do not work.

  It seems the auth_token is not domain-scoped.

  3)
  In this commit https://review.openstack.org/#/c/141153/
  the problem looks resolved.
  Why do I get this failure? What did I miss?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1553904/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1717893] Re: unable to retrieve the ports

2017-09-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/504916
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=61ad9f69bde5180af195a254038ca8280a0ffbb9
Submitter: Jenkins
Branch:master

commit 61ad9f69bde5180af195a254038ca8280a0ffbb9
Author: Akihiro Motoki 
Date:   Tue Sep 19 18:01:52 2017 +

Fix a bug to unable to retrieve ports when no trunk ext

When neutron trunk extension is not enabled, we cannot retrieve
network ports for a new instance. The current version of
port_list_with_trunk_types assumes the trunk extension is always
available, so if it is not enabled we cannot retrieve network ports.
This commit changes the logic to check whether the trunk extension
is enabled and if not it does not retrieve trunk information.

Change-Id: I2aa476790820f9512fe4728e0a806bd543021b0b
Closes-Bug: #1717893


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1717893

Title:
  unable to retrieve the ports

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Pike Master release,

  When you go to Compute --> Instances and try to Launch Instance, you
  see this error message on the dashboard:

  Error: Unable to retrieve the ports

  https://review.openstack.org/#/c/504916/
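
  A rough sketch of the approach taken by the fix (helper names are ours,
  not Horizon's api.neutron functions): only query trunks when the 'trunk'
  extension is enabled, so clouds without it still get a plain port list.

  def ports_for_instance(client, network_id):
      ports = client.list_ports(network_id=network_id)
      if not client.is_extension_supported('trunk'):
          return ports
      trunk_parents = {t['port_id'] for t in client.list_trunks()}
      for port in ports:
          port['is_trunk_parent'] = port['id'] in trunk_parents
      return ports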

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1717893/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580746] Re: Wrong Hypervisor is being returned QEMU instead of KVM

2017-09-19 Thread Gary W. Smith
** Changed in: horizon
   Status: New => Opinion

** Changed in: horizon
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1580746

Title:
  Wrong Hypervisor is being returned QEMU instead of KVM

Status in OpenStack Dashboard (Horizon):
  Opinion
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  nova.conf is configured with virt_type=kvm

  When running hypervisor-show XXX | grep hypervisor_type

  The result is always presented as QEMU

  >>> | hypervisor_type   | QEMU

  I gather that the same is also presented in Horizon on the
  Hypervisors tab.

  More information can be found in this thread

  http://lists.openstack.org/pipermail/openstack-
  operators/2016-May/010310.html

  I think this is visible on each and every version of OpenStack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1580746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527121] Re: cells: instance can not evacuate in cells

2017-09-19 Thread Gary W. Smith
Moving to nova, as this bug report appears unrelated to horizon

** Project changed: horizon => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1527121

Title:
  cells: instance can not evacuate in cells

Status in OpenStack Compute (nova):
  New

Bug description:
  1. version
  kilo 2015.1.0

  There is one API cell, two child cells, and four compute nodes; the API
  cell has no compute node.
  I boot an instance and evacuate it, and the following error is returned:
  [root@apicell ~(keystone_admin)]# nova evacuate test_vm5
  ERROR (NotFound): The resource could not be found. (HTTP 404) (Request-ID: 
req-63786e5b-1199-4e2a-a5c0-a1efde0398a8)

  the error log in api cell nova-api.log in:
  http://paste.openstack.org/show/482160/

  The reason is that the nova API in the API cell first checks the compute
  node's nova-compute status before evacuating the instance: it calls the
  db.service_get_by_compute_host function, but the API cell DB has no
  compute node service record, so the evacuation fails:
   Caught error: Compute host CL-SBCJ-5-3-4 could not be found

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1527121/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718282] [NEW] Port update failed with 500 when trying to recreate default security group

2017-09-19 Thread Ihar Hrachyshka
Public bug reported:

On port update, default security group may be missing. In this case,
port update will first create the group, then proceed to port object.
The problem is that when it recreates the group, it uses AFTER_UPDATE
event, which contradicts the transactional semantics of
_ensure_default_security_group_handler.
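
A conceptual sketch of why this matters (plain callables below, not
neutron's callback registry): AFTER_* handlers assume the change is already
committed, so they must run only after the surrounding transaction closes.

def create_group_and_notify(run_in_transaction, insert_group, notify):
    created = {}

    def _txn(session):
        created['group'] = insert_group(session)
        # Wrong place to notify(): the transaction has not committed yet,
        # and an AFTER_CREATE handler may expire objects the outer session
        # still relies on.

    run_in_transaction(_txn)
    notify('security_group', 'AFTER_CREATE', created['group'])  # after commit
    return created['group']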

Logs wise, we get this in neutron-server log:

Sep 14 12:03:03.604813 ubuntu-xenial-2-node-rax-dfw-10932230 neutron-
server[30503]: WARNING neutron.plugins.ml2.ovo_rpc [None req-
71600acd-c114-4dbd-a599-a9126fae14fb tempest-
NetworkDefaultSecGroupTest-1846858447 tempest-
NetworkDefaultSecGroupTest-1846858447] This handler is supposed to
handle AFTER events, as in 'AFTER it's committed', not BEFORE. Offending
resource event: security_group, after_create. Location:

And then later:

Sep 14 12:03:04.038599 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 1332, in 
update_port
Sep 14 12:03:04.038761 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation 
context.session.expire(port_db)
Sep 14 12:03:04.038924 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1533, 
in expire
Sep 14 12:03:04.039083 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation 
self._expire_state(state, attribute_names)
Sep 14 12:03:04.039243 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1536, 
in _expire_state
Sep 14 12:03:04.039406 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation 
self._validate_persistent(state)
Sep 14 12:03:04.041280 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1976, 
in _validate_persistent
Sep 14 12:03:04.041453 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation 
state_str(state))
Sep 14 12:03:04.041658 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation 
InvalidRequestError: Instance '' is not persistent 
within this Session

Logs can be found in: http://logs.openstack.org/21/504021/1/check/gate-
tempest-dsvm-neutron-dvr-multinode-scenario-ubuntu-xenial-
nv/c6647c4/logs/screen-q-svc.txt.gz#_Sep_14_12_03_04_041658

** Affects: neutron
 Importance: High
 Assignee: Kevin Benton (kevinbenton)
 Status: Confirmed


** Tags: db gate-failure sg-fw

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

** Tags added: gate-failure sg-fw

** Tags added: db

** Bug watch added: Red Hat Bugzilla #1493175
   https://bugzilla.redhat.com/show_bug.cgi?id=1493175

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718282

Title:
  Port update failed with 500 when trying to recreate default security
  group

Status in neutron:
  Confirmed

Bug description:
  On port update, default security group may be missing. In this case,
  port update will first create the group, then proceed to port object.
  The problem is that when it recreates the group, it uses AFTER_UPDATE
  event, which contradicts the transactional semantics of
  _ensure_default_security_group_handler.

  Logs wise, we get this in neutron-server log:

  Sep 14 12:03:03.604813 ubuntu-xenial-2-node-rax-dfw-10932230 neutron-
  server[30503]: WARNING neutron.plugins.ml2.ovo_rpc [None req-
  71600acd-c114-4dbd-a599-a9126fae14fb tempest-
  NetworkDefaultSecGroupTest-1846858447 tempest-
  NetworkDefaultSecGroupTest-1846858447] This handler is supposed to
  handle AFTER events, as in 'AFTER it's committed', not BEFORE.
  Offending resource event: security_group, after_create. Location:

  And then later:

  Sep 14 12:03:04.038599 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 1332, in 
update_port
  Sep 14 12:03:04.038761 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation 
context.session.expire(port_db)
  Sep 14 12:03:04.038924 ubuntu-xenial-2-node-rax-dfw-10932230 
neutron-server[30503]: ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line

[Yahoo-eng-team] [Bug 1718287] [NEW] systemd mount targets fail due to device busy or already mounted

2017-09-19 Thread Billy Olsen
Public bug reported:

[Issue]

After rebooting a 16.04 AWS instance (ami-1d4e7a66) with several
external disks attached, formatted, and added to /etc/fstab - systemd
mount targets fail to mount with:

● media-v.mount - /media/v
   Loaded: loaded (/etc/fstab; bad; vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2017-09-19 20:12:18 UTC; 1min 
54s ago
Where: /media/v
 What: /dev/xvdv
 Docs: man:fstab(5)
   man:systemd-fstab-generator(8)
  Process: 1196 ExecMount=/bin/mount /dev/xvdv /media/v -t ext4 -o defaults 
(code=exited, status=32)

Sep 19 20:12:17 ip-172-31-7-167 systemd[1]: Mounting /media/v...
Sep 19 20:12:17 ip-172-31-7-167 mount[1196]: mount: /dev/xvdv is already 
mounted or /media/v busy
Sep 19 20:12:18 ip-172-31-7-167 systemd[1]: media-v.mount: Mount process 
exited, code=exited status=32
Sep 19 20:12:18 ip-172-31-7-167 systemd[1]: Failed to mount /media/v.
Sep 19 20:12:18 ip-172-31-7-167 systemd[1]: media-v.mount: Unit entered failed 
state.


From the cloud-init logs, it appears that the OVF datasource is mounting
the device to find data:

2017-09-19 20:12:17,502 - util.py[DEBUG]: Peeking at /dev/xvdv (max_bytes=512)
2017-09-19 20:12:17,502 - util.py[DEBUG]: Reading from /proc/mounts 
(quiet=False)
2017-09-19 20:12:17,502 - util.py[DEBUG]: Read 2570 bytes from /proc/mounts
...
2017-09-19 20:12:17,506 - util.py[DEBUG]: Running command ['mount', '-o', 
'ro,sync', '-t', 'iso9660', '/dev/xvdv', '/tmp/tmpw2tyqqid'] with allowed 
return codes [0] (shell=False, capture=True)
2017-09-19 20:12:17,545 - util.py[DEBUG]: Failed mount of '/dev/xvdv' as 
'iso9660': Unexpected error while running command.
Command: ['mount', '-o', 'ro,sync', '-t', 'iso9660', '/dev/xvdv', 
'/tmp/tmpw2tyqqid']
Exit code: 32
Reason: -
Stdout: -
Stderr: mount: wrong fs type, bad option, bad superblock on /dev/xvdv,
   missing codepage or helper program, or other error

   In some cases useful info is found in syslog - try
   dmesg | tail or so.
2017-09-19 20:12:17,545 - util.py[DEBUG]: Recursively deleting /tmp/tmpw2tyqqid
2017-09-19 20:12:17,545 - DataSourceOVF.py[DEBUG]: /dev/xvdv not mountable as 
iso9660


[Vitals]

Version: 0.7.9-153-g16a7302f-0ubuntu1~16.04.2
OS: Ubuntu 16.04
Provider: AWS - ami-1d4e7a66

[Recreate]

To recreate this

1. Launch an AWS instance using AMI ami-1d4e7a66 and attach several
disks (I used 25 additional disks)

2. Format and mount all 25:
   mkdir /media/{b..z}
   for i in {b..z}; do 
   mkfs -t ext4 /dev/xvd$i
   mount /dev/xvd$i /media/$i
   echo "/dev/xvd$i /media/$i ext4 defaults,nofail 0 2" >> /etc/fstab
   done

3. reboot instance

Since this is a race, multiple attempts may be necessary. A reproducer script is
attached.
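
The attached script is not inlined here; a minimal sketch of the per-boot
check such a reproducer would run (unit selection via systemctl is assumed,
and exact flags may vary by systemd version) could look like:

  import subprocess
  import sys

  def failed_mount_units():
      """Return the names of systemd mount units that ended up failed."""
      out = subprocess.check_output(
          ['systemctl', '--failed', '--type=mount',
           '--no-legend', '--plain']).decode()
      return [line.split()[0] for line in out.splitlines() if line.strip()]

  if __name__ == '__main__':
      failed = failed_mount_units()
      if failed:
          print('hit the race, failed mounts: %s' % ', '.join(failed))
          sys.exit(1)
      sys.exit(0)  # rerun after each reboot until it exits non-zero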

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: sts

** Attachment added: "cloud-init.tar"
   
https://bugs.launchpad.net/bugs/1718287/+attachment/4953081/+files/cloud-init.tar

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1718287

Title:
  systemd mount targets fail due to device busy or already mounted

Status in cloud-init:
  New

Bug description:
  [Issue]

  After rebooting a 16.04 AWS instance (ami-1d4e7a66) with several
  external disks attached, formatted, and added to /etc/fstab - systemd
  mount targets fail to mount with:

  ● media-v.mount - /media/v
 Loaded: loaded (/etc/fstab; bad; vendor preset: enabled)
 Active: failed (Result: exit-code) since Tue 2017-09-19 20:12:18 UTC; 1min 
54s ago
  Where: /media/v
   What: /dev/xvdv
   Docs: man:fstab(5)
 man:systemd-fstab-generator(8)
Process: 1196 ExecMount=/bin/mount /dev/xvdv /media/v -t ext4 -o defaults 
(code=exited, status=32)

  Sep 19 20:12:17 ip-172-31-7-167 systemd[1]: Mounting /media/v...
  Sep 19 20:12:17 ip-172-31-7-167 mount[1196]: mount: /dev/xvdv is already 
mounted or /media/v busy
  Sep 19 20:12:18 ip-172-31-7-167 systemd[1]: media-v.mount: Mount process 
exited, code=exited status=32
  Sep 19 20:12:18 ip-172-31-7-167 systemd[1]: Failed to mount /media/v.
  Sep 19 20:12:18 ip-172-31-7-167 systemd[1]: media-v.mount: Unit entered 
failed state.

  
  From the cloud-init logs, it appears that the OVF datasource is mounting
the device to find data:

  2017-09-19 20:12:17,502 - util.py[DEBUG]: Peeking at /dev/xvdv (max_bytes=512)
  2017-09-19 20:12:17,502 - util.py[DEBUG]: Reading from /proc/mounts 
(quiet=False)
  2017-09-19 20:12:17,502 - util.py[DEBUG]: Read 2570 bytes from /proc/mounts
  ...
  2017-09-19 20:12:17,506 - util.py[DEBUG]: Running command ['mount', '-o', 
'ro,sync', '-t', 'iso9660', '/dev/xvdv', '/tmp/tmpw2tyqqid'] with allowed 
return codes [0] (shell=False, capture=True)
  2017-09-19 20:12:17,545 - util.py[DEBUG]: Failed mount of '/dev/xvdv' as 
'iso9660': Unexpected error while running command.
  Command: [

[Yahoo-eng-team] [Bug 1718295] [NEW] Unexpected exception in API method: MigrationError_Remote: Migration error: Disk info file is invalid: qemu-img failed to execute - Failed to get shared "write" lo

2017-09-19 Thread Matt Riedemann
Public bug reported:

Looks like this is pretty new:

http://logs.openstack.org/01/503601/7/check/gate-tempest-dsvm-multinode-
live-migration-ubuntu-
xenial/b19b77c/logs/screen-n-api.txt.gz?level=TRACE#_Sep_19_17_47_11_508623

Sep 19 17:47:11.508623 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: ERROR nova.api.openstack.extensions [None 
req-e31fde7b-317f-4db9-b225-10b6e11b2dff tempest-LiveMigrationTest-1678596498 
tempest-LiveMigrationTest-1678596498] Unexpected exception in API method: 
MigrationError_Remote: Migration error: Disk info file is invalid: qemu-img 
failed to execute on 
/opt/stack/data/nova/instances/806812af-bf9e-4dc1-8e0c-11603ccd9f62/disk : 
Unexpected error while running command.
Sep 19 17:47:11.508805 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: Command: /usr/bin/python2 -m 
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C 
qemu-img info 
/opt/stack/data/nova/instances/806812af-bf9e-4dc1-8e0c-11603ccd9f62/disk
Sep 19 17:47:11.508946 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: Exit code: 1
Sep 19 17:47:11.509079 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: Stdout: u''
Sep 19 17:47:11.509233 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: Stderr: u'qemu-img: Could not open 
\'/opt/stack/data/nova/instances/806812af-bf9e-4dc1-8e0c-11603ccd9f62/disk\': 
Failed to get shared "write" lock\nIs another process using the image?\n'
Sep 19 17:47:11.509487 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: Traceback (most recent call last):
Sep 19 17:47:11.509649 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
160, in _process_incoming
Sep 19 17:47:11.509789 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: res = self.dispatcher.dispatch(message)
Sep 19 17:47:11.510231 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
222, in dispatch
Sep 19 17:47:11.510418 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: return self._do_dispatch(endpoint, method, 
ctxt, args)
Sep 19 17:47:11.510555 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
192, in _do_dispatch
Sep 19 17:47:11.510687 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: result = func(ctxt, **new_args)
Sep 19 17:47:11.510829 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 76, in wrapped
Sep 19 17:47:11.510959 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: function_name, call_dict, binary)
Sep 19 17:47:11.511194 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
Sep 19 17:47:11.511713 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: self.force_reraise()
Sep 19 17:47:11.511852 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
Sep 19 17:47:11.512037 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: six.reraise(self.type_, self.value, self.tb)
Sep 19 17:47:11.512687 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 67, in wrapped
Sep 19 17:47:11.516811 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: return f(self, context, *args, **kw)
Sep 19 17:47:11.516966 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/opt/stack/new/nova/nova/compute/utils.py", line 876, in decorated_function
Sep 19 17:47:11.517110 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: return function(self, context, *args, 
**kwargs)
Sep 19 17:47:11.525392 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 217, in decorated_function
Sep 19 17:47:11.525526 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: kwargs['instance'], e, sys.exc_info())
Sep 19 17:47:11.525654 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
Sep 19 17:47:11.525783 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: self.force_reraise()
Sep 19 17:47:11.525909 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/usr/local/lib/python2.7/dist-packages/

[Yahoo-eng-team] [Bug 1718306] [NEW] RuntimeError: resource_cleanup for VPNaaSTestJSON did not call the super's resource_cleanup

2017-09-19 Thread YAMAMOTO Takashi
Public bug reported:

e.g. http://logs.openstack.org/34/505234/1/check/gate-tempest-dsvm-
networking-midonet-v2-full-ubuntu-xenial-
nv/c32fd6e/logs/testr_results.html.gz

ft1.1: tearDownClass 
(neutron_vpnaas.tests.tempest.api.test_vpnaas.VPNaaSTestJSON)_StringException: 
Traceback (most recent call last):
  File "tempest/test.py", line 223, in tearDownClass
six.reraise(etype, value, trace)
  File "tempest/test.py", line 200, in tearDownClass
"super's resource_cleanup" % cls.__name__)
RuntimeError: resource_cleanup for VPNaaSTestJSON did not call the super's 
resource_cleanup
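
For reference, a stripped-down sketch of the contract tempest enforces here
(simplified bookkeeping; tempest's real implementation differs): any class
overriding resource_cleanup() must chain up so the base class can record
that its cleanup actually ran.

  class Base(object):
      _cleanup_called = False

      @classmethod
      def resource_cleanup(cls):
          cls._cleanup_called = True   # tempest's base class records this

  class Broken(Base):
      @classmethod
      def resource_cleanup(cls):
          pass   # never reaches super() -> tempest raises RuntimeError

  class Fixed(Base):
      @classmethod
      def resource_cleanup(cls):
          # project-specific teardown would go here ...
          super(Fixed, cls).resource_cleanup()   # the chained call at issue

  Broken.resource_cleanup()
  assert not Broken._cleanup_called
  Fixed.resource_cleanup()
  assert Fixed._cleanup_called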

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: tempest vpnaas

** Tags added: tempest vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718306

Title:
  RuntimeError: resource_cleanup for VPNaaSTestJSON did not call the
  super's resource_cleanup

Status in neutron:
  New

Bug description:
  e.g. http://logs.openstack.org/34/505234/1/check/gate-tempest-dsvm-
  networking-midonet-v2-full-ubuntu-xenial-
  nv/c32fd6e/logs/testr_results.html.gz

  ft1.1: tearDownClass 
(neutron_vpnaas.tests.tempest.api.test_vpnaas.VPNaaSTestJSON)_StringException: 
Traceback (most recent call last):
File "tempest/test.py", line 223, in tearDownClass
  six.reraise(etype, value, trace)
File "tempest/test.py", line 200, in tearDownClass
  "super's resource_cleanup" % cls.__name__)
  RuntimeError: resource_cleanup for VPNaaSTestJSON did not call the super's 
resource_cleanup

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1718306/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718320] [NEW] Do not defer allocation if fixed-ips is in the port create request.

2017-09-19 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/499954
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 630b2d1a5e8b8c6926a590eb87fedde12624c3da
Author: Harald Jensas 
Date:   Sun Jun 4 23:41:19 2017 +0200

Do not defer allocation if fixed-ips is in the port create request.

Fix a usage regression, use case #2 in Nova Neutron Routed Networks spec

https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/neutron-routed-networks.html

Currently ip allocation is always deferred if binding:host_id is not
specified when routed networks are used. This causes initial fixed-ips
data provided by the user to be _lost_ unless the user also specify the
host.

Since the user specified the IP or Subnet to allocate in, there is no
reason to defer the allocation.

a) It is a common pattern, especially in Heat templates to:
 1. Create a port with fixed-ips specifying a subnet.
 2. Create a server and associate the existing port.
b) It is also common to use Neutron IPAM as a source to get VIP
   addresses for clusters on provider networks.

This change enables these use cases with routed networks.

DocImpact: "The Networking service defers assignment of IP addresses to
the port until the particular compute node becomes apparent." This is no
longer true if fixed-ips is used in the port create request.

Change-Id: I86d4aafa1f8cd425cb1eeecfeaf95226d6d248b4
Closes-Bug: #1695740
(cherry picked from commit 33e48cf84db0f6b51a0f981499bbd341860b6c1c)
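
Use case (a) from the commit message, sketched with openstacksdk (the cloud
name and the network/subnet/image/flavor IDs are placeholders, and the SDK
calls are assumed to follow their documented signatures):

  import openstack

  conn = openstack.connect(cloud='mycloud')   # assumed clouds.yaml entry

  # 1. Create a port with fixed-ips specifying a subnet (no binding:host_id).
  port = conn.network.create_port(
      network_id='NETWORK_UUID',
      fixed_ips=[{'subnet_id': 'SUBNET_UUID'}])

  # 2. Boot a server with the pre-created port; with this change the
  #    requested allocation is honoured instead of being deferred and lost.
  server = conn.compute.create_server(
      name='vm-with-preallocated-ip',
      image_id='IMAGE_UUID',
      flavor_id='FLAVOR_UUID',
      networks=[{'port': port.id}])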

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718320

Title:
  Do not defer allocation if fixed-ips is in the port create
  request.

Status in neutron:
  New

Bug description:
  https://review.openstack.org/499954
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 630b2d1a5e8b8c6926a590eb87fedde12624c3da
  Author: Harald Jensas 
  Date:   Sun Jun 4 23:41:19 2017 +0200

  Do not defer allocation if fixed-ips is in the port create request.
  
  Fix a usage regression, use case #2 in Nova Neutron Routed Networks spec
  
https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/neutron-routed-networks.html
  
  Currently ip allocation is always deferred if binding:host_id is not
  specified when routed networks are used. This causes initial fixed-ips
  data provided by the user to be _lost_ unless the user also specify the
  host.
  
  Since the user specified the IP or Subnet to allocate in, there is no
  reason to defer the allocation.
  
  a) It is a common pattern, especially in Heat templates to:
   1. Create a port with fixed-ips specifying a subnet.
   2. Create a server and associate the existing port.
  b) It is also common to use Neutron IPAM as a source to get VIP
 addresses for clusters on provider networks.
  
  This change enables these use cases with routed networks.
  
  DocImpact: "The Networking service defers assignment of IP addresses to
  the port until the particular compute node becomes apparent." This is no
  longer true if fixed-ips is used in the port create request.
  
  Change-Id: I86d4aafa1f8cd425cb1eeecfeaf95226d6d248b4
  Closes-Bug: #1695740
  (cherry picked from commit 33e48cf84db0f6b51a0f981499bbd341860b6c1c)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1718320/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718295] Re: Unexpected exception in API method: MigrationError_Remote: Migration error: Disk info file is invalid: qemu-img failed to execute - Failed to get shared "write" lock

2017-09-19 Thread Matt Riedemann
It's probably this change to devstack to use the pike cloud archive:

https://review.openstack.org/#/c/501224/
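
QEMU 2.10 (which the Pike UCA appears to pull in) added image locking, so a
plain 'qemu-img info' fails while the running guest holds the write lock. A
minimal sketch of a probe that does not trip the lock (not necessarily how
nova or devstack will end up addressing it):

  import json
  import subprocess

  def qemu_img_info(path):
      """Query image metadata without contending for the guest's lock.

      --force-share (QEMU >= 2.10) tells qemu-img not to take the
      exclusive lock, so 'info' works on a disk that is in use.
      """
      out = subprocess.check_output(
          ['qemu-img', 'info', '--force-share', '--output=json', path])
      return json.loads(out.decode())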

** Also affects: devstack
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1718295

Title:
  Unexpected exception in API method: MigrationError_Remote: Migration
  error: Disk info file is invalid: qemu-img failed to execute - Failed
  to get shared "write" lock\nIs another process using the image?

Status in devstack:
  In Progress
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Looks like this is pretty new:

  http://logs.openstack.org/01/503601/7/check/gate-tempest-dsvm-
  multinode-live-migration-ubuntu-
  xenial/b19b77c/logs/screen-n-api.txt.gz?level=TRACE#_Sep_19_17_47_11_508623

  Sep 19 17:47:11.508623 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: ERROR nova.api.openstack.extensions [None 
req-e31fde7b-317f-4db9-b225-10b6e11b2dff tempest-LiveMigrationTest-1678596498 
tempest-LiveMigrationTest-1678596498] Unexpected exception in API method: 
MigrationError_Remote: Migration error: Disk info file is invalid: qemu-img 
failed to execute on 
/opt/stack/data/nova/instances/806812af-bf9e-4dc1-8e0c-11603ccd9f62/disk : 
Unexpected error while running command.
  Sep 19 17:47:11.508805 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: Command: /usr/bin/python2 -m 
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C 
qemu-img info 
/opt/stack/data/nova/instances/806812af-bf9e-4dc1-8e0c-11603ccd9f62/disk
  Sep 19 17:47:11.508946 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: Exit code: 1
  Sep 19 17:47:11.509079 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: Stdout: u''
  Sep 19 17:47:11.509233 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: Stderr: u'qemu-img: Could not open 
\'/opt/stack/data/nova/instances/806812af-bf9e-4dc1-8e0c-11603ccd9f62/disk\': 
Failed to get shared "write" lock\nIs another process using the image?\n'
  Sep 19 17:47:11.509487 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: Traceback (most recent call last):
  Sep 19 17:47:11.509649 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
160, in _process_incoming
  Sep 19 17:47:11.509789 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: res = self.dispatcher.dispatch(message)
  Sep 19 17:47:11.510231 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
222, in dispatch
  Sep 19 17:47:11.510418 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: return self._do_dispatch(endpoint, method, 
ctxt, args)
  Sep 19 17:47:11.510555 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
192, in _do_dispatch
  Sep 19 17:47:11.510687 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: result = func(ctxt, **new_args)
  Sep 19 17:47:11.510829 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 76, in wrapped
  Sep 19 17:47:11.510959 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: function_name, call_dict, binary)
  Sep 19 17:47:11.511194 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  Sep 19 17:47:11.511713 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: self.force_reraise()
  Sep 19 17:47:11.511852 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  Sep 19 17:47:11.512037 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: six.reraise(self.type_, self.value, self.tb)
  Sep 19 17:47:11.512687 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 67, in wrapped
  Sep 19 17:47:11.516811 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: return f(self, context, *args, **kw)
  Sep 19 17:47:11.516966 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]:   File 
"/opt/stack/new/nova/nova/compute/utils.py", line 876, in decorated_function
  Sep 19 17:47:11.517110 ubuntu-xenial-2-node-rax-ord-10997038 
devstack@n-api.service[28339]: return function(self, context, *args, 
**kwarg

[Yahoo-eng-team] [Bug 1718325] [NEW] {'cell_id': 1} is an invalid UUID in functional test

2017-09-19 Thread jichenjc
Public bug reported:

The functional tests emit the following warning; it turns out the
assumption made in the mocks is wrong:

/home/jichen/git/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_versionedobjects/fields.py:365:
 FutureWarning: attr instance_uuid {'cell_id': 1} is an invalid UUID. Using 
UUIDFields with invalid UUIDs is no longer supported, and will be removed in a 
future release. Please update your code to input valid UUIDs or accept 
ValueErrors for invalid UUIDs. See 
https://docs.openstack.org/oslo.versionedobjects/latest/reference/fields.html#oslo_versionedobjects.fields.UUIDField
 for further details
  "for further details" % (attr, value), FutureWarning)


> /home/jichen/git/nova/nova/tests/fixtures.py(318)_fake_instancemapping_get()
-> uuid = args[-1]
(Pdb) p args
(, 
'fae9049d-4633-4148-9917-bb680dd2d413', {'cell_id': 1})
(Pdb) l
313 'cell_mapping': self._fake_cell_list()[0]}
314
315 def _fake_instancemapping_get(self, *args):
316 import pdb
317 pdb.set_trace()
318  -> uuid = args[-1]
319 cell_id = None
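
One possible shape of the fixture fix (illustrative only; the helper name is
made up, and the argument layout mirrors the pdb output above, where the
last positional argument can be {'cell_id': 1} rather than the instance
uuid):

  import uuid

  def _pick_instance_uuid(args):
      """Return the first positional argument that parses as a UUID,
      instead of blindly taking args[-1]."""
      for arg in args:
          try:
              uuid.UUID(str(arg))
              return arg
          except ValueError:
              continue
      raise ValueError('no UUID-like argument in %r' % (args,))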

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1718325

Title:
  {'cell_id': 1} is an invalid UUID in functional test

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The functional tests emit the following warning; it turns out the
  assumption made in the mocks is wrong:

  
/home/jichen/git/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_versionedobjects/fields.py:365:
 FutureWarning: attr instance_uuid {'cell_id': 1} is an invalid UUID. Using 
UUIDFields with invalid UUIDs is no longer supported, and will be removed in a 
future release. Please update your code to input valid UUIDs or accept 
ValueErrors for invalid UUIDs. See 
https://docs.openstack.org/oslo.versionedobjects/latest/reference/fields.html#oslo_versionedobjects.fields.UUIDField
 for further details
"for further details" % (attr, value), FutureWarning)


  > /home/jichen/git/nova/nova/tests/fixtures.py(318)_fake_instancemapping_get()
  -> uuid = args[-1]
  (Pdb) p args
  (, 
'fae9049d-4633-4148-9917-bb680dd2d413', {'cell_id': 1})
  (Pdb) l
  313 'cell_mapping': self._fake_cell_list()[0]}
  314
  315 def _fake_instancemapping_get(self, *args):
  316 import pdb
  317 pdb.set_trace()
  318  -> uuid = args[-1]
  319 cell_id = None

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1718325/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718335] [NEW] remove 400 as expected error

2017-09-19 Thread jichenjc
Public bug reported:

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/server_metadata.py#L59
indicates 400, 403, 404 and 409 as expected errors, but 400 cannot
actually be raised there.

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1718335

Title:
  remove 400 as expected error

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/server_metadata.py#L59
  indicates 400, 403, 404 and 409 as expected errors, but 400 cannot
  actually be raised there.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1718335/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718345] [NEW] ml2_distributed_port_bindings not cleared after migration from DVR

2017-09-19 Thread venkata anil
Public bug reported:

When a router is migrated from DVR to HA/legacy, the router interface
bindings in the ml2_distributed_port_bindings table are not cleared and
still exist after the migration.

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718345

Title:
  ml2_distributed_port_bindings not cleared after migration from DVR

Status in neutron:
  New

Bug description:
  When a router is migrated from DVR to HA/legacy, the router interface
  bindings in the ml2_distributed_port_bindings table are not cleared and
  still exist after the migration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1718345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp