[Yahoo-eng-team] [Bug 1723472] Re: [DVR] Lowering the MTU breaks FIP traffic

2017-10-18 Thread Daniel Alvarez
We have seen that the MAC address of the FIP changes to the qf interface of a 
different controller.
However, the environment was running openstack-neutron-11.0.0-1.el7.noarch.

After upgrading to openstack-neutron-11.0.1-1.el7.noarch, this bug no longer 
occurs.
Marking it as invalid.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1723472

Title:
  [DVR] Lowering the MTU breaks FIP traffic

Status in neutron:
  Invalid

Bug description:
  In a DVR environment, when lowering the MTU of a network, traffic
  going to an instance through a floating IP is broken.

  Description:

  * Ping traffic to a VM through its FIP works.
  * Change the MTU of its network through "neutron net-update  --mtu 
1440".
  * Ping to the same FIP doesn't work.

  After a long debugging session with Anil Venkata, we've found that
  packets reach br-ex and then they hit this OF rule with normal action:

   cookie=0x1f847e4bf0de0aea, duration=70306.532s, table=3,
  n_packets=1579251, n_bytes=796614220, idle_age=0, hard_age=65534,
  priority=1 actions=NORMAL

  
  We would expect this rule to switch the packet to br-int so that it can be
forwarded to the fip namespace (i.e. with the dst MAC address set to that of the
floating IP gateway port, owner=network:floatingip_agent_gateway):

  $ sudo ovs-vsctl list interface

  _uuid   : 1f2b6e86-d303-42f4-9467-5dab78fc7199
  admin_state : down
  bfd : {}
  bfd_status  : {}
  cfm_fault   : []
  cfm_fault_status: []
  cfm_flap_count  : []
  cfm_health  : []
  cfm_mpid: []
  cfm_remote_mpids: []
  cfm_remote_opstate  : []
  duplex  : []
  error   : []
  external_ids: {attached-mac="fa:16:3e:9d:0c:4f", 
iface-id="8ec34826-b1a6-48ce-9c39-2fd3e8167eb4", iface-status=active}
  name: "fg-8ec34826-b1"


  [heat-admin@overcloud-novacompute-0 ~]$ sudo ovs-appctl fdb/show br-ex


   port  VLAN  MAC                Age
   [...]
      7    10  fa:16:3e:9d:0c:4f    0

  
  $ sudo ovs-ofctl show br-ex | grep "7("
   7(phy-br-ex): addr:36:63:93:fc:af:e2

  
  And from there, to the fip namespace which would route the packet to the 
qrouter namespace, etc.
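
  As a quick check of which MAC the floating-ip agent gateway port actually
  owns, a hedged sketch using openstacksdk (the cloud name 'mycloud' is an
  assumption, not from this report):

  import openstack

  conn = openstack.connect(cloud='mycloud')  # assumed clouds.yaml entry
  # list the fg- ports (floating IP agent gateways) and their MAC addresses
  for port in conn.network.ports(device_owner='network:floatingip_agent_gateway'):
      print(port.id, port.mac_address)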

  However, when we change the MTU through the following command:

  "neutron net-update  --mtu 1440"

  We see that, after a few seconds, the MAC address of the FIP changes,
  so when traffic arrives at br-ex and the NORMAL action is performed, it is
  not output to br-int through the patch port but instead through eth1, and
  traffic no longer works.

  [heat-admin@overcloud-novacompute-0 ~]$ arp -n | grep ".113"
  10.0.0.113               ether   fa:16:3e:9d:0c:4f   C                     vlan10

  neutron port-set x --mtu 1440

  $ arp -n | grep ".113"
  10.0.0.113               ether   fa:16:3e:20:f9:85   C                     vlan10

  
  When setting the MAC address manually, ping starts working again:

  $ arp -s 10.0.0.113 fa:16:3e:9d:0c:4f
  $ ping 10.0.0.113
  PING 10.0.0.113 (10.0.0.113) 56(84) bytes of data.
  64 bytes from 10.0.0.113: icmp_seq=1 ttl=62 time=1.17 ms
  64 bytes from 10.0.0.113: icmp_seq=2 ttl=62 time=0.561 ms

  
  Additional notes:

  When we set the MAC address manually and traffic starts working again,
  lowering the MTU doesn't change the MAC address (we can't see any
  gratuitous ARPs coming through).

  When we delete the ARP entry for the FIP and try to ping the FIP, the
  wrong MAC address is set.

  [heat-admin@overcloud-novacompute-0 ~]$ sudo arp -d 10.0.0.113

  [heat-admin@overcloud-novacompute-0 ~]$ ping 10.0.0.113 -c 2
  PING 10.0.0.113 (10.0.0.113) 56(84) bytes of data.

  --- 10.0.0.113 ping statistics ---
  2 packets transmitted, 0 received, 100% packet loss, time 999ms

  [heat-admin@overcloud-novacompute-0 ~]$ arp -n | grep ".113"
  10.0.0.113               ether   fa:16:3e:20:f9:85   C                     vlan10

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1723472/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1702635] Re: SR-IOV: sometimes a port may hang in BUILD state

2017-10-18 Thread Ilya Bumarskov
Can't reproduce the bug on a test environment due to lack of appropriate HW. In
accordance with our policy, the fix should be verified on the customer side.
The fix is present in snapshots/9.0-2017-10-16-142324.

** Changed in: mos
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1702635

Title:
  SR-IOV: sometimes a port may hang in BUILD state

Status in Mirantis OpenStack:
  Fix Released
Status in neutron:
  In Progress

Bug description:
  Scenario:

  1) vfio-pci driver is used for VFs
  2) 2 ports are created in neutron with binding type 'direct'
  3) VMs are spawned and deleted on 2 compute nodes using pre-created ports
  4) one neutron port may be bound to different compute nodes at different
     moments
  5) for some reason (probably a bug, but this bug is not about it)
     vfio-pci does not properly handle the VF reset after VM deletion, so to
     the sriov agent it looks like some port's MAC is still mapped to some PCI
     slot even though the port is not bound to the node
  6) sriov agent requests port info from server with
     get_devices_details_list() but doesn't specify 'host' in parameters
  7) in this case neutron server sets this port to BUILD, though it may be
     bound to another host:

  def _get_new_status(self, host, port_context):
      port = port_context.current
      if not host or host == port_context.host:
          new_status = (n_const.PORT_STATUS_BUILD if port['admin_state_up']
                        else n_const.PORT_STATUS_DOWN)
          if port['status'] != new_status:
              return new_status

  8) after processing, the agent notifies the server with update_device_list()
and this time specifies the 'host' parameter
  9) the server detects the mismatch between the port's and the agent's host
and doesn't update the status of the port
  10) the port stays in BUILD state

  A simple fix would be to specify the host at step 6 - in this case the neutron
  server won't set the port's status to BUILD because of the host mismatch.
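
  A minimal, self-contained sketch (not neutron code; the names are made up)
  of why passing the host avoids the stale BUILD state described in step 7:

  PORT_STATUS_BUILD = 'BUILD'
  PORT_STATUS_DOWN = 'DOWN'

  def new_status(requesting_host, port_bound_host, admin_state_up):
      # Only when the requesting host matches (or is omitted) does the server
      # move the port to BUILD/DOWN; a mismatching host leaves it untouched.
      if not requesting_host or requesting_host == port_bound_host:
          return PORT_STATUS_BUILD if admin_state_up else PORT_STATUS_DOWN
      return None

  # agent on 'node-1' asks about a port actually bound to 'node-2'
  assert new_status(None, 'node-2', True) == PORT_STATUS_BUILD   # today: stale BUILD
  assert new_status('node-1', 'node-2', True) is None            # with host passed: no change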

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1702635/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1724514] [NEW] privsep helper command exited non-zero in neutron-fwaas

2017-10-18 Thread Cao Xuan Hoang
Public bug reported:

eg. http://logs.openstack.org/26/512926/2/check/openstack-tox-
py27/3e7d6f7/testr_results.html.gz

ft1.9: 
neutron_fwaas.tests.unit.services.firewall.agents.l3reference.test_firewall_l3_agent.TestFwaasL3AgentRpcCallback.test_get_router_info_list_two_routers_one_without_router_info_StringException:
 Traceback (most recent call last):
  File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/tests/base.py", 
line 118, in func
return f(self, *args, **kwargs)
  File 
"neutron_fwaas/tests/unit/services/firewall/agents/l3reference/test_firewall_l3_agent.py",
 line 360, in test_get_router_info_list_two_routers_one_without_router_info
rtr_with_ri=True)
  File 
"neutron_fwaas/tests/unit/services/firewall/agents/l3reference/test_firewall_l3_agent.py",
 line 351, in _get_router_info_list_router_without_router_info_helper
ri.router['tenant_id'])
  File 
"neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent.py", line 
137, in _get_router_info_list_for_tenant
self.agent_api.is_router_in_namespace(ri.router_id)]
  File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/agent/l3/l3_agent_extension_api.py",
 line 64, in is_router_in_namespace
local_namespaces = self._local_namespaces()
  File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/agent/l3/l3_agent_extension_api.py",
 line 33, in _local_namespaces
local_ns_list = ip_lib.list_network_namespaces()
  File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/agent/linux/ip_lib.py",
 line 1038, in list_network_namespaces
return privileged.list_netns(**kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/neutron-fwaas/.tox/py27/local/lib/python2.7/site-packages/oslo_privsep/priv_context.py",
 line 204, in _wrap
self.start()
  File 
"/home/zuul/src/git.openstack.org/openstack/neutron-fwaas/.tox/py27/local/lib/python2.7/site-packages/oslo_privsep/priv_context.py",
 line 215, in start
channel = daemon.RootwrapClientChannel(context=self)
  File 
"/home/zuul/src/git.openstack.org/openstack/neutron-fwaas/.tox/py27/local/lib/python2.7/site-packages/oslo_privsep/daemon.py",
 line 327, in __init__
raise FailedToDropPrivileges(msg)
oslo_privsep.daemon.FailedToDropPrivileges: privsep helper command exited 
non-zero (1)
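
A minimal sketch, assuming the goal is simply to keep such a unit test from
spawning a real privsep helper: patch the namespace listing that
l3_agent_extension_api ends up calling, so oslo.privsep is never started
(the namespace name below is arbitrary):

from unittest import mock

with mock.patch('neutron.agent.linux.ip_lib.list_network_namespaces',
                return_value=['qrouter-0000']):
    # any code path reaching L3AgentExtensionAPI.is_router_in_namespace()
    # now sees the mocked namespace list instead of invoking privsep
    pass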

** Affects: neutron
 Importance: Undecided
 Assignee: Cao Xuan Hoang (hoangcx)
 Status: In Progress


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1724514

Title:
  privsep helper command exited non-zero in neutron-fwaas

Status in neutron:
  In Progress

Bug description:
  eg. http://logs.openstack.org/26/512926/2/check/openstack-tox-
  py27/3e7d6f7/testr_results.html.gz

  ft1.9: 
neutron_fwaas.tests.unit.services.firewall.agents.l3reference.test_firewall_l3_agent.TestFwaasL3AgentRpcCallback.test_get_router_info_list_two_routers_one_without_router_info_StringException:
 Traceback (most recent call last):
File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/tests/base.py", 
line 118, in func
  return f(self, *args, **kwargs)
File 
"neutron_fwaas/tests/unit/services/firewall/agents/l3reference/test_firewall_l3_agent.py",
 line 360, in test_get_router_info_list_two_routers_one_without_router_info
  rtr_with_ri=True)
File 
"neutron_fwaas/tests/unit/services/firewall/agents/l3reference/test_firewall_l3_agent.py",
 line 351, in _get_router_info_list_router_without_router_info_helper
  ri.router['tenant_id'])
File 
"neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent.py", line 
137, in _get_router_info_list_for_tenant
  self.agent_api.is_router_in_namespace(ri.router_id)]
File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/agent/l3/l3_agent_extension_api.py",
 line 64, in is_router_in_namespace
  local_namespaces = self._local_namespaces()
File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/agent/l3/l3_agent_extension_api.py",
 line 33, in _local_namespaces
  local_ns_list = ip_lib.list_network_namespaces()
File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/agent/linux/ip_lib.py",
 line 1038, in list_network_namespaces
  return privileged.list_netns(**kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/neutron-fwaas/.tox/py27/local/lib/python2.7/site-packages/oslo_privsep/priv_context.py",
 line 204, in _wrap
  self.start()
File 
"/home/zuul/src/git.openstack.org/openstack/neutron-fwaas/.tox/py27/local/lib/python2.7/site-packages/oslo_privsep/priv_context.py",
 line 215, in start
  channel = daemon.RootwrapClientChannel(context=self)
File 
"/home/zuul/src/git.openstack.org/openstack/neutron-fwaas/.tox/py27/local/lib/python2.7/site-packages/oslo_privsep/daemon.py",
 line 327, in __init__
  raise FailedToDropPrivileges(msg)
  oslo_privsep.daemon.Failed

[Yahoo-eng-team] [Bug 1724520] [NEW] nova-consoleauth failed after restart jujud-machine-0

2017-10-18 Thread Miguel Meneses
Public bug reported:

Hi,

In an HA environment, the three nova-consoleauth services failed after
restarting the juju agent jujud-machine-0.

There is no useful information in /var/log/nova/nova-consoleauth.log:
no ERRORs and no failures.

I checked the status of the nova-consoleauth services; the output was as follows:
$ sudo systemctl status nova-consoleauth.service 
http://pastebin.ubuntu.com/25764904/

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1724520

Title:
  nova-consoleauth failed after restart jujud-machine-0

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi,

  In an HA environment, the three nova-consoleauth services failed after
  restarting the juju agent jujud-machine-0.

  There is no useful information in /var/log/nova/nova-consoleauth.log:
  no ERRORs and no failures.

  I checked the status of the nova-consoleauth services; the output was as follows:
  $ sudo systemctl status nova-consoleauth.service 
http://pastebin.ubuntu.com/25764904/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1724520/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1724524] [NEW] Ironic nodes report CUSTOM_CUSTOM_FOO resource class

2017-10-18 Thread John Garbutt
Public bug reported:

Currently if you set CUSTOM_FOO in ironic, the virt driver now sends
CUSTOM_CUSTOM_FOO to placement.

Really we shouldn't force users to drop the CUSTOM_ inside ironic.

Expected:
User sets CUSTOM_FOO in ironic.
Placement shows CUSTOM_FOO resources.

Actual:
Placement shows CUSTOM_CUSTOM_FOO resources
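
A hedged sketch of the kind of normalization the expected behaviour implies
(this is not the actual nova patch; the function name is made up):

def normalize_resource_class(name):
    # only add the CUSTOM_ prefix when it is not already present
    name = name.upper().replace('-', '_')
    if not name.startswith('CUSTOM_'):
        name = 'CUSTOM_' + name
    return name

assert normalize_resource_class('CUSTOM_FOO') == 'CUSTOM_FOO'
assert normalize_resource_class('baremetal-gold') == 'CUSTOM_BAREMETAL_GOLD'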

** Affects: nova
 Importance: High
 Assignee: John Garbutt (johngarbutt)
 Status: In Progress


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1724524

Title:
  Ironic nodes report CUSTOM_CUSTOM_FOO resource class

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Currently if you set CUSTOM_FOO in ironic, the virt driver now sends
  CUSTOM_CUSTOM_FOO to placement.

  Really we shouldn't force users to drop the CUSTOM_ inside ironic.

  Expected:
  User sets CUSTOM_FOO in ironic.
  Placement shows CUSTOM_FOO resources.

  Actual:
  Placement shows CUSTOM_CUSTOM_FOO resources

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1724524/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1722403] Re: api-ref: valid server status for filtering is wrong in docs

2017-10-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/510696
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=e315fcbec9d385c12e21168795afd7f748083fc6
Submitter: Zuul
Branch:master

commit e315fcbec9d385c12e21168795afd7f748083fc6
Author: Matt Riedemann 
Date:   Mon Oct 9 17:05:24 2017 -0400

api-ref: fix server status values in GET /servers docs

The server status values exposed out of the API and used
for filtering when listing instances come from the values
in nova.api.openstack.common._STATE_MAP. Some of the values
listed in the docs were incorrectly using variable names from
the code, which don't necessarily match the actual value exposed
out of the API.

The compute API server concepts guide actually had this all
correct, so this just updates the API reference.

Change-Id: I30b6f27c6e7fc9365c203b620b311785f8b4b489
Closes-Bug: #1722403


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1722403

Title:
  api-ref: valid server status for filtering is wrong in docs

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The API reference lists the valid states for filtering instances by
  status:

  https://developer.openstack.org/api-ref/compute/#list-servers

  However, several of the ones listed are wrong, like BUILDING and
  STOPPED. The actual list is generated from the map values in this
  code:

  
https://github.com/openstack/nova/blob/bedb33ef04bf5710657fc46bceb68817dcbf83eb/nova/api/openstack/common.py#L43

  stack@devstack:~$ nova list --status BUILDING
  ERROR (BadRequest): Invalid status value (HTTP 400) (Request-ID: 
req-cf8c4f8e-0854-4b61-a900-5fb43c993af9)
  stack@devstack:~$ nova list --status STOPPED
  ERROR (BadRequest): Invalid status value (HTTP 400) (Request-ID: 
req-b7c72e91-bc9c-49c4-bb7b-d7d97fa8f91c)
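
  A hedged sketch (it assumes a nova source tree on the import path and that
  _STATE_MAP keeps its nested-dict shape) of printing the status values the
  API really accepts for filtering:

  from nova.api.openstack import common

  valid = sorted({status
                  for states in common._STATE_MAP.values()
                  for status in states.values()})
  print(valid)  # ACTIVE, BUILD, ERROR, ... but no BUILDING or STOPPED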

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1722403/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715463] Re: binary/name gets confused under upgrades of osapi_compute and metadata when using wsgi files

2017-10-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/501359
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=0b4a021e4224981e83dca67e7519458f9939f3cd
Submitter: Zuul
Branch:master

commit 0b4a021e4224981e83dca67e7519458f9939f3cd
Author: Erik Berg 
Date:   Wed Sep 6 18:38:29 2017 +

Fix binary name

Before an upgrade, we have these type of entries in the db.

MariaDB [nova]> SELECT id, host, `binary`, deleted, version FROM services;
++--++-+-+
| id | host | binary | deleted | version |
++--++-+-+
|  5 | r1-n-os-api  | nova-osapi_compute | 0   |  16 |
| 21 | r1-n-m-api   | nova-metadata  | 0   |  16 |

The wsgi files we run basically boil down to something like

  NAME=metadata
  return wsgi_app.init_application(NAME)

In the wsgi_app.py we see this function

  service_ref = objects.Service.get_by_host_and_binary(ctxt, host, name)

Which results in a really big query, which again comes down to

  SELECT host, `binary` FROM services
WHERE host = 'r1-n-m-api' AND `binary` = 'metadata'

No results. service_ref is set to None. Carry on.

  if service_ref:
#Nope.
  else:
try:
  ...
  service_obj.host = host
  service_obj.binary = 'nova-%s' % name
  service_obj.create()

Which results in an INSERT statement, something like this:

  INSERT INTO services(host, `binary`, report_count, disabled, deleted, 
version)
VALUES ('r1-n-m-api', 'nova-metadata', 0, 0, 0, 22)

  ERROR 1062 (23000): Duplicate entry 'r1-n-m-api-nova-metadata-0' for key 
'uniq_services0host0binary0deleted'

So the first suggested fix is to prepend 'nova-' to the name, and make both
queries ask for 'nova-metadata'. There's also a check that the name doesn't
already start with 'nova-', in case someone decides to prepend 'nova-' to the
NAME= in the wsgi file. That might be a little overkill, but it is a safeguard
nonetheless.

Change-Id: I58cf9a0115a98c78e5d2fb57c41c13ba6fac0fad
Closes-bug: 1715463
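
A minimal standalone sketch of the naming rule described above (not the exact
nova patch; the helper name is made up):

def service_binary(name):
    # guard against a NAME= that already carries the 'nova-' prefix
    if name.startswith('nova-'):
        return name
    return 'nova-' + name

assert service_binary('metadata') == 'nova-metadata'
assert service_binary('nova-metadata') == 'nova-metadata'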


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715463

Title:
  binary/name gets confused under upgrades of osapi_compute and metadata
  when using wsgi files

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Before an upgrade, we have these type of entries in the db.

  MariaDB [nova]> SELECT id, host, `binary`, deleted, version FROM services;
  ++--++-+-+
  | id | host | binary | deleted | version |
  ++--++-+-+
  |  5 | r1-n-os-api  | nova-osapi_compute | 0   |  16 |
  | 21 | r1-n-m-api   | nova-metadata  | 0   |  16 |

  The wsgi files we run basically boil down to something like

NAME=metadata
return wsgi_app.init_application(NAME)

  In the wsgi_app.py we see this function

service_ref = objects.Service.get_by_host_and_binary(ctxt, host,
  name)

  Which results in a really big query, which again comes down to

SELECT host, `binary` FROM services
  WHERE host = 'r1-n-m-api' AND `binary` = 'metadata'

  No results. service_ref is set to None. Carry on.

if service_ref:
  #Nope.
else:
  try:
...
service_obj.host = host
service_obj.binary = 'nova-%s' % name
service_obj.create()

  Which results in an INSERT statement, something like this:

INSERT INTO services(host, `binary`, report_count, disabled, deleted, 
version)
  VALUES ('r1-n-m-api', 'nova-metadata', 0, 0, 0, 22)

ERROR 1062 (23000): Duplicate entry 'r1-n-m-api-nova-metadata-0' for
  key 'uniq_services0host0binary0deleted'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715463/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1724573] [NEW] When using resume_guests_state_on_host_boot encrypted volumes are directly attached to instances after a host reboot

2017-10-18 Thread Lee Yarwood
Public bug reported:

Description
===

When using resume_guests_state_on_host_boot, encrypted volumes are
attached directly to instances after a host reboot. These volumes should
instead be decrypted by the os-brick encryptors, which provide libvirt with
decrypted dm devices for use by the instance/domain.

This is due to the following encryptor.attach_volume call being skipped
in _hard_reboot, where reboot=True, as it is assumed the dm devices
are already present on the host:

    def _create_domain_and_network(self, context, xml, instance, network_info,
                                   block_device_info=None,
                                   power_on=True, reboot=False,
                                   vifs_already_plugged=False,
                                   post_xml_callback=None,
                                   destroy_disks_on_failure=False):
    [..]
                if (not reboot and 'data' in connection_info and
                        'volume_id' in connection_info['data']):
                    volume_id = connection_info['data']['volume_id']
                    encryption = encryptors.get_encryption_metadata(
                        context, self._volume_api, volume_id, connection_info)

                    if encryption:
                        encryptor = self._get_volume_encryptor(connection_info,
                                                               encryption)
                        encryptor.attach_volume(context, **encryption)
Steps to reproduce
==

- Ensure resume_guests_state_on_host_boot is set to True within the
n-cpu config:

$ grep resume_guests_state_on_host_boot /etc/nova/nova-cpu.conf 
resume_guests_state_on_host_boot = True

- Create an instance with an attached encrypted volume:

$ cinder type-create LUKS
$ cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512   
--control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
$ cinder create --display-name 'encrypted volume' --volume-type LUKS 1
$ nova boot --image cirros-0.3.5-x86_64-disk --flavor 1 test
$ nova volume-attach c762ef8d-13ab-4aee-bd20-c6a002bdd172 
3f2cfdf2-11d7-4ac7-883a-76217136f751

- Before continuing note that the instance is connected to the decrypted
dm device:

$ sudo virsh domblklist c762ef8d-13ab-4aee-bd20-c6a002bdd172
Target     Source
--------------------------------------------------------------------------------
vda        /opt/stack/data/nova/instances/c762ef8d-13ab-4aee-bd20-c6a002bdd172/disk
vdb        /dev/disk/by-id/scsi-360014054c6bbc8645494397ad372e0e6

$ ll /dev/disk/by-id/scsi-360014054c6bbc8645494397ad372e0e6
lrwxrwxrwx. 1 root root 56 Oct 18 08:28 
/dev/disk/by-id/scsi-360014054c6bbc8645494397ad372e0e6 -> 
/dev/mapper/crypt-scsi-360014054c6bbc8645494397ad372e0e6

- Restart the n-cpu host _or_ fake a host reset by stopping the n-cpu
service, destroying the domain, removing the decrypted dm device,
unlinking the volume path before finally restarting n-cpu:

$ sudo systemctl stop devstack@n-cpu
$ sudo virsh destroy c762ef8d-13ab-4aee-bd20-c6a002bdd172
$ sudo cryptsetup luksClose 
/dev/disk/by-id/scsi-360014054c6bbc8645494397ad372e0e6
$ sudo unlink /dev/disk/by-id/scsi-360014054c6bbc8645494397ad372e0e6
$ sudo systemctl start devstack@n-cpu

- The instance is restarted but now points at the original encrypted
block device:

$ sudo virsh domblklist c762ef8d-13ab-4aee-bd20-c6a002bdd172
Target     Source
--------------------------------------------------------------------------------
vda        /opt/stack/data/nova/instances/c762ef8d-13ab-4aee-bd20-c6a002bdd172/disk
vdb        /dev/disk/by-id/scsi-360014054c6bbc8645494397ad372e0e6

$ ll /dev/disk/by-id/scsi-360014054c6bbc8645494397ad372e0e6
lrwxrwxrwx. 1 root root 9 Oct 18 08:32 
/dev/disk/by-id/scsi-360014054c6bbc8645494397ad372e0e6 -> ../../sde

- Additional stop and start requests will not correct this:

$ nova stop c762ef8d-13ab-4aee-bd20-c6a002bdd172
$ nova start c762ef8d-13ab-4aee-bd20-c6a002bdd172

$ sudo virsh domblklist c762ef8d-13ab-4aee-bd20-c6a002bdd172
Target     Source
--------------------------------------------------------------------------------
vda        /opt/stack/data/nova/instances/c762ef8d-13ab-4aee-bd20-c6a002bdd172/disk
vdb        /dev/disk/by-id/scsi-360014054c6bbc8645494397ad372e0e6

$ ll /dev/disk/by-id/scsi-360014054c6bbc8645494397ad372e0e6
lrwxrwxrwx. 1 root root 9 Oct 18 08:32 
/dev/disk/by-id/scsi-360014054c6bbc8645494397ad372e0e6 -> ../../sde
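
To confirm which source each disk of the restarted domain points at, a hedged
diagnostic sketch (it assumes the libvirt-python bindings and local access to
qemu:///system; the UUID is the one used in this reproduction):

import libvirt
from xml.etree import ElementTree

conn = libvirt.open('qemu:///system')
dom = conn.lookupByUUIDString('c762ef8d-13ab-4aee-bd20-c6a002bdd172')
root = ElementTree.fromstring(dom.XMLDesc(0))
for disk in root.findall('./devices/disk'):
    target = disk.find('target').get('dev')
    source = disk.find('source')
    # in the failure above, vdb shows the raw block device instead of the
    # decrypted /dev/mapper/crypt-* path
    print(target, source.attrib if source is not None else None)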

Expected result
===
The decrypted volume is attached to the instance once it is restarted.

Actual result
=
The encrypted volume is attached to the instance once it is restarted.

[Yahoo-eng-team] [Bug 1710141] Re: Continual warnings in n-cpu logs about being unable to delete inventory for an ironic node with an instance on it

2017-10-18 Thread Matt Riedemann
** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1710141

Title:
  Continual warnings in n-cpu logs about being unable to delete
  inventory for an ironic node with an instance on it

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  New

Bug description:
  Seen here:

  http://logs.openstack.org/54/487954/12/check/gate-tempest-dsvm-ironic-
  ipa-wholedisk-bios-agent_ipmitool-tinyipa-ubuntu-xenial-
  nv/041c03a/logs/screen-n-cpu.txt.gz#_Aug_09_19_31_21_450705

  Aug 09 19:31:21.450705 ubuntu-xenial-internap-mtl01-10351013 nova-
  compute[19132]: WARNING nova.scheduler.client.report [None req-
  9db22a6d-e88a-42b0-879e-8fe523dcc664 None None] [req-
  2eead243-5e63-4dd0-a208-4ceed95478ff] We cannot delete inventory
  'VCPU, MEMORY_MB, DISK_GB' for resource provider 38b274b2-2e37-4c23
  -ad6f-d86c1f0a0e3f because the inventory is in use.

  As soon as an ironic node has an instance built on it, the node state
  is ACTIVE which means that this method returns True:

  
https://github.com/openstack/nova/blob/c2d33c3271370358d48553233b41bf9119d834fb/nova/virt/ironic/driver.py#L176

  Saying the node is unavailable, because it's wholly consumed I guess.

  That's used here:

  
https://github.com/openstack/nova/blob/c2d33c3271370358d48553233b41bf9119d834fb/nova/virt/ironic/driver.py#L324

  And that's checked here when reporting inventory to the resource
  tracker:

  
https://github.com/openstack/nova/blob/c2d33c3271370358d48553233b41bf9119d834fb/nova/virt/ironic/driver.py#L741

  Which then tries to delete the inventory for the node resource
  provider in placement, which fails because it's already got an
  instance running on it that is consuming inventory:

  http://logs.openstack.org/54/487954/12/check/gate-tempest-dsvm-ironic-
  ipa-wholedisk-bios-agent_ipmitool-tinyipa-ubuntu-xenial-
  nv/041c03a/logs/screen-n-cpu.txt.gz#_Aug_09_19_31_21_450705

  Aug 09 19:31:21.391146 ubuntu-xenial-internap-mtl01-10351013 
nova-compute[19132]: INFO nova.scheduler.client.report [None 
req-9db22a6d-e88a-42b0-879e-8fe523dcc664 None None] Compute node 
38b274b2-2e37-4c23-ad6f-d86c1f0a0e3f reported no inventory but previous 
inventory was detected. Deleting existing inventory records.
  Aug 09 19:31:21.450705 ubuntu-xenial-internap-mtl01-10351013 
nova-compute[19132]: WARNING nova.scheduler.client.report [None 
req-9db22a6d-e88a-42b0-879e-8fe523dcc664 None None] 
[req-2eead243-5e63-4dd0-a208-4ceed95478ff] We cannot delete inventory 'VCPU, 
MEMORY_MB, DISK_GB' for resource provider 38b274b2-2e37-4c23-ad6f-d86c1f0a0e3f 
because the inventory is in use.

  This is also bad because if the node was updated with a
  resource_class, that resource class won't be automatically created in
  Placement here:

  
https://github.com/openstack/nova/blob/c2d33c3271370358d48553233b41bf9119d834fb/nova/scheduler/client/report.py#L789

  Because the driver didn't report it in the get_inventory method.

  And that has an impact on this code to migrate
  instance.flavor.extra_specs to have custom resource class overrides
  from ironic nodes that now have a resource_class set:

  https://review.openstack.org/#/c/487954/

  So we've got a bit of a chicken and egg problem here.

  Manually testing the ironic flavor migration code hits this problem,
  as seen here:

  http://paste.openstack.org/show/618160/
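
  A condensed, hedged sketch of the chain described above (illustrative only,
  not the actual driver code; the resource figures are made up):

  def node_resources_unavailable(node):
      # an ACTIVE node is treated as unavailable / wholly consumed
      return node['provision_state'] == 'active'

  def get_inventory(node):
      if node_resources_unavailable(node):
          return {}  # "no inventory", even though an instance still consumes it
      return {'VCPU': 24, 'MEMORY_MB': 131072, 'DISK_GB': 1024}

  node = {'provision_state': 'active'}
  if not get_inventory(node):
      # the report client then tries to delete inventory that is still
      # allocated, producing the warning quoted above
      print('Deleting existing inventory records...')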

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1710141/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1724589] [NEW] Unable to transition to Ironic Node Resource Classes in Pike

2017-10-18 Thread John Garbutt
Public bug reported:

In Pike we ask people to:

* Update Ironic Node with a Resource Class
* Update flavors to request the new Resource Class (and not request VCPU, RAM, 
DISK), using the docs: 
https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html#scheduling-based-on-resource-classes

Consider this case:

* some old instances are running from before the updates
* some new instances are created after the updates

In placement:

* all inventory is correct, new resource class and legacy resource classes are 
both present
* old instance allocations: only request

In nova db:

* old instances and new instances correctly request the new resource class in 
their flavor
* new instances also include the anti-request for VCPU, DISK and RAM

Now this is the flow that shows the problem:

* get list of candidate allocations
* this includes nodes that already have instances on (they only claim part of 
the inventory, but the new instance is only requesting the bit of the inventory 
the old instance isn't using)
* boom, scheduling new instances fails after you hit the retry count, unless 
you got lucky and found a free slot by accident

Possible reason for this:

* Pike no longer updates instance allocations; if we updated the
allocations of old instances to request the new custom resource class
allocations, we would fix the above issue.

Possible work around:

* in the new flavor, keep requesting VCPU, RAM and DISK resources for
Pike, and fix that up in Queens? (A sketch of the two flavor styles follows
below.)
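
A hedged illustration (the extra specs use the documented resources:* override
mechanism; the custom resource class name is made up) of the two flavor styles
discussed above:

new_style_extra_specs = {
    'resources:CUSTOM_BAREMETAL_GOLD': '1',
    # the "anti-request": explicitly zero out the legacy resource classes
    'resources:VCPU': '0',
    'resources:MEMORY_MB': '0',
    'resources:DISK_GB': '0',
}

pike_workaround_extra_specs = {
    # keep the custom class but do NOT zero out VCPU/RAM/DISK, so scheduling
    # still accounts for the legacy inventory (assumption based on the
    # workaround wording above)
    'resources:CUSTOM_BAREMETAL_GOLD': '1',
}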

** Affects: nova
 Importance: High
 Status: New


** Tags: ironic placement

** Tags added: ironic placement

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1724589

Title:
  Unable to transition to Ironic Node Resource Classes in Pike

Status in OpenStack Compute (nova):
  New

Bug description:
  In Pike we ask people to:

  * Update Ironic Node with a Resource Class
  * Update flavors to request the new Resource Class (and not request VCPU, 
RAM, DISK), using the docs: 
https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html#scheduling-based-on-resource-classes

  Consider this case:

  * some old instances are running from before the updates
  * some new instances are created after the updates

  In placement:

  * all inventory is correct, new resource class and legacy resource classes 
are both present
  * old instance allocations: only request

  In nova db:

  * old instances and new instances correctly request the new resource class in 
their flavor
  * new instances also include the anti-request for VCPU, DISK and RAM

  Now this is the flow that shows the problem:

  * get list of candidate allocations
  * this includes nodes that already have instances on (they only claim part of 
the inventory, but the new instance is only requesting the bit of the inventory 
the old instance isn't using)
  * boom, scheduling new instances fails after you hit the retry count, unless 
you got lucky and found a free slot by accident

  Possible reason for this:

  * Pike no longer updates instance allocations; if we updated the
  allocations of old instances to request the new custom resource class
  allocations, we would fix the above issue.

  Possible work around:

  * in the new flavor, keep requesting VCPU, RAM and DISK resources for
  Pike, and fix that up in Queens?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1724589/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458498] Re: Authenticated URLs not accepted when Launching stacks

2017-10-18 Thread Gary W. Smith
Per RFC 7230 (section A.2), username and password are disallowed in
http/https URIs due to security issues.

** Changed in: horizon
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1458498

Title:
  Authenticated URLs not accepted when Launching stacks

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  When trying to launch a new heat stack from horizon using URL input, the 
input validation seemingly only accepts a standard URL (e.g. 
https://domain.com/path/to/template.yaml).
  However, if a URL contains login credentials (e.g. 
https://user:passw...@domain.com/path/to/template.yaml), the input validation 
throws "Enter a valid URL". The URL is valid and can be curl'd etc, and while 
passing credentials like that may not be the safest, in an isolated network it 
is sometimes done.
  Horizon shouldn't prevent these types of URLs being entered.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1458498/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381749] Re: Can not create usable image in glance for vmdk images

2017-10-18 Thread Gary W. Smith
In pike (and earlier) these options are available on the Metadata tab on
the Create Image dialog.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1381749

Title:
  Can not create usable image in glance for vmdk images

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  This is because the image needs to have two options: --property 
vmware_disktype="sparse" --property vmware_adaptertype="ide"
  Horizon does not provide the capability to set those options.

  However, it can be created using glance CLI:
  glance image-create --name x --is-public=True --container-format=bare 
--disk-format=vmdk --property --image-location 
http://172.19.11.252/SCP_VM_images/fedora-amd64.vmdk

  The UI provided by https://blueprints.launchpad.net/horizon/+spec
  /manage-image-custom-properties helps, but the scenario is still
  perceived to be complex. The expected behavior is a UI that lets the
  user choose VMware properties, e.g. vmware_disktype="sparse" and
  vmware_adaptertype="ide", when creating an image in vmdk format.
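
  A hedged sketch (openstacksdk; the cloud name and local file path are
  assumptions) of setting those two properties when creating the image from
  code rather than the CLI:

  import openstack

  conn = openstack.connect(cloud='mycloud')       # assumed clouds.yaml entry
  image = conn.create_image(
      'fedora-amd64',
      filename='fedora-amd64.vmdk',               # assumed local image file
      disk_format='vmdk',
      container_format='bare',
      meta={'vmware_disktype': 'sparse',
            'vmware_adaptertype': 'ide'},
      wait=True,
  )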

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1381749/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1579982] Re: Go to admin info error

2017-10-18 Thread Gary W. Smith
As indicated above, the python-cinderclient was updated.  It does not
appear to have been a horizon problem.

** Changed in: horizon
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1579982

Title:
  Go to admin info error

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in python-cinderclient:
  Fix Released

Bug description:
  I am using the OpenStack Mitaka (M) release; when I go to the /admin/info/
  path, it shows 'TemplateSyntaxError at /admin/info/'.

  The browser shows:

  TemplateSyntaxError at /admin/info/
  service
  Request Method:   GET
  Request URL:  http://192.168.22.1:/admin/info/
  Django Version:   1.8.7
  Exception Type:   TemplateSyntaxError
  Exception Value:  
  service
  Exception Location:   
/usr/lib/python2.7/site-packages/cinderclient/openstack/common/apiclient/base.py
 in __getattr__, line 505
  Python Executable:/usr/bin/python2
  Python Version:   2.7.5
  Python Path:  
  ['/mnt/horizon_new',
   '/usr/lib64/python27.zip',
   '/usr/lib64/python2.7',
   '/usr/lib64/python2.7/plat-linux2',
   '/usr/lib64/python2.7/lib-tk',
   '/usr/lib64/python2.7/lib-old',
   '/usr/lib64/python2.7/lib-dynload',
   '/usr/lib64/python2.7/site-packages',
   '/usr/lib64/python2.7/site-packages/gtk-2.0',
   '/usr/lib/python2.7/site-packages',
   '/mnt/horizon_new/openstack_dashboard']

  Error during template rendering

  
  The console shows:

  Error while rendering table rows.
  Traceback (most recent call last):
File "/mnt/horizon_new/horizon/tables/base.py", line 1781, in get_rows
  row = self._meta.row_class(self, datum)
File "/mnt/horizon_new/horizon/tables/base.py", line 534, in __init__
  self.load_cells()
File "/mnt/horizon_new/horizon/tables/base.py", line 560, in load_cells
  cell = table._meta.cell_class(datum, column, self)
File "/mnt/horizon_new/horizon/tables/base.py", line 666, in __init__
  self.data = self.get_data(datum, column, row)
File "/mnt/horizon_new/horizon/tables/base.py", line 710, in get_data
  data = column.get_data(datum)
File "/mnt/horizon_new/horizon/tables/base.py", line 381, in get_data
  data = self.get_raw_data(datum)
File "/mnt/horizon_new/horizon/tables/base.py", line 363, in get_raw_data
  "%(obj)s.") % {'attr': self.transform, 'obj': datum}
File "/usr/lib/python2.7/site-packages/django/utils/functional.py", line 
178, in __mod__
  return six.text_type(self) % rhs
File "/usr/lib/python2.7/site-packages/cinderclient/v2/services.py", line 
25, in __repr__
  return "" % self.service
File 
"/usr/lib/python2.7/site-packages/cinderclient/openstack/common/apiclient/base.py",
 line 505, in __getattr__
  raise AttributeError(k)
  AttributeError: service
  Internal Server Error: /admin/info/
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 
132, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/mnt/horizon_new/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
File "/mnt/horizon_new/horizon/decorators.py", line 84, in dec
  return view_func(request, *args, **kwargs)
File "/mnt/horizon_new/horizon/decorators.py", line 52, in dec
  return view_func(request, *args, **kwargs)
File "/mnt/horizon_new/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 
71, in view
  return self.dispatch(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 
89, in dispatch
  return handler(request, *args, **kwargs)
File "/mnt/horizon_new/horizon/tabs/views.py", line 147, in get
  return self.handle_tabbed_response(context["tab_group"], context)
File "/mnt/horizon_new/horizon/tabs/views.py", line 68, in 
handle_tabbed_response
  return self.render_to_response(context)
File "/mnt/horizon_new/horizon/tabs/views.py", line 81, in 
render_to_response
  response.render()
File "/usr/lib/python2.7/site-packages/django/template/response.py", line 
158, in render
  self.content = self.rendered_content
File "/usr/lib/python2.7/site-packages/django/template/response.py", line 
135, in rendered_content
  content = template.render(context, self._request)
File "/usr/lib/python2.7/site-packages/django/template/backends/django.py", 
line 74, in render
  return self.template.render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 210, 
in render
  return self._render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 202, 
in _render
  return self.nodelist.render(context)

[Yahoo-eng-team] [Bug 1507031] Re: Add and then delete a user, results in unexpected error on the Openstack UI

2017-10-18 Thread Gary W. Smith
I am unable to reproduce this with Pike; there is no redirection or
error message, and the session continues to work properly.

** Changed in: horizon
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1507031

Title:
  Add and then delete a user, results in unexpected error on the
  Openstack UI

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Here is how to reproduce it:

  1. Install an Ubuntu Openstack on a VM.

  2. Login to the horizon for that VM.

  3. Add a new user role using the following command using CLI:
  keystone user-role-add --user  --tenant 
 --role <MEMBER_ROLE>

  4. Remove the user role, using the following command:
  keystone user-role-remove --user  --tenant 
  --role <SAME_MEMBER_ROLE>

  5. Refresh the horizon, UI redirects to an error message page, with the 
following error message:
  "Something went wrong! An unexpected error has occurred. Try refreshing the 
page. If that doesn't help, contact your local administrator."

  
  A screenshot of the UI is attached.

  Please note that refreshing the page does not resolve the issue.
  However, either clearing the browser's cookies/history for that
  session or opening the horizon on an "Incognito" mode may resolve the
  issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1507031/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1724444] Re: networking-sfc port pair creation fails in create_port_pair_postcommit on kolla deployment

2017-10-18 Thread Bernard Cafarelli
Thanks, reassigning to networking-sfc for further investigation, as both
conf and database look good

** Project changed: neutron => networking-sfc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1724444

Title:
  networking-sfc port pair creation fails in create_port_pair_postcommit
  on kolla deployment

Status in networking-sfc:
  New

Bug description:
  I have followed the blog [1] to configure neutron-sfc using the kolla
  deployment tool. After deployment I am able to successfully create an
  instance without any issue, but while creating a neutron SFC port pair
  following [2] I am getting the following error:

  ~~~
  [root@controller-1 ~]# openstack server list
  
+--+---++---++-+
  | ID   | Name  | Status | Networks
  | Image  | Flavor  |
  
+--+---++---++-+
  | 1a59573b-b557-488f-917c-536a2fd21f35 | FW| ACTIVE | demo-net=10.0.0.14, 
10.0.0.12 | cirros | m1.tiny |
  | c540bcc3-ce27-46e9-85e3-cc4d124194ae | demo1 | ACTIVE | demo-net=10.0.0.6   
  | cirros | m1.tiny |
  
+--+---++---++-+
  [root@controller-1 ~]#

  [root@controller-1 ~]# openstack port list
  
+--+--+---+---++
  | ID   | Name | MAC Address   | Fixed IP 
Addresses| Status |
  
+--+--+---+---++
  | 0a3e2ac8-4ec7-4238-ad08-36528eed6743 | P0   | fa:16:3e:24:03:87 | 
ip_address='10.0.0.10', subnet_id='08bbc3a6-4fe0-4ca2-bc03-1425a69b53f6'  | 
DOWN   |
  | 2ce17a23-e80c-453e-81a1-28f49043eef5 |  | fa:16:3e:6d:81:d8 | 
ip_address='10.0.0.1', subnet_id='08bbc3a6-4fe0-4ca2-bc03-1425a69b53f6'   | 
DOWN   |
  | 49ba9fd8-5840-4ab3-b9f7-96fc59225b37 | P1   | fa:16:3e:19:35:45 | 
ip_address='10.0.0.14', subnet_id='08bbc3a6-4fe0-4ca2-bc03-1425a69b53f6'  | 
ACTIVE |
  | 8f83b61d-3057-4d1c-a075-340271f845ee | P2   | fa:16:3e:44:20:29 | 
ip_address='10.0.0.12', subnet_id='08bbc3a6-4fe0-4ca2-bc03-1425a69b53f6'  | 
ACTIVE |
  | a20e66c1-9c95-4c45-8008-8eadc5962cce |  | fa:16:3e:e0:40:dc | 
ip_address='10.0.0.6', subnet_id='08bbc3a6-4fe0-4ca2-bc03-1425a69b53f6'   | 
ACTIVE |
  | d0dc19b2-1b63-4c92-a2d2-a2f26af06456 |  | fa:16:3e:37:8d:7f | 
ip_address='10.0.2.155', subnet_id='72bc8dea-2364-4138-a284-80ffa564362f' | 
DOWN   |
  | fcb49641-6b32-4064-a5e1-4aa65243352c |  | fa:16:3e:c3:2f:50 | 
ip_address='10.0.0.2', subnet_id='08bbc3a6-4fe0-4ca2-bc03-1425a69b53f6'   | 
ACTIVE |
  
+--+--+---+---++

  [root@controller-1 ~]# nova list
  
+--+---+++-+---+
  | ID   | Name  | Status | Task State | Power 
State | Networks  |
  
+--+---+++-+---+
  | 1a59573b-b557-488f-917c-536a2fd21f35 | FW| ACTIVE | -  | 
Running | demo-net=10.0.0.14, 10.0.0.12 |
  | c540bcc3-ce27-46e9-85e3-cc4d124194ae | demo1 | ACTIVE | -  | 
Running | demo-net=10.0.0.6 |
  
+--+---+++-+---+
  ~~~

  While creating the SFC port pair:

  ~~~
  [root@controller-1 ~]# openstack sfc port pair create --ingress P1 --egress 
P2 PPAIR
  create_port_pair_postcommit failed.
  Neutron server returns request_ids: 
['req-5e062f5c-2860-422c-964d-af191f6b4c4d']
  ~~~

  
  The following call trace is reported in the neutron-server container.

  ~~~
  2017-10-18 04:13:36.505 23 ERROR networking_sfc.services.sfc.plugin 
[req-5e062f5c-2860-422c-964d-af191f6b4c4d 0efeac25b8a845c799800fa87d850024 
766c772012ff4c4ca6e563accd51f1ea - default default] Cr
  eate port pair failed, deleting port_pair 
'e66b3968-7d8a-4fd7-848b-563fc5a2e248': SfcDriverError: 
create_port_pair_postcommit failed.
  2017-10-18 04:13:36.721 23 ERROR neutron.api.v2.resource 
[req-5e062f5c-2860-422c-964d-af191f6b4c4d 0efeac25b8a845c799800fa87d850024 
766c772012ff4c4ca6e563accd51f1ea - default default] create failed
  : No details.: SfcDriverError: create_port_pair_postcommit failed.
  2017-10-18 04:13:36.721 23 ERROR neutr

[Yahoo-eng-team] [Bug 1724613] [NEW] AllocationCandidates.get_by_filters ignores shared RPs when the RC exists in both places

2017-10-18 Thread Eric Fried
Public bug reported:

When both the compute node resource provider and the shared resource
provider have inventory in the same resource class,
AllocationCandidates.get_by_filters will not return an AllocationRequest
including the shared resource provider.

Example:

 cnrp { VCPU: 24,
MEMORY_MB: 2048,
DISK_GB: 16 }
 ssrp { DISK_GB: 32 }

 AllocationCandidates.get_by_filters(
 resources={ VCPU: 1,
 MEMORY_MB: 512,
 DISK_GB: 2 } )

Expected:

 allocation_requests: [
 { cnrp: { VCPU: 1,
   MEMORY_MB: 512,
   DISK_GB: 2 } },
 { cnrp: { VCPU: 1,
   MEMORY_MB: 512 }
   ssrp: { DISK_GB: 2 } },
 ]

Actual:

 allocation_requests: [
 { cnrp: { VCPU: 1,
   MEMORY_MB: 512,
   DISK_GB: 2 } }
 ]

I will post a review shortly that demonstrates this.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1724613

Title:
  AllocationCandidates.get_by_filters ignores shared RPs when the RC
  exists in both places

Status in OpenStack Compute (nova):
  New

Bug description:
  When both the compute node resource provider and the shared resource
  provider have inventory in the same resource class,
  AllocationCandidates.get_by_filters will not return an
  AllocationRequest including the shared resource provider.

  Example:

   cnrp { VCPU: 24,
  MEMORY_MB: 2048,
  DISK_GB: 16 }
   ssrp { DISK_GB: 32 }

   AllocationCandidates.get_by_filters(
   resources={ VCPU: 1,
   MEMORY_MB: 512,
   DISK_GB: 2 } )

  Expected:

   allocation_requests: [
   { cnrp: { VCPU: 1,
 MEMORY_MB: 512,
 DISK_GB: 2 } },
   { cnrp: { VCPU: 1,
 MEMORY_MB: 512 }
 ssrp: { DISK_GB: 2 } },
   ]

  Actual:

   allocation_requests: [
   { cnrp: { VCPU: 1,
 MEMORY_MB: 512,
 DISK_GB: 2 } }
   ]

  I will post a review shortly that demonstrates this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1724613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1724621] [NEW] nova-manage cell_v2 verify_instance returns a valid instance mapping even after the instance is deleted

2017-10-18 Thread Surya Seetharaman
Public bug reported:

Although nova-manage cell_v2 verify_instance is used to check if the
provided instance is correctly mapped to a cell or not, this should not
be returning a valid mapping message if the instance itself is deleted.
It should return an error message saying 'The instance does not exist'.

Steps to reproduce :

1. Create an instance :

-> nova boot --image 831bb8a0-9305-4cd7-b985-cbdadfb5d3db --flavor m1.nano test
-> nova list
+--++++-+-+
| ID   | Name   | Status | Task State | Power 
State | Networks|
+--++++-+-+
| aec6eb34-6aaf-4883-8285-348d40fdac87 | test   | ACTIVE | -  | Running 
| public=2001:db8::4, 172.24.4.9  |
+--++++-+-+


2. Delete the instance :

-> nova delete test
Request to delete server test has been accepted.
-> nova list
+--++++-+-+
| ID   | Name   | Status | Task State | Power 
State | Networks|
+--++++-+-+
+--++++-+-+


3. Verify Instance :

-> nova-manage cell_v2 verify_instance --uuid 
aec6eb34-6aaf-4883-8285-348d40fdac87
Instance aec6eb34-6aaf-4883-8285-348d40fdac87 is in cell: cell5 
(c5ccba5d-1a45-4739-a5dd-d665a1b19301)

Basically the message that we get is misleading for a deleted instance.
This is because verify_instance queries the instance_mappings table
which maintains a mapping of the deleted instances as well.
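
A hedged sketch (not the actual nova-manage code; the dicts are simplified
stand-ins for the API and cell databases) of the behaviour this report asks
for:

def verify_instance(uuid, instance_mappings, live_instances):
    mapping = instance_mappings.get(uuid)
    if mapping is None:
        return 'Instance %s is not mapped to any cell' % uuid
    if uuid not in live_instances:
        # the mapping row can outlive the instance, so check the instance
        # itself before reporting a valid mapping
        return 'The instance %s does not exist' % uuid
    return 'Instance %s is in cell: %s' % (uuid, mapping)

mappings = {'aec6eb34-6aaf-4883-8285-348d40fdac87': 'cell5'}
print(verify_instance('aec6eb34-6aaf-4883-8285-348d40fdac87', mappings, set()))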

** Affects: nova
 Importance: Undecided
 Assignee: Surya Seetharaman (tssurya)
 Status: New


** Tags: cells nova-manage

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1724621

Title:
  nova-manage cell_v2 verify_instance returns a valid instance mapping
  even after the instance is deleted

Status in OpenStack Compute (nova):
  New

Bug description:
  Although nova-manage cell_v2 verify_instance is used to check if the
  provided instance is correctly mapped to a cell or not, this should
  not be returning a valid mapping message if the instance itself is
  deleted. It should return an error message saying 'The instance does
  not exist'.

  Steps to reproduce :

  1. Create an instance :

  -> nova boot --image 831bb8a0-9305-4cd7-b985-cbdadfb5d3db --flavor m1.nano 
test
  -> nova list
  
+--++++-+-+
  | ID   | Name   | Status | Task State | Power 
State | Networks|
  
+--++++-+-+
  | aec6eb34-6aaf-4883-8285-348d40fdac87 | test   | ACTIVE | -  | 
Running | public=2001:db8::4, 172.24.4.9  |
  
+--++++-+-+

  
  2. Delete the instance :

  -> nova delete test
  Request to delete server test has been accepted.
  -> nova list
  
+--++++-+-+
  | ID   | Name   | Status | Task State | Power 
State | Networks|
  
+--++++-+-+
  
+--++++-+-+

  
  3. Verify Instance :

  -> nova-manage cell_v2 verify_instance --uuid 
aec6eb34-6aaf-4883-8285-348d40fdac87
  Instance aec6eb34-6aaf-4883-8285-348d40fdac87 is in cell: cell5 
(c5ccba5d-1a45-4739-a5dd-d665a1b19301)

  Basically the message that we get is misleading for a deleted
  instance. This is because verify_instance queries the
  instance_mappings table which maintains a mapping of the deleted
  instances as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1724621/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1724628] [NEW] nova.virt.libvirt.driver failed to attach volume (RBD backed Cinder)

2017-10-18 Thread Jaime Guzman
Public bug reported:

Hi,

I'm using Ubuntu OpenStack Ocata deployed with Ubuntu Landscape. I have
installed it three times and hit the same problem with each fresh installation.
Ceph is used for block and object storage.

When I try to attach a volume to an instance, it is not attached. Below
is the nova-compute log from the machine running the instance:

2017-10-18 16:52:44.159 231733 ERROR nova.virt.libvirt.driver 
[req-5153c34c-8086-4a42-ac27-df6cc711a3bb 57646279ee6f40eb98ea3ee233d66875 
d0aac06e67924d879543c2c3a5eb8a73 - - -] [instance: 
17798414-d179-4195-9200-9b92790c5729] Failed to attach volume at mountpoint: 
/dev/vdb
2017-10-18 16:52:44.159 231733 ERROR nova.virt.libvirt.driver [instance: 
17798414-d179-4195-9200-9b92790c5729] Traceback (most recent call last):
2017-10-18 16:52:44.159 231733 ERROR nova.virt.libvirt.driver [instance: 
17798414-d179-4195-9200-9b92790c5729]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1232, in 
attach_volume
2017-10-18 16:52:44.159 231733 ERROR nova.virt.libvirt.driver [instance: 
17798414-d179-4195-9200-9b92790c5729] guest.attach_device(conf, 
persistent=True, live=live)
2017-10-18 16:52:44.159 231733 ERROR nova.virt.libvirt.driver [instance: 
17798414-d179-4195-9200-9b92790c5729]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/guest.py", line 309, in 
attach_device
2017-10-18 16:52:44.159 231733 ERROR nova.virt.libvirt.driver [instance: 
17798414-d179-4195-9200-9b92790c5729] 
self._domain.attachDeviceFlags(device_xml, flags=flags)
2017-10-18 16:52:44.159 231733 ERROR nova.virt.libvirt.driver [instance: 
17798414-d179-4195-9200-9b92790c5729]   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
2017-10-18 16:52:44.159 231733 ERROR nova.virt.libvirt.driver [instance: 
17798414-d179-4195-9200-9b92790c5729] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
2017-10-18 16:52:44.159 231733 ERROR nova.virt.libvirt.driver [instance: 
17798414-d179-4195-9200-9b92790c5729]   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call
2017-10-18 16:52:44.159 231733 ERROR nova.virt.libvirt.driver [instance: 
17798414-d179-4195-9200-9b92790c5729] rv = execute(f, *args, **kwargs)
2017-10-18 16:52:44.159 231733 ERROR nova.virt.libvirt.driver [instance: 
17798414-d179-4195-9200-9b92790c5729]   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
2017-10-18 16:52:44.159 231733 ERROR nova.virt.libvirt.driver [instance: 
17798414-d179-4195-9200-9b92790c5729] six.reraise(c, e, tb)
2017-10-18 16:52:44.159 231733 ERROR nova.virt.libvirt.driver [instance: 
17798414-d179-4195-9200-9b92790c5729]   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
2017-10-18 16:52:44.159 231733 ERROR nova.virt.libvirt.driver [instance: 
17798414-d179-4195-9200-9b92790c5729] rv = meth(*args, **kwargs)
2017-10-18 16:52:44.159 231733 ERROR nova.virt.libvirt.driver [instance: 
17798414-d179-4195-9200-9b92790c5729]   File 
"/usr/lib/python2.7/dist-packages/libvirt.py", line 560, in attachDeviceFlags
2017-10-18 16:52:44.159 231733 ERROR nova.virt.libvirt.driver [instance: 
17798414-d179-4195-9200-9b92790c5729] if ret == -1: raise libvirtError 
('virDomainAttachDeviceFlags() failed', dom=self)
2017-10-18 16:52:44.159 231733 ERROR nova.virt.libvirt.driver [instance: 
17798414-d179-4195-9200-9b92790c5729] libvirtError: internal error: unable to 
execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't 
find value 'drive-virtio-disk1'
2017-10-18 16:52:44.159 231733 ERROR nova.virt.libvirt.driver [instance: 
17798414-d179-4195-9200-9b92790c5729]
2017-10-18 16:52:44.166 231733 ERROR nova.virt.block_device 
[req-5153c34c-8086-4a42-ac27-df6cc711a3bb 57646279ee6f40eb98ea3ee233d66875 
d0aac06e67924d879543c2c3a5eb8a73 - - -] [instance: 
17798414-d179-4195-9200-9b92790c5729] Driver failed to attach volume 
33a23bbe-c997-4447-a497-0109773edcf8 at /dev/vdb
2017-10-18 16:52:44.166 231733 ERROR nova.virt.block_device [instance: 
17798414-d179-4195-9200-9b92790c5729] Traceback (most recent call last):
2017-10-18 16:52:44.166 231733 ERROR nova.virt.block_device [instance: 
17798414-d179-4195-9200-9b92790c5729]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 273, in 
attach
2017-10-18 16:52:44.166 231733 ERROR nova.virt.block_device [instance: 
17798414-d179-4195-9200-9b92790c5729] device_type=self['device_type'], 
encryption=encryption)
2017-10-18 16:52:44.166 231733 ERROR nova.virt.block_device [instance: 
17798414-d179-4195-9200-9b92790c5729]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1243, in 
attach_volume
2017-10-18 16:52:44.166 231733 ERROR nova.virt.block_device [instance: 
17798414-d179-4195-9200-9b92790c5729] 
self._disconnect_volume(connection_info, disk_dev)
2017-10-18 16:52:44.166 231733 ERROR nova.virt.block_device [instance: 
17798414-d179-4195-9200-9b92790c5729]   File

[Yahoo-eng-team] [Bug 1724633] [NEW] AllocationCandidates.get_by_filters hits incorrectly when traits are split across the main RP and aggregates

2017-10-18 Thread Eric Fried
Public bug reported:

When requesting multiple resources with multiple traits, placement
doesn't know that a particular trait needs to be associated with a
particular resource.  As currently conceived, it will return allocation
candidates from the main RP plus shared RPs such that all traits are
satisfied. This is bad, particularly when the main RP and shared RPs
provide inventory from the same resource class.

For example, consider a compute node that has local SSD storage and is
associated with a shared storage RP backed by a RAID5 array:

 cnrp { VCPU: 24,
MEMORY_MB: 2048,
DISK_GB: 16,
traits: [HW_CPU_X86_SSE,
 STORAGE_DISK_SSD] }
 ssrp { DISK_GB: 32,
traits: [STORAGE_DISK_RAID5] }

A request for SSD + RAID5 disk should *not* return any results from the
above setup, because there's not actually any disk with both of those
characteristics.

 AllocationCandidates.get_by_filters(
 resources={ VCPU: 1,
 MEMORY_MB: 512,
 DISK_GB: 2 },
 traits= [HW_CPU_X86_SSE,
  STORAGE_DISK_SSD,
  STORAGE_DISK_RAID5])

Expected:

 allocation_requests: []

Actual:

 allocation_requests: [
 { cnrp: { VCPU: 1,
   MEMORY_MB: 512 }
   ssrp: { DISK_GB: 2 } },
 ]

I will post a review shortly with a test case that demonstrates this.
Note, however, that the test will spuriously pass until
https://bugs.launchpad.net/nova/+bug/1724613 is fixed.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1724633

Title:
  AllocationCandidates.get_by_filters hits incorrectly when traits are
  split across the main RP and aggregates

Status in OpenStack Compute (nova):
  New

Bug description:
  When requesting multiple resources with multiple traits, placement
  doesn't know that a particular trait needs to be associated with a
  particular resource.  As currently conceived, it will return
  allocation candidates from the main RP plus shared RPs such that all
  traits are satisfied. This is bad, particularly when the main RP and
  shared RPs provide inventory from the same resource class.

  For example, consider a compute node that has local SSD storage and is
  associated with a shared storage RP backed by a RAID5 array:

   cnrp { VCPU: 24,
  MEMORY_MB: 2048,
  DISK_GB: 16,
  traits: [HW_CPU_X86_SSE,
   STORAGE_DISK_SSD] }
   ssrp { DISK_GB: 32,
  traits: [STORAGE_DISK_RAID5] }

  A request for SSD + RAID5 disk should *not* return any results from
  the above setup, because there's not actually any disk with both of
  those characteristics.

   AllocationCandidates.get_by_filters(
   resources={ VCPU: 1,
   MEMORY_MB: 512,
   DISK_GB: 2 },
   traits= [HW_CPU_X86_SSE,
STORAGE_DISK_SSD,
STORAGE_DISK_RAID5])

  Expected:

   allocation_requests: []

  Actual:

   allocation_requests: [
   { cnrp: { VCPU: 1,
 MEMORY_MB: 512 }
 ssrp: { DISK_GB: 2 } },
   ]

  I will post a review shortly with a test case that demonstrates this.
  Note, however, that the test will spuriously pass until
  https://bugs.launchpad.net/nova/+bug/1724613 is fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1724633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1724645] [NEW] remote_id_attribute config options prevents multiple protocol variations for Federation

2017-10-18 Thread Adam Young
Public bug reported:

In order to activate a protocol for Federation, you need SOME value for
remote_id_attribute.  However, this is set once per protocol in the
config file, not in the federated data.  Thus, if two different SAML
implementations both wanted to use different values for
remote_id_attribute (DN vs CN for example) they could not.
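For illustration, a hedged keystone.conf sketch of the limitation (the
attribute values are placeholders, not taken from the report): the setting
lives in a per-protocol section, so every Identity Provider mapped to the
saml2 protocol has to share it.

  [federation]
  # fallback used when a protocol section does not override it
  remote_id_attribute = Shib-Identity-Provider

  [saml2]
  # applies to ALL IdPs using the saml2 protocol -- one IdP wanting DN and
  # another wanting CN cannot both be accommodated
  remote_id_attribute = DN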

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1724645

Title:
  remote_id_attribute config options prevents multiple protocol
  variations for Federation

Status in OpenStack Identity (keystone):
  New

Bug description:
  In order to activate a protocol for Federation, you need SOME value
  for remote_id_attribute.  However, this is set once per protocol in
  the config file, not in the federated data.  Thus, if two different
  SAML implementations both wanted to use different values for
  remote_id_attribute (DN vs CN for example) they could not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1724645/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1676327] Re: UnicodeDecodeError 'ascii' codec can't decode byte 0xc5 in position

2017-10-18 Thread Oleksiy Molchanov
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1676327

Title:
  UnicodeDecodeError 'ascii' codec can't decode byte 0xc5 in position

Status in Mirantis OpenStack:
  Confirmed
Status in Mirantis OpenStack 10.0.x series:
  Confirmed
Status in Mirantis OpenStack 9.x series:
  Confirmed
Status in OpenStack Compute (nova):
  New

Bug description:
  Detailed bug description:
   We can't launch an instance with a non-ASCII symbol in the name or in the
image description field.
  Steps to reproduce:
   Enable Debug=True on the nova-compute host and restart the nova-compute service.
   Launch an instance with a name such as "iš instance"
  Expected results:
   Instance in active state
  Actual result:
   Instance in ERROR state
  Reproducibility:
   100%
  Workaround:
   Set DEBUG=False
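
  A hedged sketch of that workaround in nova.conf on the compute host (the
  section and option names are the standard oslo.log ones, not quoted from
  the report):

   [DEFAULT]
   debug = False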

  Description of the environment:
  - Operation system: Ubuntu, RHEL
  - Versions of components: MOS 9.0, 9.1

  Additional information:

  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager 
[req-a427a02d-9563-4f6b-8111-e6b5dc15f830 ec0ffcafc20244029d090e7e98aa0461 
9b05738c81444d5e8ba74669c058f53a - - -] [instance: 25073828-c38f-4ec4-a3a7-5
  e9e092806e4] Instance failed to spawn
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4] Traceback (most recent call last):
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2218, in 
_build_resources
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4] yield resources
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2064, in 
_build_and_run_instan
  ce
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4] block_device_info=block_device_info)
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2793, in 
spawn
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4] write_to_disk=True)
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4737, in 
_get_guest_xml
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4] context)
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4603, in 
_get_guest_config
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4] flavor, virt_type, self._host)
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 437, in 
get_config
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4] 'vif': vif, 'virt_type': virt_type})
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4]   File 
"/usr/lib/python2.7/logging/__init__.py", line 1425, in debug
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4] self.logger.debug(msg, *args, 
**kwargs)
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4]   File 
"/usr/lib/python2.7/logging/__init__.py", line 1140, in debug
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4] self._log(DEBUG, msg, args, **kwargs)
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4]   File 
"/usr/lib/python2.7/logging/__init__.py", line 1271, in _log
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4] self.handle(record)
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4]   File 
"/usr/lib/python2.7/logging/__init__.py", line 1281, in handle
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c38f-4ec4-a3a7-5e9e092806e4] self.callHandlers(record)
  2017-03-27 09:03:26.005 2489897 ERROR nova.compute.manager [instance: 
25073828-c

[Yahoo-eng-team] [Bug 1724589] Re: Unable to transition to Ironic Node Resource Classes in Pike

2017-10-18 Thread Matt Riedemann
** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova/pike
   Status: New => Triaged

** Changed in: nova/pike
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1724589

Title:
  Unable to transition to Ironic Node Resource Classes in Pike

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) pike series:
  Triaged

Bug description:
  So the scenario is:

  * upgraded to pike
  * have ironic with multiple flavors
  * attempting to transition to resource class based scheduling, now pike is 
installed

  In Pike we ask people to:

  * Update Ironic Node with a Resource Class
  * Update flavors to request the new Resource Class (and not request VCPU, 
RAM, DISK), using the docs: 
https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html#scheduling-based-on-resource-classes
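
  As a concrete sketch of those two steps (hypothetical names: the node
  resource class "baremetal.gold" and the flavor "bm.gold" are placeholders
  following the linked docs):

   # 1) tag the ironic node with a resource class
   $ openstack baremetal node set --resource-class baremetal.gold <node-uuid>

   # 2) make the flavor request the custom class and stop requesting the
   #    legacy VCPU/MEMORY_MB/DISK_GB resources
   $ openstack flavor set bm.gold --property resources:CUSTOM_BAREMETAL_GOLD=1
   $ openstack flavor set bm.gold --property resources:VCPU=0 \
       --property resources:MEMORY_MB=0 --property resources:DISK_GB=0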

  Consider this case:

  * some old instances are running from before the updates
  * some new instances are created after the updates

  In placement:

  * all inventory is correct, new resource class and legacy resource classes 
are both present
  * old instance allocations: only request

  In nova db:

  * old instances and new instances correctly request the new resource class in 
their flavor
  * new instances also include the anti-request for VCPU, DISK and RAM

  Now this is the flow that shows the problem:

  * get list of candidate allocations
  * this includes nodes that already have instances on them (they only claim part of 
the inventory, but the new instance is only requesting the bit of the inventory 
the old instance isn't using)
  * boom, scheduling new instances fails after you hit the retry count, unless 
you got lucky and found a free slot by accident

  Possible reason for this:

  * Pike no longer updated instance allocations, if we updated the
  allocations of old instances to request the new custom resource class
  allocations, we would fix the above issue.

  Possible work around:

  * in the new flavor, keep requesting VCPU, RAM and DISK resources for
  Pike, and fix that up in Queens?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1724589/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1724354] Re: WARNING in logs due to missing python-jsconschema

2017-10-18 Thread Scott Moser
** Changed in: cloud-init (Ubuntu Xenial)
 Assignee: (unassigned) => Scott Moser (smoser)

** Changed in: cloud-init (Ubuntu Xenial)
 Assignee: Scott Moser (smoser) => (unassigned)

** Changed in: cloud-init (Ubuntu Xenial)
 Assignee: (unassigned) => Chad Smith (chad.smith)

** Changed in: cloud-init (Ubuntu Zesty)
 Assignee: (unassigned) => Chad Smith (chad.smith)

** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Fix Committed

** Changed in: cloud-init
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1724354

Title:
  WARNING in logs due to missing python-jsconschema

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Invalid
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Zesty:
  Confirmed

Bug description:
  $ dpkg-query --show cloud-init
  cloud-init  17.1-18-gd4f70470-0ubuntu1~16.04.1

  $ sudo cat /var/lib/cloud/instance/user-data.txt
  #cloud-config
  bootcmd:
- "cat /proc/uptime > /run/bootcmd-works"
  runcmd:
- "cat /proc/uptime > /run/runcmd-works"

  
  $ grep WARN /var/log/cloud-init.log
  2017-10-17 19:08:10,509 - schema.py[WARNING]: Ignoring schema validation. 
python-jsonschema is not present
  2017-10-17 19:08:10,586 - schema.py[WARNING]: Ignoring schema validation. 
python-jsonschema is not present
  2017-10-17 19:08:14,651 - schema.py[WARNING]: Ignoring schema validation. 
python-jsonschema is not present
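
  A minimal workaround sketch, assuming the warning only concerns optional
  schema validation and that the stock Ubuntu packages apply (the package
  names are my assumption, not from the report):

  $ sudo apt-get install python3-jsonschema   # if cloud-init runs under Python 3
  $ sudo apt-get install python-jsonschema    # if it runs under Python 2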

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1724354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1724685] [NEW] HTTP 404 creating trust with role that you don't have

2017-10-18 Thread Matthew Edmonds
Public bug reported:

keystone returns HTTP 404 if you try to create a trust with a role that
you don't have. This is not an appropriate error code for that case. It
should be HTTP 400 (Bad Request).

Found in Pike
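
A reproduction sketch using python-openstackclient (the project, role, and
user names are placeholders):

  # run as a trustor who does NOT hold the 'admin' role on the project
  $ openstack trust create --project demo --role admin demo_user other_user
  # expected: HTTP 400 (Bad Request); observed: HTTP 404 (Not Found)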

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1724685

Title:
  HTTP 404 creating trust with role that you don't have

Status in OpenStack Identity (keystone):
  New

Bug description:
  keystone returns HTTP 404 if you try to create a trust with a role
  that you don't have. This is not an appropriate error code for that
  case. It should be HTTP 400 (Bad Request).

  Found in Pike

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1724685/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1724686] [NEW] authentication code hangs when there are three or more admin keystone endpoints

2017-10-18 Thread Chris Friesen
Public bug reported:

I'm running stable/pike devstack, and I was playing around with what
happens when there are many endpoints in multiple regions, and I
stumbled over a scenario where the keystone authentication code hangs.

My original endpoint list looked like this:

ubuntu@devstack:/opt/stack/devstack$ openstack endpoint list
+--+---+--+-+-+---+--+
| ID   | Region| Service Name | Service Type
| Enabled | Interface | URL  |
+--+---+--+-+-+---+--+
| 0a9979ebfdbf48ce91ccf4e2dd952c1a | RegionOne | kingbird | synchronization 
| True| internal  | http://127.0.0.1:8118/v1.0   |
| 11d5507afe2a4eddb4f030695699114f | RegionOne | placement| placement   
| True| public| http://128.224.186.226/placement |
| 1e42cf139398405188755b7e00aecb4d | RegionOne | keystone | identity
| True| admin | http://128.224.186.226/identity  |
| 2daf99edecae4afba88bb58233595481 | RegionOne | glance   | image   
| True| public| http://128.224.186.226/image |
| 2ece52e8bbb34d47b9bd5611f5959385 | RegionOne | kingbird | synchronization 
| True| admin | http://127.0.0.1:8118/v1.0   |
| 4835a089666a4b03bd2f499457ade6c2 | RegionOne | kingbird | synchronization 
| True| public| http://127.0.0.1:8118/v1.0   |
| 78e9fbc0a47642268eda3e3576920f37 | RegionOne | nova | compute 
| True| public| http://128.224.186.226/compute/v2.1  |
| 96a1e503dc0e4520a190b01f6a0cf79c | RegionOne | keystone | identity
| True| public| http://128.224.186.226/identity  |
| a1887dbc8c5e4af5b4a6dc5ce224b8ff | RegionOne | cinderv2 | volumev2
| True| public| http://128.224.186.226/volume/v2/$(project_id)s  |
| b7d5938141694a4c87adaed5105ea3ab | RegionOne | cinder   | volume  
| True| public| http://128.224.186.226/volume/v1/$(project_id)s  |
| bb169382cbea4715964e4652acd48070 | RegionOne | nova_legacy  | compute_legacy  
| True| public| http://128.224.186.226/compute/v2/$(project_id)s |
| e01c8d8e08874d61b9411045a99d4860 | RegionOne | neutron  | network 
| True| public| http://128.224.186.226:9696/ |
| f94c96ed474249a29a6c0a1bb2b2e500 | RegionOne | cinderv3 | volumev3
| True| public| http://128.224.186.226/volume/v3/$(project_id)s  |
+--+---+--+-+-+---+--+

I was able to successfully run the following python code:

from keystoneauth1 import loading
from keystoneauth1 import session
from keystoneclient.v3 import client

loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(username='admin',
                                password='secret',
                                project_name='admin',
                                auth_url='http://128.224.186.226/identity')
sess = session.Session(auth=auth)
keystone = client.Client(session=sess)
keystone.services.list()

I then duplicated all of the endpoints in a new region "region2", and
was able to run the python code.  When I duplicated all the endpoints
again in a new region "region3" (for a total of 39 endpoints) the python
code hung at the final line.

Removing all the "region3" endpoints allowed the python code to work
again.

During all of this the command "openstack endpoint list" worked fine.

Further testing seems to indicate that it is the third "admin" keystone
endpoint that is causing the problem.  I can add multiple "public"
keystone endpoints, but three or more "admin" keystone endpoints cause
the python code to hang.

** Affects: keystone
 Importance: Undecided
 Status: New

** Summary changed:

- authentication code hangs when there are many endpoints
+ authentication code hangs when there are three or more admin keystone 
endpoints

** Description changed:

  I'm running stable/pike devstack, and I was playing around with what
  happens when there are many endpoints in multiple regions, and I
  stumbled over a scenario where the keystone authentication code hangs.
  
  My original endpoint list looked like this:
  
  ubuntu@devstack:/opt/stack/devstack$ openstack endpoint list
  
+--+---+--+-+-+---+--+
  | ID   | Region| Service Name | Service Type  
  | Enabled | Interface | URL  |
  
+---

[Yahoo-eng-team] [Bug 1724689] [NEW] Usability: Service user token requested with no auth raises cryptic exception

2017-10-18 Thread Eric Fried
Public bug reported:

If [service_user]send_service_user_token is requested, but no auth
information is provided in the [service_user] section, the resulting
error is cryptic [1] and doesn't help the user understand what went
wrong.

Note that today this only happens if the conf section is missing the
auth_type option.  But this is a pretty good first indicator that the
admin forgot to populate auth options in general.

[1] http://paste.openstack.org/show/623721/
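
For context, a hedged sketch of a populated [service_user] section (the values
are placeholders; the option names follow the standard keystoneauth password
plugin loaded by this section):

  [service_user]
  send_service_user_token = true
  auth_type = password
  auth_url = http://keystone.example.com/identity
  username = nova
  password = secret
  user_domain_name = Default
  project_name = service
  project_domain_name = Default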

** Affects: nova
 Importance: Undecided
 Assignee: Eric Fried (efried)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1724689

Title:
  Usability: Service user token requested with no auth raises cryptic
  exception

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  If [service_user]send_service_user_token is requested, but no auth
  information is provided in the [service_user] section, the resulting
  error is cryptic [1] and doesn't help the user understand what went
  wrong.

  Note that today this only happens if the conf section is missing the
  auth_type option.  But this is a pretty good first indicator that the
  admin forgot to populate auth options in general.

  [1] http://paste.openstack.org/show/623721/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1724689/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443779] Re: Delete Images button ignore protected images

2017-10-18 Thread Akihiro Motoki
The image table was re-implemented with Angular. The issue was addressed
as part of the effort, so it no longer exists.

** Changed in: horizon
   Status: Triaged => Invalid

** Changed in: horizon
 Assignee: Dimitri Mazmanov (sorantis) => (unassigned)

** Changed in: horizon
Milestone: next => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1443779

Title:
   Delete Images button ignore protected images

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  In the image panel

  1)When an image is marked as protected then for that image , More
  dropdownlist filters out Delete Image options, but Delete Images
  button is enabled when protected image or snapshot is selected at the
  top of the table

  Steps to reproduce :

  1) Create an image and mark it as protected.

  2) Now in the images table, for the newly created protected image, the
  row action "Delete Image" will not be available, which is expected.

  3) But the user can still select the checkbox for that image, which
  enables the table action "Delete Images"; after clicking it, the
  user gets a message that the image cannot be deleted because it is a
  protected image.

  4) I feel that the table action should also be disabled for protected
  images, which would be more meaningful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1443779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479286] Re: delete the pseudo-folder failed

2017-10-18 Thread Gary W. Smith
This works correctly in Pike.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1479286

Title:
  delete the pseudo-folder failed

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Reproduce the bug:
  1. Log in as a general user and create a container named 'ziyu01'.
  2. Inside the 'ziyu01' container, create a pseudo-folder, entering the name
"ziyu/ziyu" with a slash in the middle of the string. The pseudo-folder is
created successfully and is displayed on the page with the name 'ziyu'.
  3. When I click the "Delete Object" button, it raises an "Unable to delete
object: ziyu" error.

  If we create a pseudo-folder without a slash in the middle of its
  name, the folder can be deleted successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1479286/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1705781] Re: Object Store Public Container Unable to Disable

2017-10-18 Thread Gary W. Smith
This functionality works correctly with swift in devstack (not using
Ceph).  If this bug is appearing when swift uses Ceph, it would appear
that the Swift API is not working consistently across various
storage back-ends, in which case a bug should be filed with swift.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1705781

Title:
  Object Store Public Container Unable to Disable

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The 'Public Access' checkbox on the object store cannot be unticked once
  the container has been made public on Ceph object storage.

  Based on the source code
  
(https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/swift.py)

  ```
  if public is True:
      public_container_acls = [GLOBAL_READ_ACL, LIST_CONTENTS_ACL]
      headers['x-container-read'] = ",".join(public_container_acls)
  elif public is False:
      headers['x-container-read'] = ""
  ```

  The Swift API does not accept an empty string for 'x-container-read', so
  the ACL setting is not changed.

  Solution:

  Passing any non-empty string instead of an empty string solves the problem;
  the problematic assignment is:

  headers['x-container-read'] = ""

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1705781/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1626521] Re: Host aggregate name is not displaying in success message while creating it

2017-10-18 Thread Gary W. Smith
In Pike the message now includes the correct text, including the name of
the created host aggregate.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1626521

Title:
  Host aggregate name is not displaying in success message while
  creating it

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  
  Reproduced in master.
  Steps to reproduce
  1. Go to Admin/System/Host Aggregates
  2. Create Host aggregate.

  Actual result:
  The success message shows the dialog title instead of the aggregate name:
Success: Created new host aggregate "Create Host Aggregate".

  Expected Result:
  Success: Created new host aggregate "<host aggregate name>".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1626521/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600306] Re: Update metadata (images) CIM namespace metadefs don't work with glance v1

2017-10-18 Thread Gary W. Smith
This works correctly on the images metadata update dialog in Queens.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1600306

Title:
  Update metadata (images) CIM namespace metadefs don't work with glance
  v1

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The Glance metadefs for CIM do not seem to work for the delete case
  with images using the update metadata widget.

  
http:///admin/metadata_defs/CIM::ProcessorAllocationSettingData/detail

  The metadata widget works fine with them on flavors, but not images.
  I suspect that it has something to do with the glance v1 API and
  colons (:) in some of the values. Maybe this will go away with glance
  v2?  In either case, it needs to be resolved.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1600306/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603579] Re: WebSSO user can't see the "Federation" management panel in Horizon

2017-10-18 Thread Gary W. Smith
Given the comment by blakegc (that it is no longer reproducible), and
the lack of following posts confirming that it is reproducible, marking
as Invalid.  Feel free to reopen if this can be reproduced on a current
horizon version.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1603579

Title:
  WebSSO user can't see the "Federation" management panel in Horizon

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Identity (keystone):
  New

Bug description:
  Horizon has the option to set OPENSTACK_KEYSTONE_FEDERATION_MANAGEMENT
  in local_settings.py in order to enable the "Federation" management
  panel - that is CRUD operations on Identity Providers and Federation
  Mappings.

  That works fine for a user logging in with Keystone credentials if the user
has the admin role.
  However, if I try to log in through WebSSO, using an external Identity
Provider (ADFS for example), the federated user can't see the "Federation"
panel even if they have the admin role on the projects.

  I imagine this is a bug and I don't see any reason why the federated
  user with admin role doesn't have access to the Federation panel.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1603579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1599982] Re: when create a project from the create user modal, the project is not automatically selected

2017-10-18 Thread Gary W. Smith
This works properly in Queens.

** Changed in: horizon
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1599982

Title:
  when create a project from the create user modal, the project is not
  automatically selected

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When you create a new project from the create user modal (while
  creating a new user), the new project is not automatically added and
  selected in the projects dropdown. Despite this, it is still possible
  to create the new user within the newly created project.

  Steps to reproduce:

  1. Login as Admin and go to Identity>Users panel
  2. Click on "Create User" button
  3. Fill in required fields and for the "Primary Project" click on the "+" 
button at the end of the input
  4. The create project should load. Fill the name input with some random 
project name
  5. Click the "Create" button.

  Expected Results: 
  The Create user modal should be loaded and the "Primary Project" field should 
have the newly created project selected or at least be included as an option in 
the select field. 

  Actual Results:
  The create user modal is loaded, but the "Primary Project" field neither
includes nor selects the newly created project. Also, if you click 
"Create" on the user modal, the user will be created and assigned to the newly 
created project

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1599982/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1550153] Re: Create an image has Image source field which takes both option at a time for image location and image file thus giving error when "create image" pressed

2017-10-18 Thread Gary W. Smith
This entire page was re-implemented in angular and no longer exhibits
this problem.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1550153

Title:
  Create an image has Image source field which takes both option at a
  time for image location and image file thus giving error  when "create
  image" pressed

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When creating an image, the Image Source field accepts both options
  simultaneously (image location and image file), which results in an error
  when "Create Image" is pressed.

  The error message is "Can not specify both image and external image
  location."

  Solution: the form should automatically disable sending the image
  location if an image file is selected, or vice versa.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1550153/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588780] Re: Project user cannot create/delete port

2017-10-18 Thread Akihiro Motoki
*** This bug is a duplicate of bug 1399252 ***
https://bugs.launchpad.net/bugs/1399252

** This bug has been marked a duplicate of bug 1399252
   Missing port-create in Dashboard as a tenant

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1588780

Title:
  Project user cannot create/delete port

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When a project user launches an instance, the user can select a network and
  create a port. But on the network detail page, the user cannot create or
  delete a port, only edit one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1588780/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481469] Re: launch instance not respecting picked project

2017-10-18 Thread Gary W. Smith
The launch instance page was reimplemented several releases ago, and
there is no option for an admin to choose a different project other than
the one that the session is currently scoped to.

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1481469

Title:
  launch instance not respecting picked project

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Description of problem:

  If you have a user with more than one project and the admin role on one of
  the projects, you get an additional pane when launching an instance.
  If you pick a project to launch under that isn't the currently active
  project and try to launch a new instance, the instance is launched
  under the current project, not the selected project.

  
  How reproducible: Every time

  Steps to Reproduce:
  1. Have a user with 2 projects and an admin role on one of the projects
  2. log in as that user
  3. identity -> projects -> set as active (project with admin role)
  4. project -> instances -> Launch Instance -> Projects & Users -> Select non 
active project
  5. Launch Instance (simple instance is fine)

  Actual results:
  Instance launched within current project

  
  Expected results:
  Instance launched within requested project

  
  2015.1.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1481469/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572864] Re: wait form will be disappeared in integrationtests

2017-10-18 Thread Akihiro Motoki
We no longer maintain the integration tests.

** Tags added: integration-tests

** Changed in: horizon
Milestone: next => None

** Changed in: horizon
 Assignee: Sergei Chipiga (schipiga) => (unassigned)

** Changed in: horizon
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1572864

Title:
  wait form will be disappeared in integrationtests

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  We should wait for the form to disappear after submit or cancel, to
  prevent flaky tests.
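
  A minimal sketch of the kind of explicit wait meant here, using plain
  Selenium and assuming an existing webdriver instance named driver (the
  '.modal-dialog' selector and 10-second timeout are my assumptions, not
  horizon's actual test helpers):

  from selenium.webdriver.common.by import By
  from selenium.webdriver.support import expected_conditions as ec
  from selenium.webdriver.support.ui import WebDriverWait

  # after submitting or cancelling, block until the modal form is gone
  WebDriverWait(driver, 10).until(
      ec.invisibility_of_element_located((By.CSS_SELECTOR, '.modal-dialog')))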

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1572864/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370511] Re: allow adding multiple roles for same user

2017-10-18 Thread Akihiro Motoki
As of now, the current "Manage Members" form of "Project" panel supports
multiple role association. This is no longer a valid bug.

** Changed in: horizon
 Assignee: Daniel Park (daniepar) => (unassigned)

** Changed in: horizon
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1370511

Title:
  allow adding multiple roles for same user

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  [root@tigris01 ~(keystone_admin)]# keystone help user-role-add
  usage: keystone user-role-add --user <user> --role <role> [--tenant <tenant>]

  Add role to user.

  Arguments:
    --user <user>, --user-id <user>, --user_id <user>
                          Name or ID of user.
    --role <role>, --role-id <role>, --role_id <role>
                          Name or ID of role.
    --tenant <tenant>, --tenant-id <tenant>
                          Name or ID of tenant.

  
  In the CLI we can add multiple roles, but in Horizon we can only add one. It
would be good if we could add more than one role.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1370511/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582838] Re: extend volume validation error

2017-10-18 Thread Gary W. Smith
This works correctly in Queens: the form accepts the larger size the
second time and correctly extends the volume.

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1582838

Title:
  extend volume validation error

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When extending an existing volume.

  Under Project

  1. Click Extend Volume
  2. New Size: (select the same size)
  3. Errors out, as it should (New size must be greater than current size)
  4. Go back and select a greater size.
  5. Form errors out (Danger: there was an error submitting the form)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1582838/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1689010] Re: Clean up unused get_object method in project instance view

2017-10-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/463118
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=0a51d07dfb7a062f37dda584df48981613d275f5
Submitter: Zuul
Branch:master

commit 0a51d07dfb7a062f37dda584df48981613d275f5
Author: wei.ying 
Date:   Sun May 7 14:33:33 2017 +0800

Remove unused function calls in project instances attach volume form

The function get_object wasn't used anywhere, so there is no purpose
to have it there.

Change-Id: I9fa93683ff2d2cffeec9426698d7b95c8841ed23
Closes-Bug: #1689010


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1689010

Title:
  Clean up unused get_object method in project instance view

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  In openstack_dashboard/dashboards/project/instances/views.py L:487 & L:523
there is an unused method:
  get_object,
  so there is no purpose in keeping it there. It should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1689010/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582081] Re: Resizing a vm should disallow selecting a flavor with a smaller disk

2017-10-18 Thread Gary W. Smith
As dmsimard indicates, resizing to a smaller disk may be valid.

** Summary changed:

- Should hide the flavor that flavor disk is smaller the old one
+ Resizing a vm should disallow selecting a flavor with a smaller disk

** Tags added: nova

** Changed in: horizon
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1582081

Title:
  Resizing a vm should disallow selecting a flavor with a smaller disk

Status in OpenStack Dashboard (Horizon):
  Opinion

Bug description:
  When we resize a VM and select a flavor whose disk is smaller than the old
one,
  nova-compute will raise an error:
  ERROR oslo_messaging.rpc.dispatcher ResizeError: Resize error: Unable to 
resize disk down.

  So we should hide flavors whose disk is smaller than the old one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1582081/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1724729] [NEW] ovs-lib not support qos type egress-policer for ovs-dpdk

2017-10-18 Thread Rong.Wang
Public bug reported:

In OVS-DPDK, the QoS command uses the type "egress-policer", while currently
only "linux-htb" is used in ovs-lib.py.
Maybe the type should be set via a config file, depending on whether OVS or
OVS-DPDK is in use.
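
For reference, a hedged sketch of the egress-policer style of QoS referred to,
following the Open vSwitch DPDK documentation (the port name and rates are
placeholders):

  # attach an egress-policer QoS record to a DPDK vhost-user port
  $ ovs-vsctl set port vhost-user0 qos=@newqos -- \
      --id=@newqos create qos type=egress-policer \
      other-config:cir=46000000 other-config:cbs=2048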

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1724729

Title:
  ovs-lib not support qos type egress-policer for ovs-dpdk

Status in neutron:
  New

Bug description:
  In OVS-DPDK, the QoS command uses the type "egress-policer", while currently
only "linux-htb" is used in ovs-lib.py.
  Maybe the type should be set via a config file, depending on whether OVS or
OVS-DPDK is in use.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1724729/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1724589] Re: Unable to transition to Ironic Node Resource Classes in Pike

2017-10-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/513085
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=5c2b8675e3e13e32b23681153f226de93bb99628
Submitter: Zuul
Branch:master

commit 5c2b8675e3e13e32b23681153f226de93bb99628
Author: John Garbutt 
Date:   Wed Oct 18 17:05:43 2017 +0100

Keep updating allocations for Ironic

When ironic updates the instance.flavor to require the new custom
resource class, we really need the allocations to get updated. Easiest
way to do that is to make the resource tracker keep updating allocations
for the ironic virt driver. This can be dropped once the transition to
custom resource classes is complete.

If we were not to claim the extra resources, placement will pick nodes
that already have instances running on them when you boot an instance
with a flavor that only requests the custom resource class. This should
be what all ironic flavors do, before the upgrade to queens is
performed.

Closes-Bug: #1724589

Change-Id: Ibbf65a8d817d359786abcdffa6358089ed1107f6


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1724589

Title:
  Unable to transition to Ironic Node Resource Classes in Pike

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  In Progress

Bug description:
  So the scenario is:

  * upgraded to pike
  * have ironic with multiple flavors
  * attempting to transition to resource class based scheduling, now pike is 
installed

  In Pike we ask people to:

  * Update Ironic Node with a Resource Class
  * Update flavors to request the new Resource Class (and not request VCPU, 
RAM, DISK), using the docs: 
https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html#scheduling-based-on-resource-classes

  Consider this case:

  * some old instances are running from before the updates
  * some new instances are created after the updates

  In placement:

  * all inventory is correct, new resource class and legacy resource classes 
are both present
  * old instance allocations: only request

  In nova db:

  * old instances and new instances correctly request the new resource class in 
their flavor
  * new instances also include the anti-request for VCPU, DISK and RAM

  Now this is the flow that shows the problem:

  * get list of candidate allocations
  * this includes nodes that already have instances on them (they only claim part of 
the inventory, but the new instance is only requesting the bit of the inventory 
the old instance isn't using)
  * boom, scheduling new instances fails after you hit the retry count, unless 
you got lucky and found a free slot by accident

  Possible reason for this:

  * Pike no longer updated instance allocations, if we updated the
  allocations of old instances to request the new custom resource class
  allocations, we would fix the above issue.

  Possible work around:

  * in the new flavor, keep requesting VCPU, RAM and DISK resources for
  Pike, and fix that up in Queens?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1724589/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp