[Yahoo-eng-team] [Bug 1709747] Re: 500 error MessagingTimeout from API when getting SPICE console

2017-08-09 Thread Guo shuaijie
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1709747

Title:
  500 error MessagingTimeout from API when getting SPICE console

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions [None req-03396686-9700-4323-b7e5-1054fcc167d2 
alt_demo admin] Unexpected exception in API method: MessagingTimeout: Timed out 
waiting for a reply to message ID d5abd16bee8846eaa764440df982bfab
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions Traceback (most recent call last):
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 336, in wrapped
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions return f(*args, **kwargs)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 108, in wrapper
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions return func(*args, **kwargs)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/remote_consoles.py", line 82, in 
get_spice_console
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions console_type)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", 
line 192, in wrapped
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions return function(self, context, instance, 
*args, **kwargs)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", 
line 182, in inner
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions return f(self, context, instance, *args, **kw)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", 
line 3599, in get_spice_console
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions instance=instance, console_type=console_type)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/rpcapi.py", 
line 614, in get_spice_console
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions instance=instance, console_type=console_type)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
169, in call
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions retry=self.retry)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 123, 
in _send
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions timeout=timeout, retry=retry)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 578, in send
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions retry=retry)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 567, in _send
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions result = self._waiter.wait(msg_id, timeout)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 459, in wait
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions message = self.waiters.get(msg_id, 
timeout=timeout)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 347, in get
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions 'to message ID %s' % msg_id)
  Aug 10 10:08:12 ubuntudbs 

[Yahoo-eng-team] [Bug 1709774] [NEW] Multiple router_centralized_snat interfaces created during Heat deployment

2017-08-09 Thread Matthew Wynne
Public bug reported:

While attempting to deploy the attached hot template I ran into a few
issues:

1. Multiple router_centralized_snat interfaces are being created.
2. One router_centralized_snat interface is created, but it's Down.

When multiple interfaces are created, the stack can't be deleted. I need
to manually delete the additional ports that were created before the
stack can be deleted (a cleanup sketch follows below).
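
As a stopgap, the stale ports can be found and removed by their DVR
device owner. A hypothetical cleanup sketch with python-neutronclient
(the endpoint and credentials are placeholders, not from this report):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from neutronclient.v2_0 import client

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='admin', password='secret',
                       project_name='admin', user_domain_id='default',
                       project_domain_id='default')
    neutron = client.Client(session=session.Session(auth=auth))

    # DVR SNAT ports carry this device_owner; delete the leftovers so
    # the stack delete can proceed.
    ports = neutron.list_ports(
        device_owner='network:router_centralized_snat')['ports']
    for port in ports:
        print('deleting stale port %s' % port['id'])
        neutron.delete_port(port['id'])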

I'm using Newton with OVS+DVR.


I should state up front that the `depends_on` entries in the template are more
of a last-ditch effort than anything else, and are likely incorrect. However,
the problem still exists without them.

** Affects: heat
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Attachment added: "deploy.yml"
   https://bugs.launchpad.net/bugs/1709774/+attachment/4929846/+files/deploy.yml

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1709774

Title:
  Multiple router_centralized_snat interfaces created during Heat
  deployment

Status in OpenStack Heat:
  New
Status in neutron:
  New

Bug description:
  While attempting to deploy the attached hot template I ran into a few
  issues:

  1. Multiple router_centralized_snat interfaces are being created.
  2. One router_centralized_snat interface is created, but it's Down.

  When multiple interfaces are created, the stack can't be deleted. I
  need to manually delete the additional ports that were created
  before the stack can be deleted.

  I'm using Newton with OVS+DVR.

  
  I should state up front that the `depends_on` entries in the template are
more of a last-ditch effort than anything else, and are likely incorrect.
However, the problem still exists without them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1709774/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709772] [NEW] EC2 datasource moved to init-local stage

2017-08-09 Thread Chad Smith
Public bug reported:

In EC2 clouds, the only way to determine whether an instance is
configured for IPv6 is by querying the metadata service. In order to
query metadata to determine network configuration, DataSourceEc2 needs
to configure the network with dhcp and then query the datasource.

Add optional functionality for DataSourceEc2 to query the metadata service
in the init-local timeframe using dhcp.
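
A rough sketch of the idea (names and helpers here are illustrative,
not the actual cloud-init API; assumes a Python 2 environment):

    import subprocess
    import urllib2

    METADATA_URL = 'http://169.254.169.254/2016-09-02/meta-data/'

    def query_metadata_with_ephemeral_dhcp(nic='eth0'):
        # Obtain a short-lived lease during init-local, query the
        # metadata service for network configuration, then release it.
        subprocess.check_call(['dhclient', '-1', nic])
        try:
            # Any IPv6 configuration is visible under network/interfaces/.
            return urllib2.urlopen(
                METADATA_URL + 'network/interfaces/macs/',
                timeout=10).read()
        finally:
            subprocess.check_call(['dhclient', '-r', nic])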

** Affects: cloud-init
 Importance: Medium
 Assignee: Chad Smith (chad.smith)
 Status: Fix Committed

** Changed in: cloud-init
   Status: New => In Progress

** Changed in: cloud-init
   Importance: Undecided => Medium

** Changed in: cloud-init
 Assignee: (unassigned) => Chad Smith (chad.smith)

** Merge proposal linked:
   
https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/328241

** Changed in: cloud-init
   Status: In Progress => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1709772

Title:
  EC2 datasource moved to init-local stage

Status in cloud-init:
  Fix Committed

Bug description:
  In EC2 clouds, the only way to determine whether an instance is
  configured for IPv6 is by querying the metadata service. In order to
  query metadata to determine network configuration, DataSourceEc2 needs
  to configure the network with dhcp and then query the datasource.

  Add optional functionality for DataSourceEc2 to query the metadata
  service in the init-local timeframe using dhcp.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1709772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709765] [NEW] Failed to create keypair in ng create instance form when the quota exceeded

2017-08-09 Thread wei.ying
Public bug reported:

In the ng create instance form we can create and import keypairs. Keypairs
have quota management, so when the keypair quota is exceeded, creating or
importing a keypair fails and the API returns "Quota exceeded, too many key
pairs. (HTTP 403) (Request-ID: req-841e0499-ae34-4029-9a2f-04a5a6d3e3f7)".

Like the Key Pairs panel, the keypair tab of the ng create instance form
should add a quota check.
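
A minimal sketch of the kind of pre-check the form could mirror, using
python-novaclient (the endpoint and credentials are placeholders):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='demo', password='secret',
                       project_name='demo', user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)
    nova = client.Client('2.1', session=sess)

    # Compare the keypair quota against the keypairs already present
    # before offering the create/import actions.
    quota = nova.quotas.get(sess.get_project_id()).key_pairs
    used = len(nova.keypairs.list())
    if used >= quota:
        print('Key pair quota exceeded (%d/%d)' % (used, quota))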

** Affects: horizon
 Importance: Undecided
 Assignee: wei.ying (wei.yy)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => wei.ying (wei.yy)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1709765

Title:
  Failed to create keypair in ng create instance form  when the quota
  exceeded

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In ng create instance form, we can create and import keypair, keypair
  has quota management, when the keypair quota exceeded, if we create or
  import keypair, it will fail and the API will return "Quota exceeded,
  too many key pairs. (HTTP 403) (Request-ID: req-841e0499-ae34-4029
  -9a2f-04a5a6d3e3f7)"

  We should like keypairs panel to add quota check in ng create instance
  keypair tab page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1709765/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709761] [NEW] Add cloudinit-analyze reporting to cloudinit proper

2017-08-09 Thread Chad Smith
Public bug reported:

Pull in functionality from Ryan Harper's
https://git.launchpad.net/~raharper/+git/cloudinit-analyze  into
cloudinit proper so that this tooling can be leveraged more easily by
any cloud-init consumer.
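
Once pulled in, the tooling would presumably be reachable as a
cloud-init subcommand; assuming the merged interface keeps the
existing analyze verbs, usage would look something like:

    # Summarize where boot time went, per config module.
    cloud-init analyze blame
    # Show the recorded boot stages and events as a timeline.
    cloud-init analyze show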

** Affects: cloud-init
 Importance: Medium
 Assignee: Chad Smith (chad.smith)
 Status: In Progress

** Changed in: cloud-init
   Importance: Undecided => High

** Changed in: cloud-init
 Assignee: (unassigned) => Chad Smith (chad.smith)

** Changed in: cloud-init
   Importance: High => Medium

** Changed in: cloud-init
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1709761

Title:
  Add cloudinit-analyze reporting to cloudinit proper

Status in cloud-init:
  In Progress

Bug description:
  Pull in functionality from Ryan Harper's
  https://git.launchpad.net/~raharper/+git/cloudinit-analyze  into
  cloudinit proper so that this tooling can be leveraged more easily by
  any cloud-init consumer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1709761/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709747] Re: HTTP exception thrown: Unexpected API Error.

2017-08-09 Thread Matt Riedemann
Alternatively, do you see any issues in the nova-consoleauth logs?

** Changed in: nova
   Status: Invalid => Incomplete

** Summary changed:

- HTTP exception thrown: Unexpected API Error.
+ 500 from API when getting SPICE console

** Summary changed:

- 500 from API when getting SPICE console
+ 500 error MessagingTimeout from API when getting SPICE console

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1709747

Title:
  500 error MessagingTimeout from API when getting SPICE console

Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions [None req-03396686-9700-4323-b7e5-1054fcc167d2 
alt_demo admin] Unexpected exception in API method: MessagingTimeout: Timed out 
waiting for a reply to message ID d5abd16bee8846eaa764440df982bfab
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions Traceback (most recent call last):
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 336, in wrapped
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions return f(*args, **kwargs)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 108, in wrapper
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions return func(*args, **kwargs)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/remote_consoles.py", line 82, in 
get_spice_console
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions console_type)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", 
line 192, in wrapped
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions return function(self, context, instance, 
*args, **kwargs)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", 
line 182, in inner
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions return f(self, context, instance, *args, **kw)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", 
line 3599, in get_spice_console
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions instance=instance, console_type=console_type)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/rpcapi.py", 
line 614, in get_spice_console
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions instance=instance, console_type=console_type)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
169, in call
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions retry=self.retry)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 123, 
in _send
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions timeout=timeout, retry=retry)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 578, in send
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions retry=retry)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 567, in _send
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions result = self._waiter.wait(msg_id, timeout)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 459, in wait
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions message = self.waiters.get(msg_id, 
timeout=timeout)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: 

[Yahoo-eng-team] [Bug 1709747] Re: HTTP exception thrown: Unexpected API Error.

2017-08-09 Thread Matt Riedemann
Is a spice console even properly configured and enabled on the compute
service for the instance?
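
For reference, a SPICE console is normally enabled on the compute node
with settings along these lines in nova.conf (values illustrative):

    [vnc]
    enabled = False

    [spice]
    enabled = True
    agent_enabled = True
    html5proxy_base_url = http://controller:6082/spice_auto.html
    server_listen = 0.0.0.0
    server_proxyclient_address = 127.0.0.1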

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1709747

Title:
  500 error MessagingTimeout from API when getting SPICE console

Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions [None req-03396686-9700-4323-b7e5-1054fcc167d2 
alt_demo admin] Unexpected exception in API method: MessagingTimeout: Timed out 
waiting for a reply to message ID d5abd16bee8846eaa764440df982bfab
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions Traceback (most recent call last):
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 336, in wrapped
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions return f(*args, **kwargs)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 108, in wrapper
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions return func(*args, **kwargs)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/remote_consoles.py", line 82, in 
get_spice_console
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions console_type)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", 
line 192, in wrapped
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions return function(self, context, instance, 
*args, **kwargs)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", 
line 182, in inner
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions return f(self, context, instance, *args, **kw)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", 
line 3599, in get_spice_console
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions instance=instance, console_type=console_type)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/rpcapi.py", 
line 614, in get_spice_console
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions instance=instance, console_type=console_type)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
169, in call
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions retry=self.retry)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 123, 
in _send
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions timeout=timeout, retry=retry)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 578, in send
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions retry=retry)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 567, in _send
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions result = self._waiter.wait(msg_id, timeout)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 459, in wait
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions message = self.waiters.get(msg_id, 
timeout=timeout)
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 347, in get
  Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 

[Yahoo-eng-team] [Bug 1709550] Re: nova-compute doesn't start if there is difference between current compute driver and driver which was used to create instance

2017-08-09 Thread Matt Riedemann
This is a bug in the nova-lxd driver code, which tracks its bugs in a
separate Launchpad project.

** Tags added: lxd

** Changed in: nova
   Status: New => Invalid

** Also affects: nova-lxd
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1709550

Title:
  nova-compute doesn't start if there is difference between current
  compute driver and driver which was used to create instance

Status in OpenStack Compute (nova):
  Invalid
Status in nova-lxd:
  New

Bug description:
  Steps to reproduce
  ==
  1. Create instance with (for example) qemu as nova-compute backend.
  2. Change nova-compute backend to (for example) lxd.
  3. Restart nova-compute service
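
  The backend switch in step 2 amounts to a one-line change of
  compute_driver in nova-compute.conf, e.g. (illustrative):

      [DEFAULT]
      # before: compute_driver = libvirt.LibvirtDriver
      compute_driver = lxd.LXDDriver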

  Expected result
  ===
  I expected to see an error with something like this: "You have to delete all 
your instances, which were created with old nova-compute driver. Use 'openstack 
server delete instance-name' on your controller node."

  Actual result
  =
  nova-compute service doesn't start and there is no clear explanation in 
nova-compute.log (see log below).

  Environment
  ===
  1. Version of OpenStack is Ocata:
  user@compute ~> dpkg -l | grep nova
  rc  nova-api              2:15.0.2-0ubuntu1~cloud0  all  OpenStack Compute - API frontend
  ii  nova-common           2:15.0.5-0ubuntu1~cloud0  all  OpenStack Compute - common files
  ii  nova-compute          2:15.0.5-0ubuntu1~cloud0  all  OpenStack Compute - compute node base
  rc  nova-compute-kvm      2:15.0.5-0ubuntu1~cloud0  all  OpenStack Compute - compute node (KVM)
  ii  nova-compute-libvirt  2:15.0.5-0ubuntu1~cloud0  all  OpenStack Compute - compute node libvirt support
  ii  nova-compute-lxd      15.0.2-0ubuntu1~cloud0    all  Openstack Compute - LXD container hypervisor support
  rc  nova-conductor        2:15.0.2-0ubuntu1~cloud0  all  OpenStack Compute - conductor service
  rc  nova-consoleauth      2:15.0.2-0ubuntu1~cloud0  all  OpenStack Compute - Console Authenticator
  rc  nova-novncproxy       2:15.0.2-0ubuntu1~cloud0  all  OpenStack Compute - NoVNC proxy
  rc  nova-placement-api    2:15.0.2-0ubuntu1~cloud0  all  OpenStack Compute - placement API frontend
  rc  nova-scheduler        2:15.0.2-0ubuntu1~cloud0  all  OpenStack Compute - virtual machine scheduler
  ii  python-nova           2:15.0.5-0ubuntu1~cloud0  all  OpenStack Compute Python libraries
  ii  python-nova-lxd       15.0.2-0ubuntu1~cloud0    all  OpenStack Compute Python libraries - LXD driver
  ii  python-novaclient     2:7.1.0-0ubuntu1~cloud0   all  client library for OpenStack Compute API - Python 2.7

  
  2. Hypervisors: qemu and lxd

  3. Storage: lvm

  4. Networking type: Neutron with OpenVSwitch

  Log
  ==
  nova-compute.log:
  2017-08-08 16:18:51.112 29592 INFO nova.service [-] Starting compute node 
(version 15.0.5)
  2017-08-08 16:18:51.882 29592 INFO oslo.privsep.daemon 
[req-0777ab5e-8b64-4631-8690-3dcf94ab2118 - - - - -] Running privsep helper: 
['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', 
'--config-file', '/etc/nova/nova.conf', '--config-file', 
'/etc/nova/nova-compute.conf', '--privsep_context', 
'vif_plug_linux_bridge.privsep.vif_plug', '--privsep_sock_path', 
'/tmp/tmpGK77AD/privsep.sock']
  2017-08-08 16:18:53.810 29592 INFO oslo.privsep.daemon 
[req-0777ab5e-8b64-4631-8690-3dcf94ab2118 - - - - -] Spawned new privsep daemon 
via rootwrap
  2017-08-08 16:18:53.812 29592 INFO oslo.privsep.daemon [-] privsep daemon 
starting
  2017-08-08 16:18:53.812 29592 INFO oslo.privsep.daemon [-] privsep process 
running with uid/gid: 0/0
  2017-08-08 16:18:53.813 29592 INFO oslo.privsep.daemon [-] privsep process 
running with capabilities (eff/prm/inh): CAP_NET_ADMIN/CAP_NET_ADMIN/none
  2017-08-08 16:18:53.813 29592 INFO oslo.privsep.daemon [-] privsep daemon 
running as pid 29634
  2017-08-08 16:18:53.957 29592 INFO os_vif 
[req-0777ab5e-8b64-4631-8690-3dcf94ab2118 - - - - -] Successfully plugged vif 

[Yahoo-eng-team] [Bug 1709747] [NEW] HTTP exception thrown: Unexpected API Error.

2017-08-09 Thread Guo shuaijie
Public bug reported:

Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions [None req-03396686-9700-4323-b7e5-1054fcc167d2 
alt_demo admin] Unexpected exception in API method: MessagingTimeout: Timed out 
waiting for a reply to message ID d5abd16bee8846eaa764440df982bfab
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions Traceback (most recent call last):
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 336, in wrapped
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions return f(*args, **kwargs)
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 108, in wrapper
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions return func(*args, **kwargs)
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/remote_consoles.py", line 82, in 
get_spice_console
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions console_type)
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", 
line 192, in wrapped
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions return function(self, context, instance, 
*args, **kwargs)
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", 
line 182, in inner
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions return f(self, context, instance, *args, **kw)
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", 
line 3599, in get_spice_console
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions instance=instance, console_type=console_type)
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/rpcapi.py", 
line 614, in get_spice_console
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions instance=instance, console_type=console_type)
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
169, in call
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions retry=self.retry)
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 123, 
in _send
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions timeout=timeout, retry=retry)
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 578, in send
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions retry=retry)
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 567, in _send
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions result = self._waiter.wait(msg_id, timeout)
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 459, in wait
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions message = self.waiters.get(msg_id, 
timeout=timeout)
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 347, in get
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions 'to message ID %s' % msg_id)
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions MessagingTimeout: Timed out waiting for a reply 
to message ID d5abd16bee8846eaa764440df982bfab
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: ERROR 
nova.api.openstack.extensions 
Aug 10 10:08:12 ubuntudbs devstack@n-api.service[8770]: INFO 
nova.api.openstack.wsgi [None req-03396686-9700-4323-b7e5-1054fcc167d2 alt_demo 
admin] HTTP exception 

[Yahoo-eng-team] [Bug 1588860] Re: keystone-manage bootstrap cannot recover admin account

2017-08-09 Thread Morgan Fainberg
Mitaka is EOL

** Changed in: keystone/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1588860

Title:
  keystone-manage bootstrap cannot recover admin account

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) mitaka series:
  Fix Released

Bug description:
  The keystone-manage bootstrap command is intended to supersede the
  admin_token middleware. However, one of the common use cases for the
  admin_token middleware was to provide a recovery mechanism for cloud
  operators that had accidentally disabled themselves or lost their
  password.

  However, even after attempting to "re-bootstrap" an existing admin
  with a known password (effectively performing a password reset), the
  admin is still not able to authenticate. The same is true if the admin
  was disabled.
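
  For context, the recovery attempt being described is a re-run along
  these lines (the password value is a placeholder):

      keystone-manage bootstrap \
          --bootstrap-password s3cr3t \
          --bootstrap-username admin \
          --bootstrap-project-name admin \
          --bootstrap-role-name admin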

  This was originally reported in #openstack-ansible by odyssey4me:

  [Fri 09:29]  dolphm lbragstad is keystone-manage bootstrap meant 
to skip the bootstrap if there are already settings in place? what is the right 
way to fix up creds that are lost somehow for the keystone admin?
  [Fri 09:30]  odyssey4me: bootstrap should be idempotent, but i don't 
think it'll change an admin's password if you specify something different
  [Fri 09:31]  dolphm so the options are, I guess, to delete the 
admin account in the db or to use the auth_token middleware?

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1588860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1579604] Re: project delete returns 501 NotImplemented with templated catalog

2017-08-09 Thread Morgan Fainberg
Mitaka is EOL

** Changed in: keystone/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1579604

Title:
  project delete returns 501 NotImplemented with templated catalog

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) mitaka series:
  Fix Released

Bug description:
  We have upgraded to Mitaka and are getting a 501 when deleting a
  project. This happens with both the v2 and v3 APIs. The project is
  actually deleted.

  We are using the stable/mitaka branch and the sql backend.


  
  $ keystone tenant-create --name deleteme

  +-------------+----------------------------------+
  |   Property  |              Value               |
  +-------------+----------------------------------+
  | description |                                  |
  |   enabled   |               True               |
  |      id     | 5fafe2512fb3404ead999c30a23d0107 |
  |     name    |             deleteme             |
  +-------------+----------------------------------+

  
  $ keystone tenant-delete 5fafe2512fb3404ead999c30a23d0107

  The action you have requested has not been implemented. (HTTP 501)
  (Request-ID: req-7ad5ee51-539f-4780-a39a-0f4e9ad092dc)

  
  $ keystone tenant-get 5fafe2512fb3404ead999c30a23d0107

  No tenant with a name or ID of '5fafe2512fb3404ead999c30a23d0107'
  exists.



  In logs:

  2016-05-09 12:06:40.265 16723 WARNING keystone.common.wsgi 
[req-7ad5ee51-539f-4780-a39a-0f4e9ad092dc c0645ff94b864d3d84c438d9855f9cea 
9427903ca1544f0795ba4117d55ed9b2 - default default] The action you have 
requested has not been implemented.
  2016-05-09 12:06:40.269 16723 INFO eventlet.wsgi.server 
[req-7ad5ee51-539f-4780-a39a-0f4e9ad092dc c0645ff94b864d3d84c438d9855f9cea 
9427903ca1544f0795ba4117d55ed9b2 - default default] 128.250.116.173 - - 
[09/May/2016 12:06:40] "DELETE /v2.0/tenants/5fafe2512fb3404ead999c30a23d0107 
HTTP/1.1" 501 354 0.223312

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1579604/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1621626] Re: Unauthenticated requests return information

2017-08-09 Thread Morgan Fainberg
Mitaka is EOL

** Changed in: keystone/mitaka
   Status: New => Won't Fix

** Changed in: keystone/mitaka
   Status: Won't Fix => Fix Released

** Changed in: keystone/mitaka
   Status: Fix Released => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1621626

Title:
  Unauthenticated requests return information

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) mitaka series:
  Won't Fix
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  
  I can get information back on an unauthenticated request.

   $ curl 
http://192.168.122.126:35357/v3/projects/8d34a533f85b423e8589061cde451edd/users/68ec7d9b6e464649b11d1340d5e05666/roles/ca314e7f7faf4f948bf6e7cf2077806e
   {"error": {"message": "Could not find role: 
ca314e7f7faf4f948bf6e7cf2077806e", "code": 404, "title": "Not Found"}}

  This should have returned 401 Unauthenticated, like this:

   $ curl http://192.168.122.126:35357/v3/projects
   {"error": {"message": "The request you have made requires authentication.", 
"code": 401, "title": "Unauthorized"}}

  To recreate, just start up devstack on stable/mitaka and do the above
  request.

  I tried this on master and it's fixed. Probably by
  https://review.openstack.org/#/c/339356/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1621626/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1629446] Re: federated login fails after user is removed from group

2017-08-09 Thread Morgan Fainberg
Mitaka is EOL

** Changed in: keystone/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1629446

Title:
  federated login fails after user is removed from group

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) mitaka series:
  Fix Released
Status in OpenStack Identity (keystone) newton series:
  Fix Committed

Bug description:
  A user part of a group in auth0 tries to login in using the mapping
  below just fine

  [
      {
          "local": [
              {
                  "user": {
                      "name": "{1}::{0}"
                  }
              },
              {
                  "domain": {
                      "id": "default"
                  },
                  "groups": "{1}"
              }
          ],
          "remote": [
              {
                  "type": "HTTP_OIDC_CLAIM_EMAIL"
              },
              {
                  "type": "HTTP_OIDC_CLAIM_GROUPS"
              }
          ]
      }
  ]

  
  Once the user is removed from the group in auth0 and tries to log in:

  Expected Result:
  Logging on to horizon as a federated user via the OpenID Connect
protocol fails with a 401 code:

  {"error": {"message": "The request you have made requires
  authentication.", "code": 401, "title": "Unauthorized"}}

  Actual Result:
  Got 500 instead of 401

  {"error": {"message": "An unexpected error prevented the server from
  fulfilling your request.", "code": 500, "title": "Internal Server
  Error"}}

  error in keystone-all.logs:

  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi 
[req-f5f27f59-788b-494b-9719-bcdbb6b628c0 - - - - -] unexpected EOF while 
parsing (, line 0)
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi Traceback (most 
recent call last):
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File 
"/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/common/wsgi.py",
 line 249, in __call__
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi result = 
method(context, **params)
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File 
"/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/federation/controllers.py",
 line 329, in federated_idp_specific_sso_auth
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi res = 
self.federated_authentication(context, idp_id, protocol_id)
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File 
"/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/federation/controllers.py",
 line 302, in federated_authentication
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi return 
self.authenticate_for_token(context, auth=auth)
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File 
"/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/auth/controllers.py",
 line 396, in authenticate_for_token
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi 
self.authenticate(context, auth_info, auth_context)
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File 
"/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/auth/controllers.py",
 line 520, in authenticate
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi auth_context)
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File 
"/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/auth/plugins/mapped.py",
 line 65, in authenticate
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi 
self.identity_api)
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File 
"/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/auth/plugins/mapped.py",
 line 141, in handle_unscoped_token
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi federation_api, 
identity_api)
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File 
"/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/auth/plugins/mapped.py",
 line 194, in apply_mapping_filter
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi 
identity_provider, protocol, assertion)
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File 
"/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/common/manager.py",
 line 124, in wrapped
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi   File 
"/opt/openstack/current/keystone/local/lib/python2.7/site-packages/keystone/federation/core.py",
 line 98, in evaluate
  2016-09-30 19:32:25.549 23311 ERROR keystone.common.wsgi 
mapped_properties = rule_processor.process(assertion_data)
  

[Yahoo-eng-team] [Bug 1701541] Re: Keystone v3/roles has different response for HEAD and GET (again)

2017-08-09 Thread Morgan Fainberg
As per Lance, this is being marked as Won't Fix. We can revisit when/if
microversions or v4 are implemented.

** Changed in: keystone
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1701541

Title:
  Keystone v3/roles has different response for HEAD and GET (again)

Status in OpenStack Identity (keystone):
  Won't Fix
Status in tempest:
  In Progress

Bug description:
  The issue is very similar to the one already discussed at
  https://bugs.launchpad.net/keystone/+bug/1334368 and
  http://lists.openstack.org/pipermail/openstack-dev/2014-July/039140.html

  # curl -v -X HEAD  
http://172.17.1.18:5000/v3/roles/7acb026c29a24fb2a1d92a4e5291de24/implies/11b21cc37d7644c8bc955ff956b2d56e
 -H "Content-Type: application/json" -H "X-Auth-Token: 
gABZViMqU8rSuv7qlmcUlv1hYHegvN6EelqJPt-MTWBkIOewhSjNeiwZcksDUKm2JOfNtw78iAAmscx86N9UiekxkluvzRpatFyWooOkCATkqJFn4HgCFr_an9X7kmOhJTOguqGH6uCYz4K6ak1NfuEvtRShe3lDXyScL51JaZqtw8bCWzo"
  * About to connect() to 172.17.1.18 port 5000 (#0)
  *   Trying 172.17.1.18...
  * Connected to 172.17.1.18 (172.17.1.18) port 5000 (#0)
  > HEAD 
/v3/roles/7acb026c29a24fb2a1d92a4e5291de24/implies/11b21cc37d7644c8bc955ff956b2d56e
 HTTP/1.1
  > User-Agent: curl/7.29.0
  > Host: 172.17.1.18:5000
  > Accept: */*
  > Content-Type: application/json
  > X-Auth-Token: 
gABZViMqU8rSuv7qlmcUlv1hYHegvN6EelqJPt-MTWBkIOewhSjNeiwZcksDUKm2JOfNtw78iAAmscx86N9UiekxkluvzRpatFyWooOkCATkqJFn4HgCFr_an9X7kmOhJTOguqGH6uCYz4K6ak1NfuEvtRShe3lDXyScL51JaZqtw8bCWzo
  > 
  < HTTP/1.1 204 No Content
  < Date: Fri, 30 Jun 2017 10:09:30 GMT
  < Server: Apache
  < Vary: X-Auth-Token
  < x-openstack-request-id: req-e64410ae-5d4a-48f7-8508-615752877277
  < Content-Type: text/plain
  < 
  * Connection #0 to host 172.17.1.18 left intact

  # curl -v -X GET  
http://172.17.1.18:5000/v3/roles/7acb026c29a24fb2a1d92a4e5291de24/implies/11b21cc37d7644c8bc955ff956b2d56e
 -H "Content-Type: application/json" -H "X-Auth-Token: 
gABZViMqU8rSuv7qlmcUlv1hYHegvN6EelqJPt-MTWBkIOewhSjNeiwZcksDUKm2JOfNtw78iAAmscx86N9UiekxkluvzRpatFyWooOkCATkqJFn4HgCFr_an9X7kmOhJTOguqGH6uCYz4K6ak1NfuEvtRShe3lDXyScL51JaZqtw8bCWzo"
  * About to connect() to 172.17.1.18 port 5000 (#0)
  *   Trying 172.17.1.18...
  * Connected to 172.17.1.18 (172.17.1.18) port 5000 (#0)
  > GET 
/v3/roles/7acb026c29a24fb2a1d92a4e5291de24/implies/11b21cc37d7644c8bc955ff956b2d56e
 HTTP/1.1
  > User-Agent: curl/7.29.0
  > Host: 172.17.1.18:5000
  > Accept: */*
  > Content-Type: application/json
  > X-Auth-Token: 
gABZViMqU8rSuv7qlmcUlv1hYHegvN6EelqJPt-MTWBkIOewhSjNeiwZcksDUKm2JOfNtw78iAAmscx86N9UiekxkluvzRpatFyWooOkCATkqJFn4HgCFr_an9X7kmOhJTOguqGH6uCYz4K6ak1NfuEvtRShe3lDXyScL51JaZqtw8bCWzo
  > 
  < HTTP/1.1 200 OK
  < Date: Fri, 30 Jun 2017 10:09:38 GMT
  < Server: Apache
  < Content-Length: 507
  < Vary: X-Auth-Token,Accept-Encoding
  < x-openstack-request-id: req-cc320571-a59d-4ea2-b459-117053367c55
  < Content-Type: application/json
  < 
  * Connection #0 to host 172.17.1.18 left intact
  {"role_inference": {"implies": {"id": "11b21cc37d7644c8bc955ff956b2d56e", 
"links": {"self": 
"http://172.17.1.18:5000/v3/roles/11b21cc37d7644c8bc955ff956b2d56e"}, "name": 
"tempest-role-1212191884"}, "prior_role": {"id": 
"7acb026c29a24fb2a1d92a4e5291de24", "links": {"self": 
"http://172.17.1.18:5000/v3/roles/7acb026c29a24fb2a1d92a4e5291de24"}, "name": 
"tempest-role-500046640"}}, "links": {"self": 
"http://172.17.1.18:5000/v3/roles/7acb026c29a24fb2a1d92

  
  Depending on the mod_wsgi version and configuration (WSGIMapHEADToGET,
which requires mod_wsgi >= 4.3.0), mod_wsgi might send GET instead of HEAD
in order to avoid invalid responses being cached in case of an application
bug.
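
  The mismatch is also easy to demonstrate from Python (the endpoint,
  role IDs and token are placeholders):

      import requests

      url = ('http://172.17.1.18:5000/v3/roles/PRIOR_ROLE_ID'
             '/implies/IMPLIED_ROLE_ID')
      headers = {'X-Auth-Token': 'TOKEN'}

      # With WSGIMapHEADToGET the HEAD is served as a GET internally,
      # so both calls return 200 rather than the 204 expected for HEAD.
      print(requests.head(url, headers=headers).status_code)
      print(requests.get(url, headers=headers).status_code)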

  Unfortunately tempest expects the wrong behavior, so it also needs to be
changed:

  
tempest.api.identity.admin.v3.test_roles.RolesV3TestJSON.test_implied_roles_create_check_show_delete[id-c90c316c-d706-4728-bcba-eb1912081b69]
  
-

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
"/usr/lib/python2.7/site-packages/tempest/api/identity/admin/v3/test_roles.py", 
line 228, in test_implied_roles_create_check_show_delete
  prior_role_id, implies_role_id)
File 
"/usr/lib/python2.7/site-packages/tempest/lib/services/identity/v3/roles_client.py",
 line 233, in check_role_inference_rule
  self.expected_success(204, resp.status)
File 
"/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 252, 
in expected_success
  raise exceptions.InvalidHttpSuccessCode(details)
  tempest.lib.exceptions.InvalidHttpSuccessCode: The success code is 
different than the expected one
  Details: Unexpected http success status 

[Yahoo-eng-team] [Bug 1644862] Re: domain ldap tls_cacertfile "forgotten" in multidomain configuration

2017-08-09 Thread Morgan Fainberg
Mitaka is EOL

** Changed in: keystone/mitaka
   Status: New => Won't Fix

** Changed in: keystone/mitaka
   Status: Won't Fix => Fix Released

** Changed in: keystone/mitaka
   Status: Fix Released => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1644862

Title:
  domain ldap tls_cacertfile "forgotten" in multidomain configuration

Status in OpenStack Identity (keystone):
  Triaged
Status in OpenStack Identity (keystone) mitaka series:
  Won't Fix

Bug description:
  Environment:
  Centos 7 using the OpenStack Mitaka release

  RPMS from:
  http://mirror.centos.org/centos/7/cloud/$basearch/openstack-mitaka/

  openstack-keystone-9.2.0-1.el7.noarch

  —

  I have a multidomain configuration with multiple AD backends in
  keystone.

  For one of the AD configurations I've configured a custom
  tls_cacertfile as follows:

  «
  [identity]
  driver = ldap

  [assignment]
  driver = ldap

  [ldap]
  url  = ldap://dc1.domain1.ca ldap://dc1.domain1.ca
  use_tls  = true
  …
  »

  For the other:

  «
  [identity]
  driver = ldap

  [assignment]
  driver = ldap

  [ldap]
  url  = ldap://dc1.domain2.ca ldap://dc2.domain2.ca
  query_scope  = sub
  use_tls  = true
  tls_cacertfile   = /etc/keystone/domains/domain2-caroot.pem
  …
  »

  What I've observed is that when logging in to domain2 I get very
  inconsistent behaviour:

  * sometimes fails: "Unable to retrieve authorized projects."
  * sometimes fails: "An error occurred authenticating. Please try again later."
  * sometimes fails: "Unable to authenticate to any available projects."
  * sometimes fails: "Invalid credentials."
  * sometimes succeeds

  Example traceback from keystone log:
  «
  2016-11-25 09:54:06.699 27879 INFO keystone.common.wsgi 
[req-c145506b-69fc-4fc2-9bad-76d77a79e3ca - - - - -] POST 
http://os-controller.lab.netdirect.ca:5000/v3/auth/tokens
  2016-11-25 09:54:07.147 27879 ERROR keystone.common.wsgi 
[req-c145506b-69fc-4fc2-9bad-76d77a79e3ca - - - - -] {'info': "TLS error 
-8179:Peer's Certificate issuer is not recognized.", 'desc': 'Connect error'}
  2016-11-25 09:54:07.147 27879 ERROR keystone.common.wsgi Traceback (most 
recent call last):
  …
  2016-11-25 09:54:07.147 27879 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/ldappool/__init__.py", line 224, in 
_create_connector
  2016-11-25 09:54:07.147 27879 ERROR keystone.common.wsgi raise 
BackendError(str(exc), backend=conn)
  2016-11-25 09:54:07.147 27879 ERROR keystone.common.wsgi BackendError: 
{'info': "TLS error -8179:Peer's Certificate issuer is not recognized.", 
'desc': 'Connect error'}
  »

  I've also tried using a merged tls_cacertfile containing the system
  default CA roots plus the domain2-specific CA root. That seemed to
  improve things but did not fix the problem.

  The workaround is putting the merged cacertfile into BOTH domain
  configurations, which should not be necessary. After doing so I
  haven't had any trouble.
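
  The merged cacertfile for the workaround can be produced along these
  lines (paths are illustrative; the system bundle location varies by
  distro, this being the CentOS 7 one):

      cat /etc/pki/tls/certs/ca-bundle.crt \
          /etc/keystone/domains/domain2-caroot.pem \
          > /etc/keystone/domains/merged-caroot.pem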

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1644862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1587777] Re: Mitaka: dashboard performance

2017-08-09 Thread Morgan Fainberg
I am marking this bug closed as the two patches in comment #17 have merged
(including the backport).

** Changed in: keystone
   Status: New => Fix Released

** Changed in: keystone
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1587777

Title:
  Mitaka: dashboard performance

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Environment: Openstack Mitaka on top of Leap 42.1, 1 control node, 2
  compute nodes, 3-node-Ceph-cluster.

  Issue: Since switching to Mitaka, we're experiencing severe delays
  when accessing the dashboard - i.e. switching between "Compute -
  Overview" and "Compute - Instances" takes 15+ seconds, even after
  multiple invocations.

  Steps to reproduce:
  1. Install Openstack Mitaka, incl. dashboard & navigate through the dashboard.

  Expected result:
  Browsing through the dashboard with reasonable waiting times.

  Actual result:
  Refreshing the dashboard can take up to 30 secs; switching between views
(e.g. volumes to instances) takes about 15 secs on average.

  Additional information:
  I've had a look at the requests, the Apache logs and our control node's
stats and noticed that it's a single call that's taking all the time. I see
no indication of any error; it seems that once WSGI is invoked, that call
simply takes its time. Intermediate curl requests are logged, so I can see
it's doing its work. Looking at "vmstat" I can see that it's user space
taking all the load (Apache / mod_wsgi drives its CPU to 100%, while other
CPUs are idle - no i/o wait, no system space etc.).

  ---cut here---
  control1:/var/log # top
  top - 10:51:35 up 8 days, 18:16,  2 users,  load average: 2,17, 1,65, 1,48
  Tasks: 383 total,   2 running, 381 sleeping,   0 stopped,   0 zombie
  %Cpu0  : 31,7 us,  2,9 sy,  0,0 ni, 65,0 id,  0,3 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu1  : 13,1 us,  0,7 sy,  0,0 ni, 86,2 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu2  : 17,2 us,  0,7 sy,  0,0 ni, 81,2 id,  1,0 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu3  : 69,4 us, 12,6 sy,  0,0 ni, 17,9 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu4  : 14,6 us,  1,0 sy,  0,0 ni, 84,4 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu5  : 16,9 us,  0,7 sy,  0,0 ni, 81,7 id,  0,7 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu6  : 17,3 us,  1,3 sy,  0,0 ni, 81,0 id,  0,3 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu7  : 21,2 us,  1,3 sy,  0,0 ni, 77,5 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
  KiB Mem:  65943260 total, 62907676 used,  3035584 free, 1708 buffers
  KiB Swap:  2103292 total,        0 used,  2103292 free. 53438560 cached Mem

    PID USER     PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
   6776 wwwrun   20   0  565212 184504  13352 S 100,3 0,280   0:07.83 httpd-prefork
   1130 root     20   0  399456  35760  22508 S 5,980 0,054 818:13.17 X
   1558 sddm     20   0  922744 130440  72148 S 5,316 0,198 966:03.82 sddm-greeter
  20999 nova     20   0  285888 116292   5696 S 2,658 0,176 164:27.08 nova-conductor
  21030 nova     20   0  758752 182644  16512 S 2,658 0,277  58:20.40 nova-api
  18757 heat     20   0  273912  73740   4612 S 2,326 0,112  50:48.72 heat-engine
  18759 heat     20   0  273912  73688   4612 S 2,326 0,112   4:27.54 heat-engine
  20995 nova     20   0  286236 116644   5696 S 2,326 0,177 164:38.89 nova-conductor
  21027 nova     20   0  756204 180752  16980 S 2,326 0,274  58:20.09 nova-api
  21029 nova     20   0  756536 180644  16496 S 2,326 0,274 139:46.29 nova-api
  21031 nova     20   0  756888 180920  16512 S 2,326 0,274  58:36.37 nova-api
  24771 glance   20   0 2312152 139000  17360 S 2,326 0,211  24:47.83 glance-api
  24772 glance   20   0  631672 111248   4848 S 2,326 0,169  22:59.77 glance-api
  28424 cinder   20   0  720972 108536   4968 S 2,326 0,165  28:31.42 cinder-api
  28758 neutron  20   0  317708 101812   4472 S 2,326 0,154 153:45.55 neutron-server

  #

  control1:/var/log # vmstat 1
  procs ---memory-- ---swap-- -io -system-- --cpu-
   r  b   swpd   free   buff  cache      si   so    bi    bo   in    cs us sy id wa st
   1  0  0 2253144   1708 53440472      0    0 46044 11  1 88  0  0
   0  0  0 2255588   1708 53440476      0    0     0   568 3063  7627 15  1 83  0  0
   1  0  0 2247596   1708 53440476      0    0     0   144 3066  6803 14  2 83  0  0
   1  0  0 2156008   1708 53440476      0    0     0    72 3474  7193 25  3 72  0  0
   2  0  0 2131968   1708 53440484      0    0     0   652 3497  8565 28  2 70  0  0
   3  1  0 2134000   1708 53440512      0    0     0 14340 3629 10644 25  2 71  2  0
   2  0  0 2136956   1708 53440580      0    0     0    12 3483 10620 25  2 70  3  0
   9  1  0 2138164   1708 53440596      0    0     0   248 3442  9980 27  1 72  0  0
   4

[Yahoo-eng-team] [Bug 1681348] Re: keystone list project api returns empty if "?name=" is added as url parameter

2017-08-09 Thread Morgan Fainberg
Unfortunately, we cannot change the behavior without microversion support
or something similar. ?name= will need to keep returning an empty list, as
that is the contract. I am closing this as Won't Fix.

** Changed in: keystone
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1681348

Title:
  keystone list project api returns empty if "?name=" is added as url
  parameter

Status in OpenStack Identity (keystone):
  Won't Fix

Bug description:
  request: https://{{keystone_ip}}:5000/v3/projects?name=
  expect: returns all projects of the current user.
  but: returns an empty list.

  Other OpenStack components obey this convention properly, so keystone
  is inconsistent with them.
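
  A quick way to see the contract in action; a reproduction sketch against
  the v3 API, where the endpoint and token are placeholders:

      import requests

      TOKEN = '...'   # placeholder: any valid keystone token
      resp = requests.get(
          'https://keystone.example.com:5000/v3/projects?name=',
          headers={'X-Auth-Token': TOKEN})
      print(resp.json()['projects'])   # [] -- an empty name matches nothing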

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1681348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1663458] Re: brutal stop of ovs-agent doesn't kill ryu controller

2017-08-09 Thread Ben Nemec
It looks like this was fixed a while ago.  Feel free to reopen if I'm
mistaken.

** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1663458

Title:
  brutal stop of ovs-agent doesn't kill ryu controller

Status in neutron:
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  It seems like when we kill neutron-ovs-agent and start it again, the
  ryu controller fails to start because the previous instance (in
  eventlet) is still running.

  (... ovs agent is failing to start and is brutally killed)

  Trying to start the process 5 minutes later:
  INFO neutron.common.config [-] /usr/bin/neutron-openvswitch-agent version 
10.0.0.0rc2.dev33
  INFO ryu.base.app_manager [-] loading app 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp
  INFO ryu.base.app_manager [-] loading app ryu.app.ofctl.service
  INFO ryu.base.app_manager [-] loading app ryu.controller.ofp_handler
  INFO ryu.base.app_manager [-] instantiating app 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp of 
OVSNeutronAgentRyuApp
  INFO ryu.base.app_manager [-] instantiating app ryu.controller.ofp_handler of 
OFPHandler
  INFO ryu.base.app_manager [-] instantiating app ryu.app.ofctl.service of 
OfctlService
  ERROR ryu.lib.hub [-] hub: uncaught exception: Traceback (most recent call 
last):
File "/usr/lib/python2.7/site-packages/ryu/lib/hub.py", line 54, in _launch
  return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ryu/controller/controller.py", line 
97, in __call__
  self.ofp_ssl_listen_port)
File "/usr/lib/python2.7/site-packages/ryu/controller/controller.py", line 
120, in server_loop
  datapath_connection_factory)
File "/usr/lib/python2.7/site-packages/ryu/lib/hub.py", line 117, in 
__init__
  self.server = eventlet.listen(listen_info)
File "/usr/lib/python2.7/site-packages/eventlet/convenience.py", line 43, 
in listen
  sock.bind(addr)
File "/usr/lib64/python2.7/socket.py", line 224, in meth
  return getattr(self._sock,name)(*args)
  error: [Errno 98] Address already in use
  INFO neutron.agent.ovsdb.native.vlog [-] tcp:127.0.0.1:6640: connecting...
  INFO neutron.agent.ovsdb.native.vlog [-] tcp:127.0.0.1:6640: connected
  INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge 
[-] Bridge br-int has datapath-ID badb62a6184f
  ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch 
[-] Switch connection timeout

  I haven't figured out yet how the previous instance of ovs agent was
  killed (my theory is that Puppet killed it but I don't have the
  killing code yet, I'll update the bug asap).
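
  The bind failure itself is easy to reproduce in isolation; a minimal
  sketch assuming nothing beyond what the log shows (a second bind of a
  port that is still held by the previous process), with an illustrative
  port number:

      import eventlet

      listen_info = ('127.0.0.1', 6633)   # illustrative, not ryu's config
      first = eventlet.listen(listen_info)
      try:
          second = eventlet.listen(listen_info)  # previous holder still alive
      except Exception as exc:
          print(exc)   # [Errno 98] Address already in use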

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1663458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709715] [NEW] cloud-init apply_net_config_names doesn't grok v2 configs

2017-08-09 Thread Ryan Harper
Public bug reported:

when supplying cloud-init with a network-configuration in version:2
format, the rename code doesn't find any of the set-name parameters as
the function expects a v1 config format.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1709715

Title:
  cloud-init apply_net_config_names doesn't grok v2 configs

Status in cloud-init:
  New

Bug description:
  when supplying cloud-init with a network-configuration in version:2
  format, the rename code doesn't find any of the set-name parameters as
  the function expects a v1 config format.
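
  For contrast, a sketch of the two shapes involved and of pulling the
  renames out of a v2 config; the helper below is illustrative, not
  cloud-init's internal code:

      # v1 expresses renames as physical entries: 'name' plus 'mac_address'.
      v1 = {'version': 1,
            'config': [{'type': 'physical', 'name': 'eth0',
                        'mac_address': 'aa:bb:cc:dd:ee:ff'}]}

      # v2 (netplan-style) expresses them as match/macaddress plus set-name.
      v2 = {'version': 2,
            'ethernets': {'nic0': {'match': {'macaddress': 'aa:bb:cc:dd:ee:ff'},
                                   'set-name': 'nic0'}}}

      def v2_renames(cfg):
          """Collect (mac, target_name) pairs from v2 set-name entries."""
          pairs = []
          for eth in cfg.get('ethernets', {}).values():
              mac = eth.get('match', {}).get('macaddress')
              name = eth.get('set-name')
              if mac and name:
                  pairs.append((mac, name))
          return pairs

      print(v2_renames(v2))   # [('aa:bb:cc:dd:ee:ff', 'nic0')]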

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1709715/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524916] Re: neutron-ns-metadata-proxy uses ~25MB/router in production

2017-08-09 Thread Ben Nemec
As others have noted, the rpm upgrade process should handle updating the
rootwrap filters.  The only exception would be if a user edited them
after installation, but in that case they're responsible for merging in
the updated ones themselves.

** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524916

Title:
  neutron-ns-metadata-proxy uses ~25MB/router in production

Status in neutron:
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  [root@mac6cae8b61e442 memexplore]# ./memexplore.py all metadata-proxy | cut 
-c 1-67
  25778 kB  (pid 420) /usr/bin/python /bin/neutron-ns-metadata-proxy -
  25774 kB  (pid 1468) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25778 kB  (pid 1472) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25770 kB  (pid 1474) /usr/bin/python /bin/neutron-ns-metadata-proxy
  26528 kB  (pid 1489) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25778 kB  (pid 1520) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25778 kB  (pid 1738) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25774 kB  (pid 1814) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25774 kB  (pid 2024) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25774 kB  (pid 3961) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25774 kB  (pid 4076) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25770 kB  (pid 4099) /usr/bin/python /bin/neutron-ns-metadata-proxy
  [...]
  25778 kB  (pid 31386) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25778 kB  (pid 31403) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25774 kB  (pid 31416) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25778 kB  (pid 31453) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25770 kB  (pid 31483) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25770 kB  (pid 31647) /usr/bin/python /bin/neutron-ns-metadata-proxy
  25774 kB  (pid 31743) /usr/bin/python /bin/neutron-ns-metadata-proxy

  2,581,230 kB Total PSS
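
  A total like that can be approximated by walking /proc; a rough sketch of
  what a memexplore.py-style tool computes, assuming a Linux /proc layout
  and nothing about the real script:

      import glob

      def total_pss_kb(pattern):
          total = 0
          for smaps in glob.glob('/proc/[0-9]*/smaps'):
              pid = smaps.split('/')[2]
              try:
                  with open('/proc/%s/cmdline' % pid) as f:
                      if pattern not in f.read():
                          continue
                  with open(smaps) as f:
                      for line in f:
                          if line.startswith('Pss:'):
                              total += int(line.split()[1])  # value is in kB
              except IOError:   # process exited while we were scanning
                  continue
          return total

      print(total_pss_kb('neutron-ns-metadata-proxy'))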

  if we look explicitly at one of those processes we see:

  # ./memexplore.py pss 24039
  0 kB  7f97db981000-7f97dbb81000 ---p 0005f000 fd:00 4298776438
 /usr/lib64/libpcre.so.1.2.0
  0 kB  7f97dbb83000-7f97dbba4000 r-xp  fd:00 4298776486
 /usr/lib64/libselinux.so.1
  0 kB  7fff16ffe000-7fff1700 r-xp  00:00 0 
 [vdso]
  0 kB  7f97dacb5000-7f97dacd1000 r-xp  fd:00 4298779123
 /usr/lib64/python2.7/lib-dynload/_io.so
  0 kB  7f97d6a06000-7f97d6c05000 ---p 000b1000 fd:00 4298777149
 /usr/lib64/libsqlite3.so.0.8.6
  [...]
  0 kB  7f97d813a000-7f97d8339000 ---p b000 fd:00 4298779157
 /usr/lib64/python2.7/lib-dynload/pyexpat.so
  0 kB  7f97dbba4000-7f97dbda4000 ---p 00021000 fd:00 4298776486
 /usr/lib64/libselinux.so.1
  0 kB  7f97db4f7000-7f97db4fb000 r-xp  fd:00 4298779139
 /usr/lib64/python2.7/lib-dynload/cStringIO.so
  0 kB  7f97dc81e000-7f97dc81f000 rw-p  00:00 0
  0 kB  7f97d8545000-7f97d8557000 r-xp  fd:00 4298779138
 /usr/lib64/python2.7/lib-dynload/cPickle.so
  0 kB  7f97d9fd3000-7f97d9fd7000 r-xp  fd:00 4298779165
 /usr/lib64/python2.7/lib-dynload/timemodule.so
  0 kB  7f97d99c4000-7f97d9bc3000 ---p 2000 fd:00 4298779147
 /usr/lib64/python2.7/lib-dynload/grpmodule.so
  0 kB  7f97daedb000-7f97daede000 r-xp  fd:00 4298779121
 /usr/lib64/python2.7/lib-dynload/_heapq.so
  0 kB  7f97ddfd4000-7f97ddfd7000 r-xp  fd:00 4298779119
 /usr/lib64/python2.7/lib-dynload/_functoolsmodule.so
  0 kB  7f97d8b67000-7f97d8b78000 r-xp  fd:00 4298779141
 /usr/lib64/python2.7/lib-dynload/datetime.so
  0 kB  7f97d7631000-7f97d7635000 r-xp  fd:00 4298776496
 /usr/lib64/libuuid.so.1.3.0
  0 kB  7f97dd59e000-7f97dd5a6000 r-xp  fd:00 4298779132
 /usr/lib64/python2.7/lib-dynload/_ssl.so
  0 kB  7f97dbfc-7f97dbfc2000 rw-p  00:00 0
  0 kB  7f97dd332000-7f97dd394000 r-xp  fd:00 4298776137
 /usr/lib64/libssl.so.1.0.1e
  0 kB  7f97d6e22000-7f97d7021000 ---p 4000 fd:00 6442649369
 /usr/lib64/python2.7/site-packages/sqlalchemy/cresultproxy.so
  0 kB  7f97d95bb000-7f97d97ba000 ---p b000 fd:00 4298779156
 /usr/lib64/python2.7/lib-dynload/parsermodule.so
  0 kB  7f97da3dd000-7f97da3e r-xp  fd:00 4298779129
 /usr/lib64/python2.7/lib-dynload/_randommodule.so
  0 kB  7f97dddcf000-7f973000 r-xp  fd:00 4298779125
 /usr/lib64/python2.7/lib-dynload/_localemodule.so
  0 kB  7f97da7e5000-7f97da7ea000 r-xp  fd:00 4298779136   

[Yahoo-eng-team] [Bug 1602400] Re: os-quota-class-sets APIs are undocumented

2017-08-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/477740
Committed: 
https://git.openstack.org/cgit/openstack/cinder/commit/?id=6932f2a84171a44b29482cd3d5e93629e8e869d8
Submitter: Jenkins
Branch: master

commit 6932f2a84171a44b29482cd3d5e93629e8e869d8
Author: Felipe Monteiro 
Date:   Tue Jun 27 05:22:00 2017 +0100

[api-ref] Add api-ref for os-quota-class-sets APIs

This commit adds the api documentation for the
GET/PUT os-quota-class-set APIs (v2 and v3).

Change-Id: Idb51b7b90a081775d2d836bf6d9ec8b9c0399f1b
Closes-Bug: #1602400


** Changed in: cinder
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1602400

Title:
  os-quota-class-sets APIs are undocumented

Status in Cinder:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  http://developer.openstack.org/api-ref does not document the os-quota-
  class-sets APIs for either nova or cinder.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1602400/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709693] [NEW] Cannot create network in the admin/networks panel without creating a subnet

2017-08-09 Thread Lucas H. Xu
Public bug reported:

How to reproduce this:

In master(pike) horizon,

Go to Admin dashboard and networks panel

Click "Create a network", give your network a name, select a project and
uncheck "Create Subnet".

Click the Create button and you will not be able to proceed.

See attached screenshot for more information.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Screen Shot 2017-08-09 at 14.36.54.png"
   
https://bugs.launchpad.net/bugs/1709693/+attachment/4929666/+files/Screen%20Shot%202017-08-09%20at%2014.36.54.png

** Description changed:

  How to reproduce this:
+ 
+ In master(pike) horizon,
  
  Go to Admin dashboard and networks panel
  
  Click Create a network, give your network a name, select a project and
  unclear "Create Subnet"
  
  Click Create button and you should not be able to proceed.
  
  See attached screenshot for more information.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1709693

Title:
  Cannot create network in the admin/networks panel without creating a
  subnet

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  How to reproduce this:

  In master(pike) horizon,

  Go to Admin dashboard and networks panel

  Click "Create a network", give your network a name, select a project and
  uncheck "Create Subnet".

  Click the Create button and you will not be able to proceed.

  See attached screenshot for more information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1709693/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1696834] Re: Intermittent "KeyError: 'allocations'" in functional tests

2017-08-09 Thread Matt Riedemann
This is likely fixed by these changes to use wsgi_intercept in our
compute API and Placement API fixtures:

https://github.com/openstack/nova/commit/fdf27abf7db233ca51f12e2926d78c272b54935b

https://github.com/openstack/nova/commit/eed6ced78776a3b9a7ada7b0c8ff74eaa376efaf

** Changed in: nova
   Status: Confirmed => Fix Released

** Changed in: nova
 Assignee: (unassigned) => Chris Dent (cdent)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1696834

Title:
  Intermittent "KeyError: 'allocations'" in functional tests

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Seen here:

  http://logs.openstack.org/87/472287/1/gate/gate-nova-tox-functional-
  py35-ubuntu-xenial/79cb96d/console.html#_2017-06-08_17_42_12_495403

  2017-06-08 17:42:12.494766 | b'2017-06-08 17:38:24,561 ERROR 
[nova.compute.manager] Error updating resources for node fake-mini.'
  2017-06-08 17:42:12.494781 | b'Traceback (most recent call last):'
  2017-06-08 17:42:12.494821 | b'  File 
"/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/nova/compute/manager.py",
 line 6594, in update_available_resource_for_node'
  2017-06-08 17:42:12.494840 | b'rt.update_available_resource(context, 
nodename)'
  2017-06-08 17:42:12.494881 | b'  File 
"/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/nova/compute/resource_tracker.py",
 line 626, in update_available_resource'
  2017-06-08 17:42:12.494900 | b'
self._update_available_resource(context, resources)'
  2017-06-08 17:42:12.494946 | b'  File 
"/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/.tox/functional-py35/lib/python3.5/site-packages/oslo_concurrency/lockutils.py",
 line 271, in inner'
  2017-06-08 17:42:12.494960 | b'return f(*args, **kwargs)'
  2017-06-08 17:42:12.495012 | b'  File 
"/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/nova/compute/resource_tracker.py",
 line 667, in _update_available_resource'
  2017-06-08 17:42:12.495062 | b'
self._update_usage_from_instances(context, instances, nodename)'
  2017-06-08 17:42:12.495111 | b'  File 
"/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/nova/compute/resource_tracker.py",
 line 1047, in _update_usage_from_instances'
  2017-06-08 17:42:12.495133 | b'
self._remove_deleted_instances_allocations(context, cn)'
  2017-06-08 17:42:12.495182 | b'  File 
"/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/nova/compute/resource_tracker.py",
 line 1055, in _remove_deleted_instances_allocations'
  2017-06-08 17:42:12.495194 | b'cn.uuid) or {}'
  2017-06-08 17:42:12.495234 | b'  File 
"/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/nova/scheduler/client/__init__.py",
 line 37, in __run_method'
  2017-06-08 17:42:12.495256 | b'return getattr(self.instance, 
__name)(*args, **kwargs)'
  2017-06-08 17:42:12.495294 | b'  File 
"/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/nova/scheduler/client/report.py",
 line 55, in wrapper'
  2017-06-08 17:42:12.495318 | b'return f(self, *a, **k)'
  2017-06-08 17:42:12.495362 | b'  File 
"/home/jenkins/workspace/gate-nova-tox-functional-py35-ubuntu-xenial/nova/scheduler/client/report.py",
 line 914, in get_allocations_for_resource_provider'
  2017-06-08 17:42:12.495390 | b"return resp.json()['allocations']"
  2017-06-08 17:42:12.495403 | b"KeyError: 'allocations'"

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22KeyError%5C%22%20AND%20message%3A%5C%22allocations%5C%22%20AND%20tags%3A%5C%22console%5C%22%20AND%20project%3A%5C%22openstack%2Fnova%5C%22=7d
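
  A defensive variant of the failing call would simply tolerate the missing
  key; a sketch only, since the actual fix (linked above) reworked the test
  fixtures rather than the client:

      def allocations_from(resp):
          """Return the allocations dict, tolerating an error body."""
          # resp.json() may be an error payload with no 'allocations' key
          return resp.json().get('allocations', {})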

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1696834/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511775] Re: Revoking a role revokes the unscoped token for a user

2017-08-09 Thread Lance Bragstad
** Changed in: keystone
   Status: In Progress => Invalid

** Changed in: keystone
 Assignee: Lance Bragstad (lbragstad) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1511775

Title:
  Revoking a role revokes the unscoped token for a user

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  In Juno and Kilo, when a role is revoked from a user on a project, a
  callback is triggered that invalidates all of that user's tokens.  I
  can see why we'd want to do that for scoped tokens. But by revoking
  the unscoped token as well, the user is forced to log out and log back
  in.  It seems like the unscoped token should be left alone, since
  revoking a role is an authorization change, and the unscoped token is
  an authentication issue.
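
  A reproduction sketch with plain HTTP against the v3 API; the endpoint,
  credentials and the admin token are placeholders, and the role-revocation
  step is elided:

      import requests

      KS = 'https://keystone.example.com:5000'
      body = {'auth': {'identity': {'methods': ['password'],
              'password': {'user': {'name': 'demo',
                                    'domain': {'id': 'default'},
                                    'password': 'secret'}}}}}  # no scope block
      token = requests.post(KS + '/v3/auth/tokens',
                            json=body).headers['X-Subject-Token']

      # ... revoke one of demo's roles on a project here ...

      ADMIN_TOKEN = '...'   # placeholder: a token allowed to validate tokens
      check = requests.get(KS + '/v3/auth/tokens',
                           headers={'X-Auth-Token': ADMIN_TOKEN,
                                    'X-Subject-Token': token})
      print(check.status_code)   # 404 after the revocation, per this report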

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1511775/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1699144] Re: Default image visibility should be configurable

2017-08-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/481794
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=9edda95f043cf5ba2cf02adf1d19ebfc413c9adf
Submitter: Jenkins
Branch: master

commit 9edda95f043cf5ba2cf02adf1d19ebfc413c9adf
Author: Ying Zuo 
Date:   Thu Jul 6 13:54:13 2017 -0700

Make default visibility option on create image modal configurable

If the user is allowed to create public images, the default visibility
option on the create image modal is public unless the setting
image_visibility is set to "private" on local_settings.py.

Closes-bug: #1699144
Change-Id: Ib1fc6c846ba3b7e2ee303749aca797b0e6707f37


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1699144

Title:
  Default image visibility should be configurable

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  On the angular create image modal, the visibility is currently default
  to public. Would be nice if this is configurable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1699144/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511775] Re: Revoking a role revokes the unscoped token for a user

2017-08-09 Thread OpenStack Infra
** Changed in: keystone
   Status: Invalid => In Progress

** Changed in: keystone
 Assignee: (unassigned) => Lance Bragstad (lbragstad)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1511775

Title:
  Revoking a role revokes the unscoped token for a user

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  In Juno and Kilo, when a role is revoked from a user on a project, a
  callback is triggered that invalidates all of that user's tokens.  I
  can see why we'd want to do that for scoped tokens. But by revoking
  the unscoped token as well, the user is forced to log out and log back
  in.  It seems like the unscoped token should be left alone, since
  revoking a role is an authorization change, and the unscoped token is
  an authentication issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1511775/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709625] [NEW] Sinkhole returns callable when accessing target attribute

2017-08-09 Thread Ken Giusti
Public bug reported:

Refer:

https://bugs.launchpad.net/neutron/+bug/1705351

And:

https://review.openstack.org/#/c/491851/

Sinkhole should simply return None when 'target' attribute is
referenced.
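
A sketch of the requested behaviour (hypothetical structure, not neutron's
actual class):

    class Sinkhole(object):
        """Absorbs calls made to a messaging backend during shutdown."""

        def _swallow(self, *args, **kwargs):
            pass

        def __getattr__(self, name):
            if name == 'target':
                # Callers read 'target' as data; handing back a callable
                # here is exactly what this report complains about.
                return None
            return self._swallow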

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1709625

Title:
  Sinkhole returns callable when accessing target attribute

Status in neutron:
  New

Bug description:
  Refer:

  https://bugs.launchpad.net/neutron/+bug/1705351

  And:

  https://review.openstack.org/#/c/491851/

  Sinkhole should simply return None when 'target' attribute is
  referenced.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1709625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708961] Re: migration of single instance from multi-instance request spec fails with IndexError

2017-08-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/491439
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=2bd7df84dcf8a45ce92b82f2360a1b39df522297
Submitter: Jenkins
Branch: master

commit 2bd7df84dcf8a45ce92b82f2360a1b39df522297
Author: Sylvain Bauza 
Date:   Mon Aug 7 13:03:37 2017 +0200

Fix migrate single instance when it was created concurrently

When a multiple-instance creation is requested by the user, we create a
single RequestSpec per instance (each of them having the corresponding
instance UUID) but we keep track of how many concurrent instances were
created at once by updating a field named 'num_instances' and we persist
it.

Unfortunately, due to Ifc5cf482209e4f6f4e3e39b24389bd3563d86444 we now
look up that field in the scheduler to know how many allocations to
make. Since a move operation only passes a single instance UUID to the
scheduler, there is a discrepancy that can lead to an ugly IndexError.
Default that loop count to what is passed over RPC and, when that is
absent, fall back to what the RequestSpec record knows.

Change-Id: If7da79356174be57481ef246618221e3b2ff8200
Closes-Bug: #1708961


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1708961

Title:
  migration of single instance from multi-instance request spec fails
  with IndexError

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Nova master, as of August 6th, 2017 (head is
  5971dde5d945bcbe1e81b87d342887abd5d2eece).

  If you make multiple instances from one request:

 openstack server create --flavor c1 --image $IMAGE --nic net-
  id=$NET_ID --min 5 --max 10 x2

  and then try to migrate just one of those instances:

 nova migrate --poll x2-1

  The API generates a 500 because there's an IndexError in the
  filter_scheduler, at line 190. `num_instances` is 9 and the loop
  continues after the first allocations are claimed. On the second loop
  `num` is one, but the list of instance_uuids is only one item long, so
  IndexError.

  At line 162, where num_instances is assigned, we probably need to take
  the length of instance_uuids (if it is not None) instead of the
  num_instances from the spec_obj.
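
  The shape of that change, as a sketch (the merged patch is linked above):

      def resolve_num_instances(spec_obj, instance_uuids):
          """Prefer the UUID list passed over RPC; fall back to the spec."""
          if instance_uuids is not None:
              return len(instance_uuids)   # 1 for a single-instance migrate
          return spec_obj.num_instances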

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1708961/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709032] Re: functional job tests get stuck

2017-08-09 Thread Jakub Libosvar
The reproducer is as follows:

kernel: 4.4.0-89-generic
conntrack: 1:1.4.3-3
conntrackd: 1:1.4.3-3

Create a conntrack entry:

sudo conntrack -I --protonum tcp --src 1.2.3.4 --sport 65535 --dst
8.8.8.8  --dport 6  --state ESTABLISHED --timeout 120


Trace from dmesg:
 [ 2964.587682] [ cut here ]
 [ 2964.53] kernel BUG at 
/build/linux-YaBj6t/linux-4.4.0/net/netfilter/nf_conntrack_extend.c:91!
 [ 2964.589954] invalid opcode:  [#1] SMP
 [ 2964.590556] Modules linked in: br_netfilter bridge openvswitch libcrc32c 
nf_conntrack_netlink nfnetlink ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6 
nf_defrag_ipv6 ip6table_filter ip6_tables nls_utf8 ipt_REJECT nf_reject_ipv4 
nf_log_ipv4 nf_log_common xt_LOG xt_limit xt_tcpudp nf_conntrack_ipv4 isofs 
nf_defrag_ipv4 xt_conntrack nf_conntrack iptable_filter ip_tables x_tables 
hid_generic ppdev crct10dif_pclmul crc32_pclmul usbhid hid snd_pcsp 
ghash_clmulni_intel joydev aesni_intel snd_pcm input_leds parport_pc aes_x86_64 
i2c_piix4 snd_timer lrw evbug parport snd gf128mul 8250_fintek mac_hid 
serio_raw glue_helper soundcore ablk_helper cryptd ib_iser rdma_cm iw_cm ib_cm 
ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi 
scsi_transport_iscsi 8021q garp mrp stp llc autofs4 ttm drm_kms_helper
 [ 2964.598769]  syscopyarea sysfillrect sysimgblt fb_sys_fops drm psmouse 
pata_acpi floppy
 [ 2964.599587] CPU: 0 PID: 12029 Comm: conntrack Not tainted 4.4.0-89-generic 
#112-Ubuntu
 [ 2964.600347] Hardware name: Fedora Project OpenStack Nova, BIOS 
1.9.1-5.el7_3.1 04/01/2014
 [ 2964.601178] task: 8802331b5940 ti: 8800ba5dc000 task.ti: 
8800ba5dc000
 [ 2964.602169] RIP: 0010:[]  [] 
__nf_ct_ext_add_length+0x141/0x1b0 [nf_conntrack]
 [ 2964.603408] RSP: 0018:8800ba5df9a0  EFLAGS: 00010246
 [ 2964.604043] RAX: 0009 RBX: 880234303180 RCX: 
02080020
 [ 2964.604802] RDX:  RSI: 0009 RDI: 

 [ 2964.606483] RBP: 8800ba5df9e8 R08: 88023fc1a0c0 R09: 
8800bb108560
 [ 2964.607298] R10: 8800bb108500 R11: 3a8d6867 R12: 
8800bb108500
 [ 2964.608090] R13: 8800ba5dfb58 R14: 81ef5f00 R15: 
8800ba5dfa94
 [ 2964.608883] FS:  7f4784492700() GS:88023fc0() 
knlGS:
 [ 2964.609895] CS:  0010 DS:  ES:  CR0: 80050033
 [ 2964.610542] CR2: 7f4784071520 CR3: bab56000 CR4: 
06f0
 [ 2964.611327] DR0:  DR1:  DR2: 

 [ 2964.612120] DR3:  DR6: fffe0ff0 DR7: 
0400
 [ 2964.612873] Stack:
 [ 2964.613197]  00600078 0009 880234303180 
fff4
 [ 2964.614137]  880234303180 0002 8800ba5dfb58 
81ef5f00
 [ 2964.615091]  8800ba5dfa94 8800ba5dfa70 c03a4c34 

 [ 2964.616096] Call Trace:
 [ 2964.616429]  [] ctnetlink_create_conntrack+0x244/0x4d0 
[nf_conntrack_netlink]
 [ 2964.617433]  [] ? __nf_conntrack_find_get+0x34d/0x370 
[nf_conntrack]
 [ 2964.618392]  [] ctnetlink_new_conntrack+0x44b/0x650 
[nf_conntrack_netlink]
 [ 2964.619549]  [] ? nfnetlink_net_exit_batch+0x70/0x70 
[nfnetlink]
 [ 2964.620561]  [] nfnetlink_rcv_msg+0x214/0x220 [nfnetlink]
 [ 2964.621305]  [] ? nfnetlink_net_exit_batch+0x70/0x70 
[nfnetlink]
 [ 2964.62]  [] netlink_rcv_skb+0xa4/0xc0
 [ 2964.622805]  [] nfnetlink_rcv+0x295/0x543 [nfnetlink]
 [ 2964.623517]  [] ? netlink_lookup+0xdc/0x140
 [ 2964.624179]  [] netlink_unicast+0x18a/0x240
 [ 2964.624803]  [] netlink_sendmsg+0x2fb/0x3a0
 [ 2964.625426]  [] ? aa_sock_msg_perm+0x61/0x150
 [ 2964.626158]  [] sock_sendmsg+0x38/0x50
 [ 2964.627035]  [] SYSC_sendto+0x101/0x190
 [ 2964.627651]  [] ? __do_page_fault+0x1b4/0x400
 [ 2964.628285]  [] SyS_sendto+0xe/0x10
 [ 2964.628831]  [] entry_SYSCALL_64_fastpath+0x16/0x71
 [ 2964.629523] Code: 45 89 66 24 4c 01 f3 41 29 c4 49 63 d4 48 89 df e8 b5 fd 
09 c1 48 83 c4 20 48 89 d8 5b 41 5c 41 5d 41 5e 41 5f 5d c3 31 db eb ea <0f> 0b 
41 89 f6 4a 8b 04 f5 e0 0d 37 c0 48 85 c0 74 56 0f b6 70
 [ 2964.632792] RIP  [] __nf_ct_ext_add_length+0x141/0x1b0 
[nf_conntrack]
 [ 2964.633823]  RSP 
 [ 2964.634615] ---[ end trace 7116c308b790b3d4 ]---

All following conntrack commands hang indefinitely and can't be killed.


** Summary changed:

- functional job tests get stuck
+ Creating conntrack entry failure with kernel 4.4.0-89

** Project changed: neutron => linux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1709032

Title:
  Creating conntrack entry failure with kernel 4.4.0-89

Status in Linux:
  Confirmed

Bug description:
  The functional job failure rate is at 100%. Every time some test gets
  stuck and job is killed after timeout.

  logstash query:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=build_name%3A%5C
  

[Yahoo-eng-team] [Bug 1618822] Re: downgrade the exception log in update_instance_cache_with_nw_info

2017-08-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/363585
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=ef85c99f5e5ba091b6711d76c88d50aa2dae6989
Submitter: Jenkins
Branch: master

commit ef85c99f5e5ba091b6711d76c88d50aa2dae6989
Author: jichenjc 
Date:   Sun Jul 31 21:42:01 2016 +0800

no instance info cache update if instance deleted

Avoid updating the instance info cache if the instance is deleted,
as the deleted instance's info cache is already removed at the db
layer; otherwise the update raises InstanceInfoCacheNotFound and
leaves a few exception logs in compute.

Note that this differs from the regular info cache path that leads
to an InstanceInfoCacheNotFound exception; it covers the case where
the info cache is first created after the instance has been deleted
and the InstanceInfo is deleted as well.
Change-Id: I860e9e7c7ef458722135a21c6c5745f5519c56c4
Closes-Bug: 1618822


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1618822

Title:
  downgrade the exception log in update_instance_cache_with_nw_info

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  Confirmed

Bug description:
  when a compute node is down, the nova api can still accept a delete action
  and delete the instance. Then, during compute start-up, a periodic task
  checks whether the instance was deleted at the api layer and performs the
  corresponding actions.

  The logs below appear because the info cache is already deleted, so it is
  a valid case and should not be reported as an exception; an info-level log
  entry would be enough.
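
  The guard the merged commit describes amounts to something like this
  sketch (a hedged fragment; the names below are assumed from the traceback
  that follows, not copied from the patch):

      import logging

      LOG = logging.getLogger(__name__)

      def maybe_save_info_cache(instance, ic, update_cells=False):
          """Skip the cache save for an already-deleted instance (sketch)."""
          if instance.deleted:
              LOG.info('Instance %s is deleted; skipping info cache update',
                       instance.uuid)
              return
          ic.save(update_cells=update_cells)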

  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api [instance: 
d6a78566-0f7d-4173-b35a-b45d2054ba71] Traceback (most recent call last):
  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api [instance: 
d6a78566-0f7d-4173-b35a-b45d2054ba71]   File 
"/usr/lib/python2.7/site-packages/nova/network/base_api.py", line 50, in 
update_instance_cache_with_nw_info
  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api [instance: 
d6a78566-0f7d-4173-b35a-b45d2054ba71] ic.save(update_cells=update_cells)
  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api [instance: 
d6a78566-0f7d-4173-b35a-b45d2054ba71]   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 197, in 
wrapper
  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api [instance: 
d6a78566-0f7d-4173-b35a-b45d2054ba71] ctxt, self, fn.__name__, args, kwargs)
  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api [instance: 
d6a78566-0f7d-4173-b35a-b45d2054ba71]   File 
"/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 242, in 
object_action
  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api [instance: 
d6a78566-0f7d-4173-b35a-b45d2054ba71] objmethod=objmethod, args=args, 
kwargs=kwargs)
  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api [instance: 
d6a78566-0f7d-4173-b35a-b45d2054ba71]   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 158, in 
call
  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api [instance: 
d6a78566-0f7d-4173-b35a-b45d2054ba71] retry=self.retry)
  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api [instance: 
d6a78566-0f7d-4173-b35a-b45d2054ba71]   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in 
_send
  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api [instance: 
d6a78566-0f7d-4173-b35a-b45d2054ba71] timeout=timeout, retry=retry)
  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api [instance: 
d6a78566-0f7d-4173-b35a-b45d2054ba71]   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
431, in send
  ages/oslo_messaging/_drivers/amqpdriver.py", line 431, in send
  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api [instance: 
d6a78566-0f7d-4173-b35a-b45d2054ba71] retry=retry)
  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api [instance: 
d6a78566-0f7d-4173-b35a-b45d2054ba71]   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
422, in _send
  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api [instance: 
d6a78566-0f7d-4173-b35a-b45d2054ba71] raise result
  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api [instance: 
d6a78566-0f7d-4173-b35a-b45d2054ba71] InstanceInfoCacheNotFound_Remote: Info 
cache for instance d6a78566-0f7d-4173-b35a-b45d2054ba71 could not be found.
  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api [instance: 
d6a78566-0f7d-4173-b35a-b45d2054ba71] Traceback (most recent call last):
  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api [instance: 
d6a78566-0f7d-4173-b35a-b45d2054ba71]
  2016-08-31 10:13:56.878 25191 ERROR nova.network.base_api 

[Yahoo-eng-team] [Bug 1694337] Re: Port information (binding:host_id) not updated for network:router_gateway after qRouter failover

2017-08-09 Thread Launchpad Bug Tracker
This bug was fixed in the package neutron - 2:8.4.0-0ubuntu4

---
neutron (2:8.4.0-0ubuntu4) xenial; urgency=medium

  * d/p/Update-the-host_id-for-network-router_gateway-interf.patch:
keep the router's gateway interface updated when keepalived
fails over (LP: #1694337).

 -- Felipe Reyes   Tue, 25 Jul 2017 17:50:16
+0100

** Changed in: neutron (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694337

Title:
  Port information (binding:host_id) not updated for
  network:router_gateway after qRouter failover

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive mitaka series:
  Fix Committed
Status in Ubuntu Cloud Archive newton series:
  Fix Committed
Status in Ubuntu Cloud Archive ocata series:
  Fix Committed
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  Fix Released
Status in neutron source package in Yakkety:
  Won't Fix
Status in neutron source package in Zesty:
  Fix Released

Bug description:
  [Impact]

  When using l3 ha and a router agent fails over, the interface holding
  the network:router_gateway interface does not get its property
  binding:host_id updated to reflect where the keepalived moved the
  router.

  [Steps to reproduce]

  0) Deploy a cloud with l3ha enabled
    - If familiar with juju, it's possible to use this bundle 
http://paste.ubuntu.com/24707730/ , but the deployment tool is not relevant

  1) Once it's deployed, configure it and create a router (see
https://docs.openstack.org/mitaka/networking-guide/deploy-lb-ha-vrrp.html)
    - This is the script used during the troubleshooting
  -8<--
  #!/bin/bash -x

  source novarc  # admin

  neutron net-create ext-net --router:external True
  --provider:physical_network physnet1 --provider:network_type flat

  neutron subnet-create ext-net 10.5.0.0/16 --name ext-subnet
  --allocation-pool start=10.5.254.100,end=10.5.254.199 --disable-dhcp
  --gateway 10.5.0.1 --dns-nameserver 10.5.0.3

  keystone tenant-create --name demo 2>/dev/null
  keystone user-role-add --user admin --tenant demo --role Admin 2>/dev/null

  export TENANT_ID_DEMO=$(keystone tenant-list | grep demo | awk -F'|'
  '{print $2}' | tr -d ' ' 2>/dev/null )

  neutron net-create demo-net --tenant-id ${TENANT_ID_DEMO}
  --provider:network_type vxlan

  env OS_TENANT_NAME=demo neutron subnet-create demo-net 192.168.1.0/24 --name 
demo-subnet --gateway 192.168.1.1
  env OS_TENANT_NAME=demo neutron router-create demo-router
  env OS_TENANT_NAME=demo neutron router-interface-add demo-router demo-subnet
  env OS_TENANT_NAME=demo neutron router-gateway-set demo-router ext-net

  # verification
  neutron net-list
  neutron l3-agent-list-hosting-router demo-router
  neutron router-port-list demo-router
  - 8< ---

  2) Kill the associated master keepalived process for the router
  ps aux | grep keepalived | grep $ROUTER_ID
  kill $PID

  3) Wait until "neutron l3-agent-list-hosting-router demo-router" shows the 
other host as active
  4) Check the binding:host_id property for the interfaces of the router
  for ID in `neutron port-list --device-id $ROUTER_ID | tail -n +4 | head 
-n -1| awk -F' ' '{print $2}' `; do neutron port-show $ID ; done

  Expected results:

  The interface where the device_owner is network:router_gateway has its
  property binding:host_id set to where the keepalived process is master

  Actual result:

  The binding:host_id is never updated; it stays set to the value
  obtained during the creation of the port.

  [Regression Potential]
  - This patch changes the UPDATE query for the port bindings in the
  database; a possible regression would manifest as failures in the query or
  as an outdated binding:host_id property.

  [Other Info]

  The patches for zesty and yakkety are a direct backport from
  stable/ocata and stable/newton respectively. The patch for xenial is
  NOT merged in stable/xenial because it's already EOL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1694337/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1694337] Re: Port information (binding:host_id) not updated for network:router_gateway after qRouter failover

2017-08-09 Thread Launchpad Bug Tracker
This bug was fixed in the package neutron - 2:10.0.2-0ubuntu1.1

---
neutron (2:10.0.2-0ubuntu1.1) zesty; urgency=medium

  * d/p/Update-the-host_id-for-network-router_gateway-interf.patch:
keep the router's gateway interface updated when keepalived fails
over (LP: #1694337).

 -- Felipe Reyes   Wed, 21 Jun 2017 18:01:36
-0400

** Changed in: neutron (Ubuntu Zesty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694337

Title:
  Port information (binding:host_id) not updated for
  network:router_gateway after qRouter failover

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive mitaka series:
  Fix Committed
Status in Ubuntu Cloud Archive newton series:
  Fix Committed
Status in Ubuntu Cloud Archive ocata series:
  Fix Committed
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  Fix Released
Status in neutron source package in Yakkety:
  Won't Fix
Status in neutron source package in Zesty:
  Fix Released

Bug description:
  [Impact]

  When using l3 ha and a router agent fails over, the interface holding
  the network:router_gateway interface does not get its property
  binding:host_id updated to reflect where the keepalived moved the
  router.

  [Steps to reproduce]

  0) Deploy a cloud with l3ha enabled
    - If familiar with juju, it's possible to use this bundle 
http://paste.ubuntu.com/24707730/ , but the deployment tool is not relevant

  1) Once it's deployed, configure it and create a router (see
https://docs.openstack.org/mitaka/networking-guide/deploy-lb-ha-vrrp.html)
    - This is the script used during the troubleshooting
  -8<--
  #!/bin/bash -x

  source novarc  # admin

  neutron net-create ext-net --router:external True
  --provider:physical_network physnet1 --provider:network_type flat

  neutron subnet-create ext-net 10.5.0.0/16 --name ext-subnet
  --allocation-pool start=10.5.254.100,end=10.5.254.199 --disable-dhcp
  --gateway 10.5.0.1 --dns-nameserver 10.5.0.3

  keystone tenant-create --name demo 2>/dev/null
  keystone user-role-add --user admin --tenant demo --role Admin 2>/dev/null

  export TENANT_ID_DEMO=$(keystone tenant-list | grep demo | awk -F'|'
  '{print $2}' | tr -d ' ' 2>/dev/null )

  neutron net-create demo-net --tenant-id ${TENANT_ID_DEMO}
  --provider:network_type vxlan

  env OS_TENANT_NAME=demo neutron subnet-create demo-net 192.168.1.0/24 --name 
demo-subnet --gateway 192.168.1.1
  env OS_TENANT_NAME=demo neutron router-create demo-router
  env OS_TENANT_NAME=demo neutron router-interface-add demo-router demo-subnet
  env OS_TENANT_NAME=demo neutron router-gateway-set demo-router ext-net

  # verification
  neutron net-list
  neutron l3-agent-list-hosting-router demo-router
  neutron router-port-list demo-router
  - 8< ---

  2) Kill the associated master keepalived process for the router
  ps aux | grep keepalived | grep $ROUTER_ID
  kill $PID

  3) Wait until "neutron l3-agent-list-hosting-router demo-router" shows the 
other host as active
  4) Check the binding:host_id property for the interfaces of the router
  for ID in `neutron port-list --device-id $ROUTER_ID | tail -n +4 | head 
-n -1| awk -F' ' '{print $2}' `; do neutron port-show $ID ; done

  Expected results:

  The interface where the device_owner is network:router_gateway has its
  property binding:host_id set to where the keepalived process is master

  Actual result:

  The binding:host_id is never updated; it stays set to the value
  obtained during the creation of the port.

  [Regression Potential]
  - This patch changes the UPDATE query for the port bindings in the
  database; a possible regression would manifest as failures in the query or
  as an outdated binding:host_id property.

  [Other Info]

  The patches for zesty and yakkety are a direct backport from
  stable/ocata and stable/newton respectively. The patch for xenial is
  NOT merged in stable/xenial because it's already EOL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1694337/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708316] Re: Nova send wrong information when there are several networks which have same name and VM uses more than one of them

2017-08-09 Thread Maciej Kucia
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1708316

Title:
  Nova send wrong information when there are several networks which have
  same name and VM uses more than one of them

Status in neutron:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  Nova sends wrong information when there are
  several networks with the same name and a
  VM uses more than one of them.

  Steps to reproduce
  ==
  1. Create two networks that have the same name
  2. Create a VM with the networks created in the 1st step.
  3. Check the VM using "nova show "

  Expected result
  ===
  ...
  | tenant_id| 92f3ea23c5b84fd69b56583f322d213e |
  | testnet1 network | 192.168.0.12 |
  | testnet1 network | 192.168.1.4  |
  | updated  | 2017-07-31T14:33:49Z |
  ...

  Actual result
  =
  ...
  | tenant_id| 92f3ea23c5b84fd69b56583f322d213e |
  | testnet1 network | 192.168.0.12, 192.168.1.4 |
  | updated  | 2017-07-31T14:33:49Z |
  ...

  Environment
  ===
  1. Openstack Version : I tested this using Mitaka & Ocata
  2. Network : Neutron with LinuxBridge
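
  The collapse is what happens when addresses are grouped by network name
  alone; a minimal illustration with made-up data:

      addresses = [('testnet1', '192.168.0.12'),  # first network 'testnet1'
                   ('testnet1', '192.168.1.4')]   # second one, same name

      by_name = {}
      for label, ip in addresses:
          by_name.setdefault(label, []).append(ip)
      print(by_name)  # {'testnet1': ['192.168.0.12', '192.168.1.4']}, merged

      # Keying on (label, network_id) instead would keep the rows separate.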

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1708316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709594] [NEW] live-migration without '--block-migrate" failed with "No sql_connection parameter is established"

2017-08-09 Thread Jianghua Wang
Public bug reported:

The test is on XenServer.
nova  live-migration 

If we run the live-migration without the "--block-migrate" option, it fails
with this error:
   RemoteError: Remote error: RemoteError Remote error: CantStartEngineError No 
sql_connection parameter is established"

- The trace for nova-conductor:
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager   File "/opt/stack/nova/nova/conductor/tasks/base.py", 
line 42, in execute
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager return self._execute()
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager   File 
"/opt/stack/nova/nova/conductor/tasks/live_migrate.py", line 56, in _execute
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager self._check_requested_destination()
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager   File 
"/opt/stack/nova/nova/conductor/tasks/live_migrate.py", line 96, in 
_check_requested_destination
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager self._call_livem_checks_on_host(self.destination)
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager   File 
"/opt/stack/nova/nova/conductor/tasks/live_migrate.py", line 147, in 
_call_livem_checks_on_host
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager destination, self.block_migration, 
self.disk_over_commit)
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager   File "/opt/stack/nova/nova/compute/rpcapi.py", line 
479, in check_can_live_migrate_destination
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager disk_over_commit=disk_over_commit)
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
169, in call
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager retry=self.retry)
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 123, 
in _send
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager timeout=timeout, retry=retry)
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 578, in send
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager retry=retry)
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 569, in _send
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager raise result
Aug 09 03:26:36 DevStackOSDomU nova-conductor[1753]: ERROR 
nova.conductor.manager RemoteError: Remote error: RemoteError Remote error: 
CantStartEngineError No sql_connection parameter is established

- The trace from nova-compute:
Aug 09 03:26:17 DevStackOSDomU nova-compute[1762]: ERROR 
oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", 
line 211, in decorated_function
Aug 09 03:26:17 DevStackOSDomU nova-compute[1762]: ERROR 
oslo_messaging.rpc.server kwargs['instance'], e, sys.exc_info())
Aug 09 03:26:17 DevStackOSDomU nova-compute[1762]: ERROR 
oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
Aug 09 03:26:17 DevStackOSDomU nova-compute[1762]: ERROR 
oslo_messaging.rpc.server self.force_reraise()
Aug 09 03:26:17 DevStackOSDomU nova-compute[1762]: ERROR 
oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
Aug 09 03:26:17 DevStackOSDomU nova-compute[1762]: ERROR 
oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
Aug 09 03:26:17 DevStackOSDomU nova-compute[1762]: ERROR 
oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", 
line 199, in decorated_function
Aug 09 03:26:17 DevStackOSDomU nova-compute[1762]: ERROR 
oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
Aug 09 03:26:17 DevStackOSDomU nova-compute[1762]: ERROR 
oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", 
line 5253, in check_can_live_migrate_destination
Aug 09 03:26:17 DevStackOSDomU nova-compute[1762]: ERROR 
oslo_messaging.rpc.server disk_over_commit)
Aug 09 03:26:17 DevStackOSDomU nova-compute[1762]: ERROR 
oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", 
line 5264, in _do_check_can_live_migrate_destination
Aug 09 03:26:17 DevStackOSDomU nova-compute[1762]: ERROR 

[Yahoo-eng-team] [Bug 1406333] Re: LOG messages localized, shouldn't be

2017-08-09 Thread Akihiro Motoki
woops... neutron-fwaas-dashboard is not affected.

** No longer affects: neutron-fwaas-dashboard

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1406333

Title:
  LOG messages localized, shouldn't be

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Neutron VPNaaS dashboard:
  In Progress

Bug description:
  LOG messages should not be localized. There are a few places in
  project/firewalls/forms.py that they are. These instances should be
  removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1406333/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406333] Re: LOG messages localized, shouldn't be

2017-08-09 Thread Akihiro Motoki
neutron-fwaas-dashboard and neutron-vpnaas-dashboard are affected. They
were split out from horizon after the fix for this bug was merged, but
unfortunately the split-out code was prepared before the fix landed. This
needs to be taken care of.

** Also affects: neutron-fwaas-dashboard
   Importance: Undecided
   Status: New

** Also affects: neutron-vpnaas-dashboard
   Importance: Undecided
   Status: New

** Changed in: neutron-vpnaas-dashboard
 Assignee: (unassigned) => Akihiro Motoki (amotoki)

** Changed in: neutron-fwaas-dashboard
 Assignee: (unassigned) => Akihiro Motoki (amotoki)

** Changed in: neutron-fwaas-dashboard
   Importance: Undecided => High

** Changed in: neutron-vpnaas-dashboard
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1406333

Title:
  LOG messages localized, shouldn't be

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Neutron VPNaaS dashboard:
  In Progress

Bug description:
  LOG messages should not be localized. There are a few places in
  project/firewalls/forms.py that they are. These instances should be
  removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1406333/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709550] [NEW] nova-compute doesn't start if there is difference between current compute driver and driver which was used to create instance

2017-08-09 Thread Herodotus
Public bug reported:

Steps to reproduce
==
1. Create instance with (for example) qemu as nova-compute backend.
2. Change nova-compute backend to (for example) lxd.
3. Restart nova-compute service

Expected result
===
I expected to see an error saying something like: "You have to delete all
instances that were created with the old nova-compute driver. Use 'openstack
server delete instance-name' on your controller node."

Actual result
=
nova-compute service doesn't start and there is no clear explanation in 
nova-compute.log (see log below).

Environment
===
1. Version of OpenStack is Ocata:
user@compute ~> dpkg -l | grep nova
rc  nova-api   2:15.0.2-0ubuntu1~cloud0 
all  OpenStack Compute - API frontend
ii  nova-common2:15.0.5-0ubuntu1~cloud0 
all  OpenStack Compute - common files
ii  nova-compute   2:15.0.5-0ubuntu1~cloud0 
all  OpenStack Compute - compute node base
rc  nova-compute-kvm   2:15.0.5-0ubuntu1~cloud0 
all  OpenStack Compute - compute node (KVM)
ii  nova-compute-libvirt   2:15.0.5-0ubuntu1~cloud0 
all  OpenStack Compute - compute node libvirt support
ii  nova-compute-lxd   15.0.2-0ubuntu1~cloud0   
all  Openstack Compute - LXD container hypervisor support
rc  nova-conductor 2:15.0.2-0ubuntu1~cloud0 
all  OpenStack Compute - conductor service
rc  nova-consoleauth   2:15.0.2-0ubuntu1~cloud0 
all  OpenStack Compute - Console Authenticator
rc  nova-novncproxy2:15.0.2-0ubuntu1~cloud0 
all  OpenStack Compute - NoVNC proxy
rc  nova-placement-api 2:15.0.2-0ubuntu1~cloud0 
all  OpenStack Compute - placement API frontend
rc  nova-scheduler 2:15.0.2-0ubuntu1~cloud0 
all  OpenStack Compute - virtual machine scheduler
ii  python-nova2:15.0.5-0ubuntu1~cloud0 
all  OpenStack Compute Python libraries
ii  python-nova-lxd15.0.2-0ubuntu1~cloud0   
all  OpenStack Compute Python libraries - LXD driver
ii  python-novaclient  2:7.1.0-0ubuntu1~cloud0  
all  client library for OpenStack Compute API - Python 2.7


2. Hypervisors: qemu and lxd

3. Storage: lvm

4. Networking type: Neutron with OpenVSwitch

Log
==
nova-compute.log:
2017-08-08 16:18:51.112 29592 INFO nova.service [-] Starting compute node (version 15.0.5)
2017-08-08 16:18:51.882 29592 INFO oslo.privsep.daemon [req-0777ab5e-8b64-4631-8690-3dcf94ab2118 - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--privsep_context', 'vif_plug_linux_bridge.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpGK77AD/privsep.sock']
2017-08-08 16:18:53.810 29592 INFO oslo.privsep.daemon [req-0777ab5e-8b64-4631-8690-3dcf94ab2118 - - - - -] Spawned new privsep daemon via rootwrap
2017-08-08 16:18:53.812 29592 INFO oslo.privsep.daemon [-] privsep daemon starting
2017-08-08 16:18:53.812 29592 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
2017-08-08 16:18:53.813 29592 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN/CAP_NET_ADMIN/none
2017-08-08 16:18:53.813 29592 INFO oslo.privsep.daemon [-] privsep daemon running as pid 29634
2017-08-08 16:18:53.957 29592 INFO os_vif [req-0777ab5e-8b64-4631-8690-3dcf94ab2118 - - - - -] Successfully plugged vif VIFBridge(active=True,address=fa:16:3e:ae:ba:06,bridge_name='brq3b3f3c75-8f',has_traffic_filtering=True,id=4a1e4eaa-17d3-4baa-a2c5-9f6a7369272c,network=Network(3b3f3c75-8f2b-4fe0-b91e-9c3dd53fe9ec),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tap4a1e4eaa-17')
2017-08-08 16:18:54.310 29592 INFO os_vif [req-0777ab5e-8b64-4631-8690-3dcf94ab2118 - - - - -] Successfully plugged vif VIFBridge(active=True,address=fa:16:3e:7e:a4:a1,bridge_name='brq3b3f3c75-8f',has_traffic_filtering=True,id=ae4b3102-ad58-474c-bfbf-da6b083eda9b,network=Network(3b3f3c75-8f2b-4fe0-b91e-9c3dd53fe9ec),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tapae4b3102-ad')
2017-08-08 16:18:54.408 29592 ERROR oslo_service.service [req-0777ab5e-8b64-4631-8690-3dcf94ab2118 - - - - -] Error starting thread.
2017-08-08 16:18:54.408 29592 ERROR oslo_service.service Traceback

[Yahoo-eng-team] [Bug 1709547] [NEW] fullstack ovsfw tests aren't executed

2017-08-09 Thread IWAMOTO Toshihiro
Public bug reported:

fullstack issues "ovs-ofctl add-flow br-test-foo
ct_state=+trk,actions=drop" to confirm conntrack support, but the
command fails with:

http://logs.openstack.org/18/489918/4/check/gate-neutron-dsvm-fullstack-ubuntu-xenial/f9aa19e/console.html#_2017-08-09_05_24_52_006295

Stderr: sudo: no tty present and no askpass program specified

As a result, the tests are skipped.

2017-08-09 05:24:52.210160 | 2017-08-09 05:24:52.209 | {3} neutron.tests.fullstack.test_securitygroup.TestSecurityGroupsSameNetwork.test_securitygroup (ovs-openflow-cli_ovsdb-cli) ... SKIPPED: Open vSwitch firewall_driver doesn't work with this version of ovs.

It seems ovs-ofctl should be added to the sudoers list.
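
A minimal sketch of the likely fix, using neutron's standard rootwrap
CommandFilter syntax (whether the fullstack job consults
neutron/tests/contrib/functional-testing.filters or some other filters
file is an assumption here):

    [Filters]
    # Allow the test runner to invoke "sudo ovs-ofctl ..." without a tty
    ovs-ofctl: CommandFilter, ovs-ofctl, root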

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fullstack ovs-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1709547

Title:
  fullstack ovsfw tests aren't executed

Status in neutron:
  New

Bug description:
  fullstack issues "ovs-ofctl add-flow br-test-foo
  ct_state=+trk,actions=drop" to confirm conntrack support, but the
  command fails with:

  http://logs.openstack.org/18/489918/4/check/gate-neutron-dsvm-fullstack-ubuntu-xenial/f9aa19e/console.html#_2017-08-09_05_24_52_006295

  Stderr: sudo: no tty present and no askpass program specified

  As a result, the tests are skipped.

  2017-08-09 05:24:52.210160 | 2017-08-09 05:24:52.209 | {3} neutron.tests.fullstack.test_securitygroup.TestSecurityGroupsSameNetwork.test_securitygroup (ovs-openflow-cli_ovsdb-cli) ... SKIPPED: Open vSwitch firewall_driver doesn't work with this version of ovs.

  It seems ovs-ofctl should be added to the sudoers list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1709547/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp