[Yahoo-eng-team] [Bug 1698058] [NEW] L3DvrTestCase.test_dvr_gateway_host_binding_is_set: MismatchError: u'host0' != u'standby'

2017-06-14 Thread IWAMOTO Toshihiro
Public bug reported:

L3DvrTestCase.test_dvr_gateway_host_binding_is_set was recently added in
commit abe99383.
It often fails with MismatchError: u'host0' != u'standby'.

Example:
http://logs.openstack.org/63/470063/5/check/gate-neutron-dsvm-functional-ubuntu-xenial/4063789/testr_results.html.gz

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1698058

Title:
  L3DvrTestCase.test_dvr_gateway_host_binding_is_set: MismatchError:
  u'host0' != u'standby'

Status in neutron:
  New

Bug description:
  L3DvrTestCase.test_dvr_gateway_host_binding_is_set was recently added in
  commit abe99383.
  It often fails with MismatchError: u'host0' != u'standby'.

  Example:
  http://logs.openstack.org/63/470063/5/check/gate-neutron-dsvm-functional-ubuntu-xenial/4063789/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1698058/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1698046] [NEW] Add support for ingress bandwidth limit rules in ovs agent

2017-06-14 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/457816
Dear bug triager: this bug was created because a commit was marked with
DOCIMPACT.
Your project "openstack/neutron" is set up so that documentation bugs are
reported directly against it. If this needs changing, the docimpact-group
option needs to be added for the project. You can ask the OpenStack infra
team (#openstack-infra on freenode) for help if you need to.

commit 2d0d1a2d76c3383bd1e2d14e8860824d843f5047
Author: Sławek Kapłoński 
Date:   Wed Apr 19 00:40:38 2017 +0200

Add support for ingress bandwidth limit rules in ovs agent

Add support for QoS ingress bandwidth limiting in
openvswitch agent.
It uses default ovs QoS policies on bandwidth limiting
mechanism.

DocImpact: Ingress bandwidth limit in QoS supported by
   Openvswitch agent

Change-Id: I9d94e27db5d574b61061689dc99f12f095625ca0
Partial-Bug: #1560961

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1698046

Title:
  Add support for ingress bandwidth limit rules in ovs agent

Status in neutron:
  New

Bug description:
  https://review.openstack.org/457816
  Dear bug triager: this bug was created because a commit was marked with
  DOCIMPACT.
  Your project "openstack/neutron" is set up so that documentation bugs are
  reported directly against it. If this needs changing, the docimpact-group
  option needs to be added for the project. You can ask the OpenStack infra
  team (#openstack-infra on freenode) for help if you need to.

  commit 2d0d1a2d76c3383bd1e2d14e8860824d843f5047
  Author: Sławek Kapłoński 
  Date:   Wed Apr 19 00:40:38 2017 +0200

  Add support for ingress bandwidth limit rules in ovs agent
  
  Add support for QoS ingress bandwidth limiting in
  openvswitch agent.
  It uses default ovs QoS policies on bandwidth limiting
  mechanism.
  
  DocImpact: Ingress bandwidth limit in QoS supported by
 Openvswitch agent
  
  Change-Id: I9d94e27db5d574b61061689dc99f12f095625ca0
  Partial-Bug: #1560961

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1698046/+subscriptions



[Yahoo-eng-team] [Bug 1275967] Re: devstack leaves stale .pyc files around

2017-06-14 Thread Sean Dague
This devstack bug was last updated over 180 days ago. As devstack is a
fast-moving project and we'd like to get the tracker down to currently
actionable bugs, it is being marked as Invalid. If the issue still
exists, please feel free to reopen it.

** Changed in: devstack
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1275967

Title:
  devstack leaves stale .pyc files around

Status in devstack:
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  nova was recently ported to oslo.incubator.  This results in devstack
  raising an exception about duplicate rpc_backend options being
  registered.  The problem is that openstack/common/service.py was not
  updated as part of that work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1275967/+subscriptions



[Yahoo-eng-team] [Bug 1277507] Re: "ImportError: No module named passlib.hash"; HTTP error 403 while getting ipaddr from googledrive.com

2017-06-14 Thread Sean Dague
This devstack bug was last updated over 180 days ago. As devstack is a
fast-moving project and we'd like to get the tracker down to currently
actionable bugs, it is being marked as Invalid. If the issue still
exists, please feel free to reopen it.

** Changed in: devstack
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1277507

Title:
  "ImportError: No module named passlib.hash"; HTTP error 403 while
  getting ipaddr from googledrive.com

Status in devstack:
  Invalid
Status in gantt:
  New
Status in OpenStack Identity (keystone):
  Invalid
Status in OpenStack Core Infrastructure:
  Invalid
Status in oslo-incubator:
  Invalid
Status in python-keystoneclient:
  Invalid
Status in zaqar:
  Invalid

Bug description:
  Example from http://logs.openstack.org/99/69799/9/gate/gate-tempest-dsvm-full/ca0c1b3/console.html

  2014-02-06 20:20:45.531 | 2014-02-06 20:20:45   File "/opt/stack/new/keystone/bin/keystone-manage", line 37, in <module>
  2014-02-06 20:20:45.532 | 2014-02-06 20:20:45 from keystone import cli
  2014-02-06 20:20:45.534 | 2014-02-06 20:20:45   File "/opt/stack/new/keystone/keystone/cli.py", line 26, in <module>
  2014-02-06 20:20:45.535 | 2014-02-06 20:20:45 from keystone.common import openssl
  2014-02-06 20:20:45.537 | 2014-02-06 20:20:45   File "/opt/stack/new/keystone/keystone/common/openssl.py", line 21, in <module>
  2014-02-06 20:20:45.539 | 2014-02-06 20:20:45 from keystone.common import utils
  2014-02-06 20:20:45.540 | 2014-02-06 20:20:45   File "/opt/stack/new/keystone/keystone/common/utils.py", line 28, in <module>
  2014-02-06 20:20:45.543 | 2014-02-06 20:20:45 import passlib.hash
  2014-02-06 20:20:45.543 | 2014-02-06 20:20:45 ImportError: No module named passlib.hash
  2014-02-06 20:20:45.545 | 2014-02-06 20:20:45 + [[ PKI == \P\K\I ]]
  2014-02-06 20:20:45.571 | 2014-02-06 20:20:45 + rm -rf /etc/keystone/ssl
  2014-02-06 20:20:45.572 | 2014-02-06 20:20:45 + /opt/stack/new/keystone/bin/keystone-manage pki_setup
  2014-02-06 20:20:45.573 | 2014-02-06 20:20:45 Traceback (most recent call last):
  2014-02-06 20:20:45.573 | 2014-02-06 20:20:45   File "/opt/stack/new/keystone/bin/keystone-manage", line 37, in <module>
  2014-02-06 20:20:45.573 | 2014-02-06 20:20:45 from keystone import cli
  2014-02-06 20:20:45.574 | 2014-02-06 20:20:45   File "/opt/stack/new/keystone/keystone/cli.py", line 26, in <module>
  2014-02-06 20:20:45.574 | 2014-02-06 20:20:45 from keystone.common import openssl
  2014-02-06 20:20:45.574 | 2014-02-06 20:20:45   File "/opt/stack/new/keystone/keystone/common/openssl.py", line 21, in <module>
  2014-02-06 20:20:45.575 | 2014-02-06 20:20:45 from keystone.common import utils
  2014-02-06 20:20:45.577 | 2014-02-06 20:20:45   File "/opt/stack/new/keystone/keystone/common/utils.py", line 28, in <module>
  2014-02-06 20:20:45.577 | 2014-02-06 20:20:45 import passlib.hash
  2014-02-06 20:20:45.578 | 2014-02-06 20:20:45 ImportError: No module named passlib.hash

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1277507/+subscriptions



[Yahoo-eng-team] [Bug 1695101] Re: DVR Router ports and gateway ports are not bound to any host and no snat namespace created

2017-06-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/470063
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=abe9938367f3f7cf21e5c0e42ee7b9b81b4960b0
Submitter: Jenkins
Branch: master

commit abe9938367f3f7cf21e5c0e42ee7b9b81b4960b0
Author: Swaminathan Vasudevan 
Date:   Thu Jun 1 15:49:38 2017 -0700

DVR: Fix DVR Router snat ports and gateway ports host binding issue

DVR snat ports and gateway ports are not bound to any host
and so we don't see the snat namespace getting created.

The issue is the _build_routers_list in l3_dvr_db.py is not called due
to the inheritance order.

Change-Id: I56f9de31524aeef262cf2a78be3abf8487c21a12
Closes-Bug: #1695101


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1695101

Title:
  DVR Router ports and gateway ports are not bound to any host and no
  snat namespace created

Status in neutron:
  Fix Released

Bug description:
  In the Pike cycle there was some refactoring of the DVR db classes and
  resource handler mixin.
  This led to a regression where the SNAT namespace was not created for
  DVR routers that have a gateway configured.

  The only namespace seen was the fipnamespace.

  This was the patch set that caused the regression.
  https://review.openstack.org/#/c/457592/5

  On further debugging it was found that the snat ports and the
  distributed router ports were not bound to any host. Neutron was trying
  to bind them to a 'null' host.

  The '_build_routers_list' function in l3_dvr_db.py was not called, and
  hence the host binding was missing.

  We have seen a similar issue a while back: #1369012 (Fix KeyError on
  missing gw_port_host for L3 agent in DVR mode).

  The issue here is the order of inheritance of the classes. If the
  inheritance order is wrong, overridden functions are called in the
  wrong order or skipped entirely.

  Here we saw the same problem: the '_build_routers_list' in
  l3_db_gwmode.py was called instead of the one in l3_dvr_db.py.

  This is the current order of inheritance.

  class L3_NAT_with_dvr_db_mixin(l3_db.L3_NAT_db_mixin,
                                 l3_attrs_db.ExtraAttributesMixin,
                                 DVRResourceOperationHandler,
                                 _DVRAgentInterfaceMixin):

  If the order is shuffled, it works fine and here is the shuffled
  order.

  class L3_NAT_with_dvr_db_mixin(DVRResourceOperationHandler,
 _DVRAgentInterfaceMixin,
 l3_attrs_db.ExtraAttributesMixin,
 l3_db.L3_NAT_db_mixin):

  This seems to fix the problem.
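  The effect of the base-class order can be sketched with a minimal,
  hypothetical Python example (the stand-in classes below are
  illustrative, not the actual neutron mixins). Python resolves methods
  left to right along the MRO, so the DVR override is only found when
  its mixin is listed before the base l3_db mixin:

```python
class L3NATDbMixin:                # stands in for l3_db.L3_NAT_db_mixin
    def _build_routers_list(self):
        return "gwmode version"    # non-DVR implementation, no host binding

class DVRMixin:                    # stands in for the DVR mixins
    def _build_routers_list(self):
        return "dvr version"       # adds the gateway-port host binding

class Broken(L3NATDbMixin, DVRMixin):   # old order: DVR override is shadowed
    pass

class Fixed(DVRMixin, L3NATDbMixin):    # shuffled order from the fix
    pass

print(Broken()._build_routers_list())   # -> gwmode version
print(Fixed()._build_routers_list())    # -> dvr version
```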

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1695101/+subscriptions



[Yahoo-eng-team] [Bug 1479962] Re: Use extras for deployment-specific package requirements

2017-06-14 Thread Sean Dague
This devstack bug was last updated over 180 days ago. As devstack is a
fast-moving project and we'd like to get the tracker down to currently
actionable bugs, it is being marked as Invalid. If the issue still
exists, please feel free to reopen it.

** Changed in: devstack
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1479962

Title:
  Use extras for deployment-specific package requirements

Status in devstack:
  Invalid
Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  
  Keystone should use "extras" in setup.cfg for deployment-specific
  package requirements.

  One example is ldap.

  With this change, deployers can do something like `pip install
  keystone[ldap]` to install the packages required for ldap.
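  As a sketch, with pbr-based projects such as keystone this is declared
  in an [extras] section of setup.cfg (the package names and versions
  below are illustrative, not keystone's actual ldap requirements):

```ini
[extras]
# Installed only when requested, e.g. `pip install keystone[ldap]`
ldap =
    python-ldap>=2.4
    ldappool>=1.0
```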

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1479962/+subscriptions



[Yahoo-eng-team] [Bug 1698010] [NEW] dhcp-domain is deprecated, but required for correct FQDN behavior

2017-06-14 Thread Ben Nemec
Public bug reported:

There seems to be an issue with how domains get assigned when booting
instances.  My understanding is that with Neutron, the neutron
dns_domain option should determine the resulting domain name of the
instances.  However, when creating instances with the following
configuration:

(undercloud) [centos@undercloud-test ~]$ sudo grep dns_domain /etc/neutron/neutron.conf
#dns_domain = openstacklocal
dns_domain=nemebean.com
(undercloud) [centos@undercloud-test ~]$ sudo grep dhcp_domain /etc/nova/nova.conf
#dhcp_domain=novalocal
dhcp_domain=

I get the following in the instance:

[heat-admin@overcloud-controller-0 ~]$ sudo hostnamectl
   Static hostname: overcloud-controller-0.localdomain

It looks like this is being done by cloud-init:

Jun 14 21:07:34 host-9-1-1-12 cloud-init[1405]: [CLOUDINIT] cc_set_hostname.py[DEBUG]: Setting the hostname to overcloud-controller-0.localdomain (overcloud-controller-0)
Jun 14 21:07:34 host-9-1-1-12 cloud-init[1405]: [CLOUDINIT] util.py[DEBUG]: Running command ['hostnamectl', 'set-hostname', 'overcloud-controller-0.localdomain'] with allowed return codes [0] (shell=False, capture=True)

So cloud-init is likely getting the host and domain name from Nova
metadata, even though Neutron is being used to manage networking.

If I also set dhcp_domain as follows:

(undercloud) [centos@undercloud-test ~]$ sudo grep dhcp_domain /etc/nova/nova.conf
#dhcp_domain=novalocal
dhcp_domain=nemebean.com

Then I get the expected results:

[heat-admin@overcloud-controller-0 ~]$ sudo hostnamectl
   Static hostname: overcloud-controller-0.nemebean.com

These are obviously tripleo overcloud instances being deployed via
Ironic.  I'm using some recent RDO packages:

$ sudo rpm -qa | grep nova
openstack-nova-conductor-16.0.0-0.20170521033533.99bd334.el7.centos.noarch
python-nova-16.0.0-0.20170521033533.99bd334.el7.centos.noarch
puppet-nova-11.1.0-0.20170605232112.27baec7.el7.centos.noarch
openstack-nova-common-16.0.0-0.20170521033533.99bd334.el7.centos.noarch
python2-novaclient-8.0.0-0.20170517113627.e1b9e76.el7.centos.noarch
openstack-nova-placement-api-16.0.0-0.20170521033533.99bd334.el7.centos.noarch
openstack-nova-api-16.0.0-0.20170521033533.99bd334.el7.centos.noarch
openstack-nova-scheduler-16.0.0-0.20170521033533.99bd334.el7.centos.noarch
openstack-nova-compute-16.0.0-0.20170521033533.99bd334.el7.centos.noarch

99bd334 is the short SHA of the commit the packages were built against.

$ sudo rpm -qa | grep neutron
python-neutron-11.0.0-0.20170521040619.3f2e22a.el7.centos.noarch
openstack-neutron-ml2-11.0.0-0.20170521040619.3f2e22a.el7.centos.noarch
python2-neutronclient-6.2.0-0.20170418195232.06d3dfd.el7.centos.noarch
openstack-neutron-11.0.0-0.20170521040619.3f2e22a.el7.centos.noarch
openstack-neutron-openvswitch-11.0.0-0.20170521040619.3f2e22a.el7.centos.noarch
puppet-neutron-11.1.0-0.20170601210926.888c480.el7.centos.noarch
python-neutron-lib-1.6.0-0.20170503061451.449f079.el7.centos.noarch
openstack-neutron-common-11.0.0-0.20170521040619.3f2e22a.el7.centos.noarch

This is not ideal in any case, but it's particularly concerning since,
according to the option docs, dhcp_domain is deprecated.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1698010

Title:
  dhcp-domain is deprecated, but required for correct FQDN behavior

Status in OpenStack Compute (nova):
  New

Bug description:
  There seems to be an issue with how domains get assigned when booting
  instances.  My understanding is that with Neutron, the neutron
  dns_domain option should determine the resulting domain name of the
  instances.  However, when creating instances with the following
  configuration:

  (undercloud) [centos@undercloud-test ~]$ sudo grep dns_domain /etc/neutron/neutron.conf
  #dns_domain = openstacklocal
  dns_domain=nemebean.com
  (undercloud) [centos@undercloud-test ~]$ sudo grep dhcp_domain /etc/nova/nova.conf
  #dhcp_domain=novalocal
  dhcp_domain=

  I get the following in the instance:

  [heat-admin@overcloud-controller-0 ~]$ sudo hostnamectl
 Static hostname: overcloud-controller-0.localdomain

  It looks like this is being done by cloud-init:

  Jun 14 21:07:34 host-9-1-1-12 cloud-init[1405]: [CLOUDINIT] cc_set_hostname.py[DEBUG]: Setting the hostname to overcloud-controller-0.localdomain (overcloud-controller-0)
  Jun 14 21:07:34 host-9-1-1-12 cloud-init[1405]: [CLOUDINIT] util.py[DEBUG]: Running command ['hostnamectl', 'set-hostname', 'overcloud-controller-0.localdomain'] with allowed return codes [0] (shell=False, capture=True)

  So cloud-init is likely getting the host and domain name from Nova
  metadata, even though Neutron is being used to manage networking.

  If I also set dhcp_domain as follows:

  (undercloud) [centos@undercloud-test ~]$ sudo grep 

[Yahoo-eng-team] [Bug 1345947] Re: DHCPNAK after neutron-dhcp-agent restart

2017-06-14 Thread Sean Dague
This grenade bug was last updated over 180 days ago. As grenade is a
fast-moving project and we'd like to get the tracker down to currently
actionable bugs, it is being marked as Invalid. If the issue still
exists, please feel free to reopen it.

** Changed in: grenade
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1345947

Title:
  DHCPNAK after neutron-dhcp-agent restart

Status in grenade:
  Invalid
Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  After rolling out a configuration change, we restarted the
  neutron-dhcp-agent service, and the dnsmasq logs started flooding with:
  DHCPNAK ... lease not found.
  dnsmasq replies with DHCPNAK to all DHCPREQUEST renews from all VMs,
  even though the MAC and IP pairs exist in the host files.
  The flooding increases as more and more VMs start renewing; they keep
  retrying until their IP expires, then send DHCPDISCOVER and
  reinitialize the IP.
  The flooding gradually disappears as the VMs' IPs expire and they send
  DHCPDISCOVER, to which dnsmasq responds properly with DHCPOFFER.

  Analysis:
  I noticed that the option --leasefile-ro is used in the dnsmasq command
  started by the neutron dhcp-agent. According to the dnsmasq manual,
  this option should be used together with --dhcp-script to customize
  the lease database. However, the --dhcp-script option was removed when
  fixing bug 1202392.
  Because of this, dnsmasq does not save lease information in persistent
  storage, so when it is restarted, the lease information is lost.

  Solution:
  Simply replacing --leasefile-ro with --dhcp-leasefile=/lease would solve the problem. (patch attached)
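  The difference can be sketched as a command-line fragment (the paths
  and elided options here are illustrative, not the agent's exact
  dnsmasq invocation):

```
# --leasefile-ro: dnsmasq keeps no persistent lease database (the manual
# pairs it with --dhcp-script), so a restart forgets every lease and
# renewing clients get DHCPNAK:
dnsmasq ... --leasefile-ro

# --dhcp-leasefile: leases survive restarts, so a restarted dnsmasq can
# ACK DHCPREQUEST renews instead of NAKing them:
dnsmasq ... --dhcp-leasefile=/var/lib/neutron/dhcp/lease
```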

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1345947/+subscriptions



[Yahoo-eng-team] [Bug 1645263] Re: Unable to run stack.sh on fresh new Ubuntu Xenial 16.04 LTS, script fails with "No module named 'memcache' "

2017-06-14 Thread Sean Dague
** Changed in: devstack
   Status: Incomplete => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1645263

Title:
  Unable to run stack.sh on fresh new Ubuntu Xenial 16.04 LTS, script
  fails with "No module named 'memcache' "

Status in devstack:
  Opinion
Status in OpenStack Identity (keystone):
  Incomplete
Status in OpenStack Global Requirements:
  Opinion

Bug description:
  Unable to run stack.sh on fresh new Ubuntu Xenial 16.04 LTS, script
  fails with "No module named 'memcache' "

  Traceback:

  +lib/keystone:bootstrap_keystone:630   /usr/local/bin/keystone-manage bootstrap --bootstrap-username admin --bootstrap-password ubuntu --bootstrap-project-name admin --bootstrap-role-name admin --bootstrap-service-name keystone --bootstrap-region-id RegionOne --bootstrap-admin-url http://192.168.0.115/identity_admin --bootstrap-public-url http://192.168.0.115/identity --bootstrap-internal-url http://192.168.0.115/identity
  2016-11-28 11:51:39.723 15663 CRITICAL keystone [-] ImportError: No module named 'memcache'
  2016-11-28 11:51:39.723 15663 TRACE keystone Traceback (most recent call last):
  2016-11-28 11:51:39.723 15663 TRACE keystone   File "/usr/local/bin/keystone-manage", line 10, in <module>
  2016-11-28 11:51:39.723 15663 TRACE keystone sys.exit(main())
  2016-11-28 11:51:39.723 15663 TRACE keystone   File "/opt/stack/keystone/keystone/cmd/manage.py", line 45, in main
  2016-11-28 11:51:39.723 15663 TRACE keystone cli.main(argv=sys.argv, config_files=config_files)
  2016-11-28 11:51:39.723 15663 TRACE keystone   File "/opt/stack/keystone/keystone/cmd/cli.py", line 1269, in main
  2016-11-28 11:51:39.723 15663 TRACE keystone CONF.command.cmd_class.main()
  2016-11-28 11:51:39.723 15663 TRACE keystone   File "/opt/stack/keystone/keystone/cmd/cli.py", line 365, in main
  2016-11-28 11:51:39.723 15663 TRACE keystone klass = cls()
  2016-11-28 11:51:39.723 15663 TRACE keystone   File "/opt/stack/keystone/keystone/cmd/cli.py", line 66, in __init__
  2016-11-28 11:51:39.723 15663 TRACE keystone self.load_backends()
  2016-11-28 11:51:39.723 15663 TRACE keystone   File "/opt/stack/keystone/keystone/cmd/cli.py", line 129, in load_backends
  2016-11-28 11:51:39.723 15663 TRACE keystone drivers = backends.load_backends()
  2016-11-28 11:51:39.723 15663 TRACE keystone   File "/opt/stack/keystone/keystone/server/backends.py", line 32, in load_backends
  2016-11-28 11:51:39.723 15663 TRACE keystone cache.configure_cache()
  2016-11-28 11:51:39.723 15663 TRACE keystone   File "/opt/stack/keystone/keystone/common/cache/core.py", line 124, in configure_cache
  2016-11-28 11:51:39.723 15663 TRACE keystone cache.configure_cache_region(CONF, region)
  2016-11-28 11:51:39.723 15663 TRACE keystone   File "/usr/local/lib/python3.5/dist-packages/oslo_cache/core.py", line 201, in configure_cache_region
  2016-11-28 11:51:39.723 15663 TRACE keystone '%s.' % conf.cache.config_prefix)
  2016-11-28 11:51:39.723 15663 TRACE keystone   File "/usr/local/lib/python3.5/dist-packages/dogpile/cache/region.py", line 552, in configure_from_config
  2016-11-28 11:51:39.723 15663 TRACE keystone "%swrap" % prefix, None),
  2016-11-28 11:51:39.723 15663 TRACE keystone   File "/usr/local/lib/python3.5/dist-packages/dogpile/cache/region.py", line 417, in configure
  2016-11-28 11:51:39.723 15663 TRACE keystone _config_prefix
  2016-11-28 11:51:39.723 15663 TRACE keystone   File "/usr/local/lib/python3.5/dist-packages/dogpile/cache/api.py", line 81, in from_config_dict
  2016-11-28 11:51:39.723 15663 TRACE keystone for key in config_dict
  2016-11-28 11:51:39.723 15663 TRACE keystone   File "/usr/local/lib/python3.5/dist-packages/dogpile/cache/backends/memcached.py", line 208, in __init__
  2016-11-28 11:51:39.723 15663 TRACE keystone super(MemcacheArgs, self).__init__(arguments)
  2016-11-28 11:51:39.723 15663 TRACE keystone   File "/usr/local/lib/python3.5/dist-packages/dogpile/cache/backends/memcached.py", line 108, in __init__
  2016-11-28 11:51:39.723 15663 TRACE keystone self._imports()
  2016-11-28 11:51:39.723 15663 TRACE keystone   File "/usr/local/lib/python3.5/dist-packages/dogpile/cache/backends/memcached.py", line 287, in _imports
  2016-11-28 11:51:39.723 15663 TRACE keystone import memcache  # noqa
  2016-11-28 11:51:39.723 15663 TRACE keystone ImportError: No module named 'memcache'
  2016-11-28 11:51:39.723 15663 TRACE keystone 

  local.conf

  [[local|localrc]]

  USE_PYTHON3=True
  PYTHON3_VERSION=3.5

  Python: 3.5.2

  Ubuntu version (lsb_release -a):
  Distributor ID:   Ubuntu
  Description:  Ubuntu 16.04 LTS
  Release:  16.04
  Codename: xenial

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1645263/+subscriptions


[Yahoo-eng-team] [Bug 1698000] [NEW] api-ref: GET /os-hypervisors/{hypervisor_hostname_pattern}/search response parameter hypervisor_hostname description is wrong

2017-06-14 Thread Matt Riedemann
Public bug reported:

The docs say:

https://developer.openstack.org/api-ref/compute/?expanded=search-
hypervisor-detail#search-hypervisor

"The hypervisor host name provided by the Nova virt driver. For the
Ironic driver, it is the Ironic node name."

However, for Ironic, the hypervisor_hostname is the Ironic node uuid,
not the name:

https://github.com/openstack/nova/blob/b94b02b4503cf7eded3fafb84c436395d4beb6ec/nova/virt/ironic/driver.py#L332

** Affects: nova
 Importance: Low
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1698000

Title:
  api-ref: GET /os-hypervisors/{hypervisor_hostname_pattern}/search
  response parameter hypervisor_hostname description is wrong

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  The docs say:

  https://developer.openstack.org/api-ref/compute/?expanded=search-
  hypervisor-detail#search-hypervisor

  "The hypervisor host name provided by the Nova virt driver. For the
  Ironic driver, it is the Ironic node name."

  However, for Ironic, the hypervisor_hostname is the Ironic node uuid,
  not the name:

  https://github.com/openstack/nova/blob/b94b02b4503cf7eded3fafb84c436395d4beb6ec/nova/virt/ironic/driver.py#L332

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1698000/+subscriptions



[Yahoo-eng-team] [Bug 1244457] Re: ServiceCatalogException: Invalid service catalog service: compute

2017-06-14 Thread Sean Dague
This grenade bug was last updated over 180 days ago. As grenade is a
fast-moving project and we'd like to get the tracker down to currently
actionable bugs, it is being marked as Invalid. If the issue still
exists, please feel free to reopen it.

** Changed in: grenade
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1244457

Title:
  ServiceCatalogException: Invalid service catalog service: compute

Status in grenade:
  Invalid
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  On the following review - https://review.openstack.org/#/c/53712/

  We failed the tempest tests on the dashboard scenario tests for the pg version of the job:
  2013-10-24 21:26:00.445 | ======================================================================
  2013-10-24 21:26:00.445 | FAIL: tempest.scenario.test_dashboard_basic_ops.TestDashboardBasicOps.test_basic_scenario[dashboard]
  2013-10-24 21:26:00.445 | tempest.scenario.test_dashboard_basic_ops.TestDashboardBasicOps.test_basic_scenario[dashboard]
  2013-10-24 21:26:00.445 | ----------------------------------------------------------------------
  2013-10-24 21:26:00.446 | _StringException: Empty attachments:
  2013-10-24 21:26:00.446 |   pythonlogging:''
  2013-10-24 21:26:00.446 |   stderr
  2013-10-24 21:26:00.446 |   stdout
  2013-10-24 21:26:00.446 | 
  2013-10-24 21:26:00.446 | Traceback (most recent call last):
  2013-10-24 21:26:00.446 |   File "tempest/scenario/test_dashboard_basic_ops.py", line 73, in test_basic_scenario
  2013-10-24 21:26:00.447 | self.user_login()
  2013-10-24 21:26:00.447 |   File "tempest/scenario/test_dashboard_basic_ops.py", line 64, in user_login
  2013-10-24 21:26:00.447 | self.opener.open(req, urllib.urlencode(params))
  2013-10-24 21:26:00.447 |   File "/usr/lib/python2.7/urllib2.py", line 406, in open
  2013-10-24 21:26:00.447 | response = meth(req, response)
  2013-10-24 21:26:00.447 |   File "/usr/lib/python2.7/urllib2.py", line 519, in http_response
  2013-10-24 21:26:00.447 | 'http', request, response, code, msg, hdrs)
  2013-10-24 21:26:00.448 |   File "/usr/lib/python2.7/urllib2.py", line 438, in error
  2013-10-24 21:26:00.448 | result = self._call_chain(*args)
  2013-10-24 21:26:00.448 |   File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
  2013-10-24 21:26:00.448 | result = func(*args)
  2013-10-24 21:26:00.448 |   File "/usr/lib/python2.7/urllib2.py", line 625, in http_error_302
  2013-10-24 21:26:00.448 | return self.parent.open(new, timeout=req.timeout)
  2013-10-24 21:26:00.448 |   File "/usr/lib/python2.7/urllib2.py", line 406, in open
  2013-10-24 21:26:00.449 | response = meth(req, response)
  2013-10-24 21:26:00.449 |   File "/usr/lib/python2.7/urllib2.py", line 519, in http_response
  2013-10-24 21:26:00.449 | 'http', request, response, code, msg, hdrs)
  2013-10-24 21:26:00.449 |   File "/usr/lib/python2.7/urllib2.py", line 438, in error
  2013-10-24 21:26:00.449 | result = self._call_chain(*args)
  2013-10-24 21:26:00.449 |   File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
  2013-10-24 21:26:00.449 | result = func(*args)
  2013-10-24 21:26:00.450 |   File "/usr/lib/python2.7/urllib2.py", line 625, in http_error_302
  2013-10-24 21:26:00.450 | return self.parent.open(new, timeout=req.timeout)
  2013-10-24 21:26:00.450 |   File "/usr/lib/python2.7/urllib2.py", line 406, in open
  2013-10-24 21:26:00.450 | response = meth(req, response)
  2013-10-24 21:26:00.450 |   File "/usr/lib/python2.7/urllib2.py", line 519, in http_response
  2013-10-24 21:26:00.450 | 'http', request, response, code, msg, hdrs)
  2013-10-24 21:26:00.450 |   File "/usr/lib/python2.7/urllib2.py", line 444, in error
  2013-10-24 21:26:00.451 | return self._call_chain(*args)
  2013-10-24 21:26:00.451 |   File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
  2013-10-24 21:26:00.451 | result = func(*args)
  2013-10-24 21:26:00.451 |   File "/usr/lib/python2.7/urllib2.py", line 527, in http_error_default
  2013-10-24 21:26:00.451 | raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
  2013-10-24 21:26:00.451 | HTTPError: HTTP Error 500: INTERNAL SERVER ERROR

  The horizon logs have the following error info:

  [Thu Oct 24 21:18:43 2013] [error] Internal Server Error: /project/
  [Thu Oct 24 21:18:43 2013] [error] Traceback (most recent call last):
  [Thu Oct 24 21:18:43 2013] [error]   File 
"/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 
115, in get_response
  [Thu Oct 24 21:18:43 2013] [error] response = callback(request, 
*callback_args, **callback_kwargs)
  [Thu Oct 24 21:18:43 2013] [error]   File 
"/opt/stack/new/horizon/openstack_dashboard/wsgi/../../horizon/decorators.py", 
line 38, in dec
 

[Yahoo-eng-team] [Bug 1250525] Re: nova-conductor did not start after upgrade

2017-06-14 Thread Sean Dague
This grenade bug was last updated over 180 days ago, as grenade
is a fast moving project and we'd like to get the tracker down to
currently actionable bugs, this is getting marked as Invalid. If the
issue still exists, please feel free to reopen it.

** Changed in: grenade
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1250525

Title:
  nova-conductor did not start after upgrade

Status in grenade:
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  From patch 4 in https://review.openstack.org/#/c/55251/
  From 
http://logs.openstack.org/51/55251/4/check/check-grenade-devstack-vm/35a5476/console.html

  2013-11-12 12:26:01.889 | The following services are not running after
  upgrade:  nova-conductor

  This can be seen in the new n-cpu log:

  2013-11-12 12:25:45.354 WARNING nova.conductor.api [req-ec7a159c-5dad-
  4eda-9aa6-b98d6ba0d1f1 None None] Timed out waiting for nova-
  conductor. Is it running? Or did this service start before nova-
  conductor?

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1250525/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405626] Re: check-tempest-dsvm-ironic-pxe_ssh is failing

2017-06-14 Thread Sean Dague
This grenade bug was last updated over 180 days ago, as grenade
is a fast moving project and we'd like to get the tracker down to
currently actionable bugs, this is getting marked as Invalid. If the
issue still exists, please feel free to reopen it.

** Changed in: grenade
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1405626

Title:
  check-tempest-dsvm-ironic-pxe_ssh is failing

Status in devstack:
  Fix Released
Status in diskimage-builder:
  Invalid
Status in grenade:
  Invalid
Status in Ironic:
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Since 2014/12/25, check-tempest-dsvm-ironic-pxe_ssh has been failing
  due to a "mkdir" failure:

  http://logs.openstack.org/09/138009/6/check/check-tempest-dsvm-ironic-
  pxe_ssh/e334e39/logs/devstacklog.txt.gz

  2014-12-25 10:09:11.299 | ++ ramdisk-image-create ubuntu deploy-ironic -o 
/opt/stack/new/devstack/files/ir-deploy-pxe_ssh
  2014-12-25 10:09:11.308 | mkdir: cannot create directory 
'/opt/stack/new/.cache': Permission denied

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1405626/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466485] Re: keystone fails with: ArgsAlreadyParsedError: arguments already parsed: cannot register CLI option

2017-06-14 Thread Sean Dague
This grenade bug was last updated over 180 days ago, as grenade
is a fast moving project and we'd like to get the tracker down to
currently actionable bugs, this is getting marked as Invalid. If the
issue still exists, please feel free to reopen it.

** Changed in: grenade
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1466485

Title:
  keystone fails with: ArgsAlreadyParsedError: arguments already parsed:
  cannot register CLI option

Status in grenade:
  Invalid
Status in OpenStack Identity (keystone):
  Expired

Bug description:
  Grenade jobs in master fail with the following scenario:

  - grenade.sh attempts to list glance images [1];
  - glance fails because keystone httpd returns 500 [2];
  - keystone fails because "ArgsAlreadyParsedError: arguments already parsed: 
cannot register CLI option" [3]

  Sean Dague says that it's because grenade does not upgrade the keystone
  script, and the script should not even be installed the way it is now
  (copied into /var/www/...).

  Relevant thread: http://lists.openstack.org/pipermail/openstack-
  dev/2015-June/067147.html

  [1]: 
http://logs.openstack.org/66/185066/3/check/check-grenade-dsvm-neutron/45d8663/logs/grenade.sh.txt.gz#_2015-06-18_09_08_32_989
  [2]: 
http://logs.openstack.org/66/185066/3/check/check-grenade-dsvm-neutron/45d8663/logs/new/screen-g-api.txt.gz#_2015-06-18_09_08_42_531
  [3]: 
http://logs.openstack.org/66/185066/3/check/check-grenade-dsvm-neutron/45d8663/logs/apache/keystone.txt.gz#_2015-06-18_09_08_46_675874

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1466485/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470625] Re: Mechanism to register and run all external neutron alembic migrations automatically

2017-06-14 Thread Sean Dague
** Changed in: devstack
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470625

Title:
  Mechanism to register and run all external neutron alembic migrations
  automatically

Status in devstack:
  Fix Released
Status in networking-cisco:
  Fix Committed
Status in networking-l2gw:
  Fix Committed
Status in neutron:
  Fix Released

Bug description:
  For alembic migration branches that are out-of-tree, we need a
  mechanism whereby the external code can register its branches when it
  is installed; neutron will then automatically run all installed
  external migration branches when neutron-db-manage is used for
  upgrading.
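
  One way such registration is commonly wired up is via setuptools entry
  points, which neutron-db-manage can then enumerate. This is a sketch
  under the assumption that the entry-point group is named
  `neutron.db.alembic_migrations`; the helper function is hypothetical,
  not neutron's actual code:

```python
# Sketch: discover externally installed alembic migration branches via
# setuptools entry points. The group name 'neutron.db.alembic_migrations'
# and the helper below are illustrative, not neutron's actual code.
import pkg_resources

def installed_alembic_projects(group='neutron.db.alembic_migrations'):
    # Each installed sub-project (e.g. networking-cisco) would advertise
    # its migration module under this group in its setup.cfg.
    return sorted(ep.name for ep in pkg_resources.iter_entry_points(group))
```

  With this arrangement a sub-project only needs an `[entry_points]`
  stanza in its own setup.cfg; neutron itself needs no per-project
  changes.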

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1470625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1640335] Re: DisallowedHost: Invalid HTTP_HOST header

2017-06-14 Thread Sean Dague
This devstack bug was last updated over 180 days ago, as devstack
is a fast moving project and we'd like to get the tracker down to
currently actionable bugs, this is getting marked as Invalid. If the
issue still exists, please feel free to reopen it.

** Changed in: devstack
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1640335

Title:
  DisallowedHost: Invalid HTTP_HOST header

Status in devstack:
  Invalid
Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  Every time I start the horizon service, it reports the following exception:
  nicira@htb-1n-eng-dhcp8:~/devstack$ sudo tail -f 
/var/log/apache2/horizon_error.log | sed 's/\\x1b/\o033/g' & echo $! 
>/opt/stack/status/stack/horizon.pid; fg || echo "horizon failed to start" | 
tee "/opt/stack/status/stack/horizon.failure"
  [1] 8876
  sudo tail -f /var/log/apache2/horizon_error.log | sed 's/\\x1b/\o033/g'
  2016-11-08 22:44:54.799461 obj = self.var.resolve(context)
  2016-11-08 22:44:54.799471   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 789, in 
resolve
  2016-11-08 22:44:54.799480 value = self._resolve_lookup(context)
  2016-11-08 22:44:54.799488   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 849, in 
_resolve_lookup
  2016-11-08 22:44:54.799497 current = current()
  2016-11-08 22:44:54.799506   File 
"/usr/local/lib/python2.7/dist-packages/django/http/request.py", line 152, in 
build_absolute_uri
  2016-11-08 22:44:54.799515 host=self.get_host(),
  2016-11-08 22:44:54.799524   File 
"/usr/local/lib/python2.7/dist-packages/django/http/request.py", line 102, in 
get_host
  2016-11-08 22:44:54.799540 raise DisallowedHost(msg)
  2016-11-08 22:44:54.799568 DisallowedHost: Invalid HTTP_HOST header: 
'10.162.63.93'. You may need to add u'10.162.63.93' to ALLOWED_HOSTS.

  Horizon UI also reports "Internal Server Error"

  I am using the latest master code for Horizon.
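
  The exception text itself suggests the usual remedy: list the host in
  Django's ALLOWED_HOSTS. In a devstack install that setting would
  typically live in Horizon's local_settings.py (the exact path below is
  an assumption of this sketch):

```python
# openstack_dashboard/local/local_settings.py (hypothetical location)
# Django rejects requests whose Host header is not in ALLOWED_HOSTS,
# raising DisallowedHost exactly as in the traceback above.
ALLOWED_HOSTS = ['10.162.63.93', 'localhost']
```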

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1640335/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387055] Re: glanceclient/shell.py:592: raise exc.CommandError("Invalid OpenStack Identity credentials.")

2017-06-14 Thread Sean Dague
This devstack bug was last updated over 180 days ago, as devstack
is a fast moving project and we'd like to get the tracker down to
currently actionable bugs, this is getting marked as Invalid. If the
issue still exists, please feel free to reopen it.

** Changed in: devstack
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1387055

Title:
  glanceclient/shell.py:592: raise exc.CommandError("Invalid
  OpenStack Identity credentials.")

Status in devstack:
  Invalid
Status in Glance:
  Won't Fix
Status in Glance Client:
  Won't Fix

Bug description:
  When using DevStack to install OpenStack, I got the following error:

  2014-10-29 15:06:45.890 | ++ glance --os-auth-token 
12a7b1359dc044ff9e9e1c81a2390152 --os-image-url http://192.168.56.101:9292 
image-create --name cirros-0.3.1-x86_64-uec-kernel --is-public True 
--container-format aki --disk-format aki
  2014-10-29 15:06:45.915 | ++ read data
  2014-10-29 15:06:50.456 | Invalid OpenStack Identity credentials.
  2014-10-29 15:06:50.584 | + KERNEL_ID=
  2014-10-29 15:06:50.640 | + '[' -n 
/home/xtrutri/Work/devstack/files/images/cirros-0.3.1-x86_64-uec/cirros-0.3.1-x86_64-initrd
 ']'
  2014-10-29 15:06:50.657 | ++ glance --os-auth-token 
12a7b1359dc044ff9e9e1c81a2390152 --os-image-url http://192.168.56.101:9292 
image-create --name cirros-0.3.1-x86_64-uec-ramdisk --is-public True 
--container-format ari --disk-format ari
  2014-10-29 15:06:50.681 | ++ grep ' id '
  2014-10-29 15:06:50.690 | ++ get_field 2
  2014-10-29 15:06:50.699 | ++ read data
  2014-10-29 15:06:55.658 | Invalid OpenStack Identity credentials.
  2014-10-29 15:06:55.733 | + RAMDISK_ID=
  2014-10-29 15:06:55.783 | + glance --os-auth-token 
12a7b1359dc044ff9e9e1c81a2390152 --os-image-url http://192.168.56.101:9292 
image-create --name cirros-0.3.1-x86_64-uec --is-public True --container-format 
ami --disk-format ami
  2014-10-29 15:07:00.966 | Invalid OpenStack Identity credentials.
  xtrutri@ubuntu:~/Work/devstack$ 2014-10-29 15:07:01.061 | + exit_trap


  DevStack version: at commit 8cedabcea8bb446f1c29aab42fbcbf5a87218f7f (Sat May 
10 12:24:16 2014 +)
  KeyStone version: 2014.2.rc1-106-gf45b3e5
  GlanceClient version: 0.14.1-11-gcfe0623
  Glance version: 2014.2.rc1-91-gded0852

  
  Here is the traceback:

  2014-10-29 15:06:57.263 16394 DEBUG glance.api.middleware.version_negotiation 
[-] Using url versioning process_request 
/opt/stack_stable_juno/glance/glance/api/middleware/version_negotiation.py:57
  2014-10-29 15:06:57.265 16394 DEBUG glance.api.middleware.version_negotiation 
[-] Matched version: v1 process_request 
/opt/stack_stable_juno/glance/glance/api/middleware/version_negotiation.py:69
  2014-10-29 15:06:57.266 16394 DEBUG glance.api.middleware.version_negotiation 
[-] new path /v1/images process_request 
/opt/stack_stable_juno/glance/glance/api/middleware/version_negotiation.py:70
  2014-10-29 15:06:57.267 16394 DEBUG keystoneclient.session [-] REQ: curl -i 
-X GET http://127.0.0.1:35357/ -H "Accept: application/json" -H "User-Agent: 
python-keystoneclient" _http_log_request 
/opt/stack_stable_juno/python-keystoneclient/keystoneclient/session.py:162
  2014-10-29 15:06:57.270 16394 WARNING keystonemiddleware.auth_token [-] 
Retrying on HTTP connection exception: Unable to establish connection to 
http://127.0.0.1:35357/
  2014-10-29 15:06:57.772 16394 DEBUG keystoneclient.session [-] REQ: curl -i 
-X GET http://127.0.0.1:35357/ -H "Accept: application/json" -H "User-Agent: 
python-keystoneclient" _http_log_request 
/opt/stack_stable_juno/python-keystoneclient/keystoneclient/session.py:162
  2014-10-29 15:06:57.775 16394 WARNING keystonemiddleware.auth_token [-] 
Retrying on HTTP connection exception: Unable to establish connection to 
http://127.0.0.1:35357/
  2014-10-29 15:06:58.777 16394 DEBUG keystoneclient.session [-] REQ: curl -i 
-X GET http://127.0.0.1:35357/ -H "Accept: application/json" -H "User-Agent: 
python-keystoneclient" _http_log_request 
/opt/stack_stable_juno/python-keystoneclient/keystoneclient/session.py:162
  2014-10-29 15:06:58.782 16394 WARNING keystonemiddleware.auth_token [-] 
Retrying on HTTP connection exception: Unable to establish connection to 
http://127.0.0.1:35357/
  2014-10-29 15:07:00.793 16394 DEBUG keystoneclient.session [-] REQ: curl -i 
-X GET http://127.0.0.1:35357/ -H "Accept: application/json" -H "User-Agent: 
python-keystoneclient" _http_log_request 
/opt/stack_stable_juno/python-keystoneclient/keystoneclient/session.py:162
  2014-10-29 15:07:00.803 16394 ERROR keystonemiddleware.auth_token [-] HTTP 
connection exception: Unable to establish connection to http://127.0.0.1:35357/
  2014-10-29 15:07:00.807 16394 WARNING keystonemiddleware.auth_token [-] 
Authorization failed for token

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1448248] Re: Keystone Middleware Installation

2017-06-14 Thread Sean Dague
This devstack bug was last updated over 180 days ago, as devstack
is a fast moving project and we'd like to get the tracker down to
currently actionable bugs, this is getting marked as Invalid. If the
issue still exists, please feel free to reopen it.

** Changed in: devstack
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1448248

Title:
  Keystone Middleware Installation

Status in devstack:
  Invalid
Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Hi,

  I was performing an OpenStack DevStack Juno installation; I downloaded
  the scripts from GitHub and got a keystone middleware error:

  + install_keystonemiddleware
  + use_library_from_git keystonemiddleware
  + local name=keystonemiddleware
  + local enabled=1
  + [[ ,, =~ ,keystonemiddleware, ]]
  + return 1
  + pip_install_gr keystonemiddleware
  + local name=keystonemiddleware
  ++ get_from_global_requirements keystonemiddleware
  ++ local package=keystonemiddleware
  +++ cut -d# -f1
  +++ grep -h '^keystonemiddleware' 
/opt/stack/requirements/global-requirements.txt
  ++ local required_pkg=
  ++ [[ '' == '' ]]
  ++ die 1601 'Can'\''t find package keystonemiddleware in requirements'
  ++ local exitcode=0
  ++ set +o xtrace
  [ERROR] /home/stack/devstack/functions-common:1601 Can't find package 
keystonemiddleware in requirements
  + local 'clean_name=[Call Trace]
  ./stack.sh:781:install_keystonemiddleware
  /home/stack/devstack/lib/keystone:496:pip_install_gr
  /home/stack/devstack/functions-common:1535:get_from_global_requirements
  /home/stack/devstack/functions-common:1601:die'
  + pip_install '[Call' 'Trace]' ./stack.sh:781:install_keystonemiddleware 
/home/stack/devstack/lib/keystone:496:pip_install_gr 
/home/stack/devstack/functions-common:1535:get_from_global_requirements 
/home/stack/devstack/functions-common:1601:die
  ++ set +o
  ++ grep xtrace
  + local 'xtrace=set -o xtrace'
  + set +o xtrace
  + sudo -H PIP_DOWNLOAD_CACHE=/var/cache/pip http_proxy= https_proxy= 
no_proxy= /usr/local/bin/pip install '[Call' 'Trace]' 
./stack.sh:781:install_keystonemiddleware 
/home/stack/devstack/lib/keystone:496:pip_install_gr 
/home/stack/devstack/functions-common:1535:get_from_global_requirements 
/home/stack/devstack/functions-common:1601:die
  DEPRECATION: --download-cache has been deprecated and will be removed in the 
future. Pip now automatically uses and configures its cache.
  Exception:
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 246, 
in main
  status = self.run(options, args)
File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 
308, in run
  name, None, isolated=options.isolated_mode,
File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 
220, in from_line
  isolated=isolated)
File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 
79, in __init__
  req = pkg_resources.Requirement.parse(req)
File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 2960, in parse
  reqs = list(parse_requirements(s))
File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 2891, in parse_requirements
  raise ValueError("Missing distribution spec", line)
  ValueError: ('Missing distribution spec', '[Call')

  + exit_trap
  + local r=2
  ++ jobs -p
  + jobs=
  + [[ -n '' ]]
  + kill_spinner
  + '[' '!' -z '' ']'
  + [[ 2 -ne 0 ]]
  + echo 'Error on exit'
  Error on exit
  + [[ -z '' ]]
  + /home/stack/devstack/tools/worlddump.py
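
  The xtrace above shows the underlying pattern: `get_from_global_requirements`
  fails (there is no `keystonemiddleware` line in global-requirements.txt),
  its multi-line error message is captured by command substitution, and the
  unquoted expansion then word-splits that message into pip's argument
  list — hence pip seeing '[Call' and 'Trace]' as package names. A
  hypothetical reduction of that failure mode, not devstack's actual code:

```shell
# Hypothetical reduction of the failure mode above: a failing helper's
# error text is captured by $(...) and later word-split into an argv.
get_pkg() { echo '[Call Trace] functions-common:1601:die'; return 1; }
required_pkg=$(get_pkg) || true   # capture succeeds even though get_pkg fails
set -- $required_pkg              # unquoted expansion word-splits the message
first_arg=$1                      # pip would receive '[Call' as a "package"
echo "pip install got: $first_arg"
```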

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1448248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491152] Re: Don't run shelve tests in tempest if cells is enabled

2017-06-14 Thread Sean Dague
This devstack bug was last updated over 180 days ago, as devstack
is a fast moving project and we'd like to get the tracker down to
currently actionable bugs, this is getting marked as Invalid. If the
issue still exists, please feel free to reopen it.

** Changed in: devstack
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491152

Title:
  Don't run shelve tests in tempest if cells is enabled

Status in devstack:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Nova controls the tests it runs (or doesn't run) for the cells
  devstack tempest job in:

  http://git.openstack.org/cgit/openstack/nova/tree/devstack/tempest-
  dsvm-cells-rc

  There are 3 tests in there that are blacklisted for shelve.

  Tempest provides a config option to not run the shelve tests:

  http://git.openstack.org/cgit/openstack/tempest/tree/tempest/config.py#n343

  We should move that out of the nova rc file and into
  devstack/lib/tempest, like what is done for ironic:

  https://github.com/openstack-dev/devstack/blob/master/lib/tempest#L526

  So in lib/tempest you'd check to see if the n-cell service is running
  and, if so, use iniset to set tempest.conf to not run shelve tests,
  i.e.:

  if is_service_enabled n-cell; then
  iniset $TEMPEST_CONFIG compute-feature-enabled shelve False
  fi

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1491152/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1502369] Re: Jenkins/tox fails to generate docs

2017-06-14 Thread Sean Dague
This devstack bug was last updated over 180 days ago, as devstack
is a fast moving project and we'd like to get the tracker down to
currently actionable bugs, this is getting marked as Invalid. If the
issue still exists, please feel free to reopen it.

** Changed in: devstack
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1502369

Title:
  Jenkins/tox fails to generate docs

Status in devstack:
  Invalid
Status in Glance:
  Fix Released

Bug description:
  Hi,

  When we run "tox" in fresh clone of glance-specs repo, it fails with
  the below error (full log can be found here:
  http://paste.openstack.org/show/475234/):

  running build_ext
    Traceback (most recent call last):
  File "", line 1, in 
  File "/tmp/pip-build-OfhUFL/Pillow/setup.py", line 767, in 
    zip_safe=not debug_build(),
  File "/usr/lib/python2.7/distutils/core.py", line 151, in setup
    dist.run_commands()
  File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
    self.run_command(cmd)
  File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File 
"/home/dramakri/glance-specs/.tox/py27/local/lib/python2.7/site-packages/wheel/bdist_wheel.py",
 line 175, in run
    self.run_command('build')
  File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "/usr/lib/python2.7/distutils/command/build.py", line 128, in run
    self.run_command(cmd_name)
  File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "/usr/lib/python2.7/distutils/command/build_ext.py", line 337, in run
    self.build_extensions()
  File "/tmp/pip-build-OfhUFL/Pillow/setup.py", line 515, in 
build_extensions
    % (f, f))
    ValueError: --enable-jpeg requested but jpeg not found, aborting.

    
    Failed building wheel for Pillow
  Failed to build Pillow

  This also causes Jenkins to fail on any submission. I noticed this issue when 
I tried to upload a new spec (https://review.openstack.org/#/c/230679/) to the 
glance-specs repo and it failed.
  Link to the Jenkins log for the failed run: 
http://logs.openstack.org/79/230679/1/check/gate-glance-specs-docs/e34dc8b/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1502369/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511505] Re: No handlers could be found for logger "oslo_config.cfg"

2017-06-14 Thread Sean Dague
This devstack bug was last updated over 180 days ago, as devstack
is a fast moving project and we'd like to get the tracker down to
currently actionable bugs, this is getting marked as Invalid. If the
issue still exists, please feel free to reopen it.

** Changed in: devstack
   Status: Fix Committed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1511505

Title:
  No handlers could be found for logger "oslo_config.cfg"

Status in devstack:
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Whenever I run a command using 'nova-manage', I get an unrelated
  warning followed by the output of my command:

  $ nova-manage vm list
  No handlers could be found for logger "oslo_config.cfg"
  instance   nodetype   state  launched ...
  

  Based on a quick bit of work with pdb, it seems this line is the
  culprit:

  
https://github.com/openstack/oslo.config/blob/e208b500464f25930392c48c6748a48c752f1ccf/oslo_config/cfg.py#L774

  We'd likely see this issue earlier/more often were a more verbose
  logging configuration set for 'oslo_config', but this is the only
  logging message typically issued by this logger. This is not an
  'oslo_config' issue but a 'nova' one: the logger just seems to be
  misconfigured.

  There's also the issue of why this warning is occurring in the first
  place. I suspect out-of-date configuration in devstack (which I'm using).
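
  For reference, the Python 2 warning appears whenever a logger emits a
  record and no handler is found anywhere up its hierarchy; configuring
  the root logger (as a normal service log setup does) makes it go away.
  A minimal sketch of that behavior, not nova's code:

```python
import logging

# Emitting on a logger with no handler anywhere up the hierarchy is what
# produces "No handlers could be found for logger ..." under Python 2.
# Installing a root handler -- as a proper logging setup would -- fixes it.
logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("oslo_config.cfg")
log.warning("deprecated option used")  # now routed to the root handler
```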

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1511505/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273386] Re: Neutron namespace metadata proxy triggers kernel crash on Ubuntu 12.04/3.2 kernel

2017-06-14 Thread Sean Dague
This devstack bug was last updated over 180 days ago, as devstack
is a fast moving project and we'd like to get the tracker down to
currently actionable bugs, this is getting marked as Invalid. If the
issue still exists, please feel free to reopen it.

** Changed in: devstack
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1273386

Title:
  Neutron namespace metadata proxy triggers kernel crash on Ubuntu
  12.04/3.2 kernel

Status in devstack:
  Invalid
Status in neutron:
  Fix Released
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  In the past 9 days we have been seeing very frequent occurrences of
  this kernel crash: http://paste.openstack.org/show/61869/

  Although the particular crash pasted here was triggered by dnsmasq, in
  almost all cases the crash is actually triggered by the neutron
  metadata proxy.

  This also affects nova badly since this issue, which appears
  namespace-related, results in a hang while mounting the nbd device for
  key injection.

  logstash query:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcImtlcm5lbCBCVUcgYXQgL2J1aWxkL2J1aWxkZC9saW51eC0zLjIuMC9mcy9idWZmZXIuYzoyOTE3XCIgYW5kIGZpbGVuYW1lOnN5c2xvZy50eHQiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNC0wMS0xNlQxODo1MDo0OCswMDowMCIsInRvIjoiMjAxNC0wMS0yN1QxOToxNjoxMSswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxMzkwODUwMzI2ODY0fQ==

  We have seen about 398 hits since the bug started to manifest. The
  decreased hit rate in the past few days is due to fewer neutron patches
  being pushed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1273386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1287354] Re: check-grenade-dsvm-partial-ncpu breaks on libguestfs

2017-06-14 Thread Sean Dague
This devstack bug was last updated over 180 days ago, as devstack
is a fast moving project and we'd like to get the tracker down to
currently actionable bugs, this is getting marked as Invalid. If the
issue still exists, please feel free to reopen it.

** Changed in: devstack
   Status: Fix Committed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1287354

Title:
  check-grenade-dsvm-partial-ncpu breaks on libguestfs

Status in devstack:
  Invalid
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  http://logs.openstack.org/65/41265/40/check/check-grenade-dsvm-
  partial-ncpu/5db5bcf/logs/old/screen-n-cpu.txt.gz?level=AUDIT

  
  
http://logs.openstack.org/65/41265/40/check/check-grenade-dsvm-partial-ncpu/5db5bcf/

  http://logs.openstack.org/65/41265/40/check/check-grenade-dsvm-
  partial-ncpu/5db5bcf/logs/testr_results.html.gz

  2014-03-03 18:35:59.215 6130 TRACE nova.servicegroup.drivers.db exc.info, 
real_topic, msg.get('method'))
  2014-03-03 18:35:59.215 6130 TRACE nova.servicegroup.drivers.db Timeout: 
Timeout while waiting on RPC response - topic: "conductor", RPC method: 
"service_update" info: ""
  2014-03-03 18:35:59.215 6130 TRACE nova.servicegroup.drivers.db 
  2014-03-03 18:35:59.216 6130 WARNING nova.openstack.common.loopingcall [-] 
task run outlasted interval by 50.01159 sec
  2014-03-03 18:36:38.563 6130 ERROR nova.openstack.common.periodic_task [-] 
Error during ComputeManager.update_available_resource: Timeout while waiting on 
RPC response - topic: "conductor", RPC method: "object_class_action" info: 
""
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task 
Traceback (most recent call last):
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/old/nova/nova/openstack/common/periodic_task.py", line 180, in 
run_periodic_tasks
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task 
task(self, context)
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/old/nova/nova/compute/manager.py", line 4881, in 
update_available_resource
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task 
rt.update_available_resource(context)
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/old/nova/nova/openstack/common/lockutils.py", line 246, in inner
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task 
return f(*args, **kwargs)
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/old/nova/nova/compute/resource_tracker.py", line 296, in 
update_available_resource
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task 
context, self.host, self.nodename)
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/old/nova/nova/objects/base.py", line 106, in wrapper
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task 
args, kwargs)
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/old/nova/nova/conductor/rpcapi.py", line 492, in object_class_action
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task 
objver=objver, args=args, kwargs=kwargs)
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/old/nova/nova/rpcclient.py", line 85, in call
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task 
return self._invoke(self.proxy.call, ctxt, method, **kwargs)
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/old/nova/nova/rpcclient.py", line 63, in _invoke
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task 
return cast_or_call(ctxt, msg, **self.kwargs)
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/old/nova/nova/openstack/common/rpc/proxy.py", line 130, in call
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task 
exc.info, real_topic, msg.get('method'))
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task 
Timeout: Timeout while waiting on RPC response - topic: "conductor", RPC 
method: "object_class_action" info: ""
  2014-03-03 18:36:38.563 6130 TRACE nova.openstack.common.periodic_task

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1287354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330132] Re: Creation of Member role is no longer required

2017-06-14 Thread Sean Dague
This devstack bug was last updated over 180 days ago, as devstack
is a fast moving project and we'd like to get the tracker down to
currently actionable bugs, this is getting marked as Invalid. If the
issue still exists, please feel free to reopen it.

** Changed in: devstack
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1330132

Title:
  Creation of Member role is no longer required

Status in devstack:
  Invalid
Status in OpenStack Identity (keystone):
  Fix Released
Status in tempest:
  Confirmed

Bug description:
  Since Grizzly the Keystone service's SQL creation/migration scripts
  automatically create a role named _member_ for use as the default
  member role. Since Icehouse (backported to Havana) Horizon uses this
  as the default member role.

  Devstack still creates a Member role, as was previously required:

  318 # The Member role is used by Horizon and Swift so we need to keep it:
  319 MEMBER_ROLE=$(openstack role create \
  320 Member \
  321 | grep " id " | get_field 2)

  As noted above, Horizon no longer uses such a role in the default
  configuration and on investigation the Swift dependency appears to be
  introduced by the way devstack configures Swift.

  As such it should now be possible to stop creating this role (with
  corresponding changes to the Swift setup in devstack) and use _member_
  instead, avoiding the creation (and confusion) of having two member
  roles with different names.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1330132/+subscriptions



[Yahoo-eng-team] [Bug 1412653] Re: ofagent decomposition

2017-06-14 Thread Sean Dague
This devstack bug was last updated over 180 days ago, as devstack
is a fast moving project and we'd like to get the tracker down to
currently actionable bugs, this is getting marked as Invalid. If the
issue still exists, please feel free to reopen it.

** Changed in: devstack
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1412653

Title:
  ofagent decomposition

Status in devstack:
  Invalid
Status in neutron:
  Fix Released

Bug description:
  this bug is to track the status of neutron core-vendor-decomposition
  for ofagent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1412653/+subscriptions



[Yahoo-eng-team] [Bug 1421863] Re: "Can not find policy directory: policy.d" spams the logs

2017-06-14 Thread Sean Dague
This devstack bug was last updated over 180 days ago, as devstack
is a fast moving project and we'd like to get the tracker down to
currently actionable bugs, this is getting marked as Invalid. If the
issue still exists, please feel free to reopen it.

** Changed in: devstack
   Status: Fix Committed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1421863

Title:
  "Can not find policy directory: policy.d" spams the logs

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in devstack:
  Invalid
Status in heat:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Won't Fix
Status in oslo.policy:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Invalid

Bug description:
  This hits over 118 million times in 24 hours in Jenkins runs:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQ2FuIG5vdCBmaW5kIHBvbGljeSBkaXJlY3Rvcnk6IHBvbGljeS5kXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyMzg2Njk0MTcxOH0=

  We can probably just change something in devstack to avoid this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1421863/+subscriptions



[Yahoo-eng-team] [Bug 995287] Re: Support using translations in devstack

2017-06-14 Thread Sean Dague
This bug was last updated over 180 days ago, as devstack is a fast moving 
project
and we'd like to get the tracker down to currently actionable bugs, this is 
getting
marked as Invalid. If the issue still exists, please feel free to reopen it.


** Changed in: devstack
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/995287

Title:
  Support using translations in devstack

Status in Cinder:
  Fix Released
Status in devstack:
  Invalid
Status in Glance:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Fix Released

Bug description:
  We go to the trouble to install translations catalogs, but then we
  don't properly use them. We really need to fix that. There is a great
  example of how to use in-tree messages can be found in the sphinx
  source:

  
https://bitbucket.org/birkenfeld/sphinx/src/5d4cd2cca317/sphinx/locale/__init__.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/995287/+subscriptions



[Yahoo-eng-team] [Bug 1265057] Re: gate/check-grenade-dsvm: Horizon front page not functioning!

2017-06-14 Thread Sean Dague
This devstack bug was last updated over 180 days ago, as devstack
is a fast moving project and we'd like to get the tracker down to
currently actionable bugs, this is getting marked as Invalid. If the
issue still exists, please feel free to reopen it.

** Changed in: devstack
   Status: Fix Committed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1265057

Title:
  gate/check-grenade-dsvm: Horizon front page not functioning!

Status in devstack:
  Invalid
Status in grenade:
  Invalid
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Core Infrastructure:
  Incomplete

Bug description:
  Grenade dsvm fails with horizon :

  2013-12-30 16:15:32.294 | + [[ 
,g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-sch,horizon,mysql,rabbit,sysstat,s-proxy,s-account,s-container,s-object,cinder,c-api,c-vol,c-sch,n-cond,tempest,c-bak,n-net,
 =~ ,horizon, ]]
  2013-12-30 16:15:32.294 | + return 0
  2013-12-30 16:15:32.295 | + curl http://127.0.0.1
  2013-12-30 16:15:32.295 | + grep -q '<h3>Log In</h3>'
  2013-12-30 16:15:32.337 | + die 39 'Horizon front page not functioning!'
  2013-12-30 16:15:32.338 | + local exitcode=1
  2013-12-30 16:15:32.339 | + set +o xtrace
  2013-12-30 16:15:32.339 | [Call Trace]
  2013-12-30 16:15:32.377 | /opt/stack/new/devstack/exercises/horizon.sh:39:die
  2013-12-30 16:15:32.379 | [ERROR] 
/opt/stack/new/devstack/exercises/horizon.sh:39 Horizon front page not 
functioning!

  example review where it's failing
  https://review.openstack.org/#/c/60707

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1265057/+subscriptions



[Yahoo-eng-team] [Bug 1263122] Re: services should not restart on SIGHUP when running in the foreground

2017-06-14 Thread Sean Dague
This devstack bug was last updated over 180 days ago, as devstack
is a fast moving project and we'd like to get the tracker down to
currently actionable bugs, this is getting marked as Invalid. If the
issue still exists, please feel free to reopen it.

** Changed in: devstack
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1263122

Title:
  services should not restart on SIGHUP when running in the foreground

Status in devstack:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Fix Released

Bug description:
  As reported on the mailing list (http://lists.openstack.org/pipermail
  /openstack-dev/2013-December/022796.html) the behavior of the
  ServiceLauncher has changed in a way that breaks devstack.

  The work for blueprint https://blueprints.launchpad.net/oslo/+spec
  /cfg-reload-config-files introduced changes to have the process
  "restart" on SIGHUP, but screen under devstack also uses that signal
  to kill the services. That means lots of developers are having to
  manually kill services to avoid having multiple copies running.

  To fix the problem we should only restart on SIGHUP when not running
  in the foreground. There are a few suggestions for detecting
  foreground operation on http://stackoverflow.com/questions/2425005
  /how-do-i-know-if-an-c-programs-executable-is-run-in-foreground-or-
  background
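
  The foreground check suggested above can be sketched as follows; the
  helper names are illustrative, not the actual oslo/nova code:

```python
import os
import signal
import sys

def running_in_foreground():
    # A foreground process's process group matches the controlling
    # terminal's foreground process group.  With no controlling
    # terminal (e.g. a daemonized service) tcgetpgrp() raises OSError,
    # and a detached stdin raises ValueError.
    try:
        return os.getpgrp() == os.tcgetpgrp(sys.stdin.fileno())
    except (OSError, ValueError):
        return False

def handle_sighup(signum, frame):
    if running_in_foreground():
        # Under screen (as devstack runs services) SIGHUP means
        # "terminate", so do not restart.
        sys.exit(1)
    # Otherwise: re-read config files and restart workers here.

if hasattr(signal, "SIGHUP"):
    signal.signal(signal.SIGHUP, handle_sighup)
```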

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1263122/+subscriptions



[Yahoo-eng-team] [Bug 1696111] Re: Keystone confuses users when creating a trust when there's a roles name conflict

2017-06-14 Thread Kristi Nikolla
Also affects python-keystoneclient, as it only supports names. [0]
Agree that the correct solution is to allow ids also.

0. https://github.com/openstack/python-
keystoneclient/blob/71af540c81ecb933d912ef5ecde128afcc0deeeb/keystoneclient/v3/contrib/trusts.py#L41

** Also affects: python-keystoneclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1696111

Title:
  Keystone confuses users when creating a trust when there's a roles
  name conflict

Status in OpenStack Identity (keystone):
  Triaged
Status in python-keystoneclient:
  New
Status in python-openstackclient:
  New

Bug description:
  Due to code [1] Keystone produces a confusing message when:

  * We're using python-openstackclient
  * We're creating a trust with a role name that exists in more that one domain.

  "role %s is not defined" suggests that there isn't a role like that.
  What actually happens, Keystone cannot decide which role is the user's
  choice.

  python-openstackclient automatically converts role ids to role names
  when sending a POST request, so specifying roles using an id doesn't
  help at all.


  [1]
  
https://github.com/openstack/keystone/blob/03319d1/keystone/trust/controllers.py#L90-L94
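
  A hedged sketch of the suggested fix (accept role IDs as well as
  names); `role_api` and its methods are stand-ins, not Keystone's real
  interfaces:

```python
def resolve_trust_role(role_api, role_ref):
    # IDs are globally unique, so resolve them directly; names may
    # collide across domains, in which case we fail loudly instead of
    # claiming the role "is not defined".
    if 'id' in role_ref:
        return role_api.get_role(role_ref['id'])
    matches = role_api.list_roles(name=role_ref['name'])
    if not matches:
        raise ValueError("role %s is not defined" % role_ref['name'])
    if len(matches) > 1:
        raise ValueError("role name %r is ambiguous across domains; "
                         "specify the role by id" % role_ref['name'])
    return matches[0]
```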

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1696111/+subscriptions



[Yahoo-eng-team] [Bug 1696893] Re: Arping code should detect missing interface and return early

2017-06-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/472500
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=739daaa9555e734b94bef89f4fe1c5159c8fd435
Submitter: Jenkins
Branch: master

commit 739daaa9555e734b94bef89f4fe1c5159c8fd435
Author: Brian Haley 
Date:   Thu Jun 8 22:40:19 2017 -0400

Stop arping when interface gets deleted

It is possible for an interface to be added to a
router, have arping get started for it in a thread,
then immediately remove the interface, causing
arping errors in the l3-agent log.  This concurrent
deletion should be handled more gracefully by
just logging a warning on the first detection and
returning early.

Change-Id: I615b60561b3b7f8c950d5f412fb4cdf7877b98f7
Closes-bug: #1696893


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1696893

Title:
  Arping code should detect missing interface and return early

Status in neutron:
  Fix Released

Bug description:
  Since arping is spawned in a thread, if a router is added and quickly
  removed from a network, the arping calls could generate errors on the
  second or third loop, for example:

  Exit code: 2; Stdin: ; Stdout: ; Stderr: arping: Device qr-1e77796c-2b
  not available.

  This can happen in this scenario:

  T(0): internal_network_added()
  port plugged
  arping started in thread

  T(1): internal_network_removed()
  port unplugged

  T(2): arping fails
  T(3): arping fails

  An example is in:

  http://logs.openstack.org/02/469602/6/check/gate-tempest-dsvm-neutron-
  linuxbridge-ubuntu-
  xenial/7a048d9/logs/screen-q-l3.txt.gz#_Jun_09_00_23_55_483118

  Just search for qr-1e77796c-2b in the logs before this time.

  The arping code should detect this on a failure, log a warning and
  return early as there is no way to stop the thread once it is spawned.
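
  The proposed early-return can be sketched like this; `device_exists`
  and `arping` are hypothetical stand-ins for the l3-agent's real
  helpers:

```python
import logging

LOG = logging.getLogger(__name__)

def send_gratuitous_arps(ip_wrapper, device_name, ip_address, count=3):
    # Runs in a thread that cannot be cancelled from outside, so check
    # for concurrent interface deletion before every attempt.
    for _ in range(count):
        if not ip_wrapper.device_exists(device_name):
            LOG.warning("Device %s was deleted while sending ARPs; "
                        "stopping early.", device_name)
            return
        ip_wrapper.arping(device_name, ip_address)
```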

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1696893/+subscriptions



[Yahoo-eng-team] [Bug 1697733] Re: LANG is explicitly set to C, but some services (like glance) want to read files with utf8 characters

2017-06-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/473919
Committed: 
https://git.openstack.org/cgit/openstack-dev/devstack/commit/?id=d095e97624467fb1e0fa38955b45960d3cbc5651
Submitter: Jenkins
Branch: master

commit d095e97624467fb1e0fa38955b45960d3cbc5651
Author: Clark Boylan 
Date:   Tue Jun 13 10:18:36 2017 -0700

Support unicode via en_US.utf8

Because C.utf8 is not everywhere and is sometimes called C.UTF-8 (just
to confuse people) use en_US.utf8 which is in most places. This isn't
language/region agnostic but gives a consistent unicode aware locale to
devstack.

Change-Id: I67a8c77a5041e9cee740adf0e02fdc9b183c5bc4
fixes-bug: 1697733


** Changed in: devstack
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1697733

Title:
  LANG is explicitly set to C, but some services (like glance) want to
  read files with utf8 characters

Status in devstack:
  Fix Released
Status in Glance:
  New

Bug description:
  glance-manage throws errors under the python3.5 job because it
  attempts to open and read a file with utf8 characters in it, but
  devstack has hard-set LANG=C.

  ERROR glance.db.sqlalchemy.metadata [-] Failed to parse json file
  /etc/glance/metadefs/compute-trust.json while populating metadata due
  to: 'ascii' codec can't decode byte 0xc2 in position 90: ordinal not
  in range(128): UnicodeDecodeError: 'ascii' codec can't decode byte
  0xc2 in position 90: ordinal not in range(128)

  This only happens under python3 because python3 open() will refer to
  locale.getpreferredencoding() by default if no encoding is explicitly
  set. Python2 doesn't have this problem because strings and open
  operate on binary not encoded things.

  Devstack sets LANG=C at:
  https://git.openstack.org/cgit/openstack-dev/devstack/tree/stack.sh#n30

  Example job run where this happens:
  
http://logs.openstack.org/10/367810/41/check/gate-tempest-dsvm-py35-ubuntu-xenial/89634cf/logs/devstacklog.txt.gz#_2017-06-13_14_25_15_262

  One thing that makes this tricky is that open() under python2 doesn't
  take an encoding while open() under python3 does. Easy enough to
  handle this in code but maybe we should try and get six to address
  this?
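
  One way to handle it in code regardless of the locale devstack
  exports: open the file with an explicit encoding via io.open, which
  takes an encoding argument on both Python 2 and 3 (a sketch, not the
  change that actually landed):

```python
import io
import json

def load_metadef(path):
    # Decoding explicitly as UTF-8 makes the result independent of
    # locale.getpreferredencoding(), which is ASCII when LANG=C.
    with io.open(path, encoding='utf-8') as f:
        return json.load(f)
```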

  Also worth noting that the infra test nodes should have a locale of
  C.utf8 or C.UTF-8, but these locales are apparently (not yet)
  universal.

  Considering that devstack wants to enforce and ascii locale the
  simplest option here may just be to remove the utf8 characters from
  the metadata json files. '®' and '–' are the two characters which can
  be replaced with '(R)' and '-'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1697733/+subscriptions



[Yahoo-eng-team] [Bug 1632540] Re: l3-agent print the ERROR log in l3 log file continuously, finally fill file space, leading to crash the l3-agent service

2017-06-14 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: In Progress => Fix Released

** Tags removed: neutron-proactive-backport-potential

** Tags removed: newton-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1632540

Title:
  l3-agent print the ERROR log in l3 log file continuously, finally fill
  file space, leading to crash the l3-agent service

Status in neutron:
  Fix Released

Bug description:
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
[req-5d499217-05b6-4a56-a3b7-5681adb53d6c - d2b95803757641b6bc55f6309c12c6e9 - 
- -] Failed to process compatible router 'da82aeb4-07a4-45ca-ae7a-570aec69df29'
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 501, in 
_process_router_update
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
self._process_router_if_compatible(router)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 438, in 
_process_router_if_compatible
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
self._process_added_router(router)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 446, in 
_process_added_router
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent ri.process(self)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/dvr_local_router.py", line 
488, in process
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
super(DvrLocalRouter, self).process(agent)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/dvr_router_base.py", line 
30, in process
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
super(DvrRouterBase, self).process(agent)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 386, in 
process
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent super(HaRouter, 
self).process(agent)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 385, in call
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent self.logger(e)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
self.force_reraise()
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
six.reraise(self.type_, self.value, self.tb)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 382, in call
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent return func(*args, 
**kwargs)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 964, 
in process
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
self.process_address_scope()
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/dvr_edge_router.py", line 
239, in process_address_scope
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
self.snat_iptables_manager, ports_scopemark)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent self.gen.next()
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py", 
line 461, in defer_apply
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent raise 
n_exc.IpTablesApplyException(msg)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
IpTablesApplyException: Failure applying iptables rules

  For example, this ERROR message is printed to the l3-agent log file
  continuously until the underlying problem is solved, eventually
  filling the log file space.

  This happens because the failed update is resynced into the queue
  when it is not handled successfully; the greenthread in the l3-agent
  then processes the update periodically, printing the error each time.
  Since the l3-agent has already dealt with this update, the update
  should be deleted from the queue.

  To reproduce, we could disable the l3-agent on a network node in HA
  mode, then create a router, then restart the 

[Yahoo-eng-team] [Bug 1693917] Re: test_user_account_lockout failed in gate because authN attempts took longer than usual

2017-06-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/473488
Committed: 
https://git.openstack.org/cgit/openstack-dev/devstack/commit/?id=dcd4b64c990660f9b11b999a3b70e17c36323c4c
Submitter: Jenkins
Branch: master

commit dcd4b64c990660f9b11b999a3b70e17c36323c4c
Author: Lance Bragstad 
Date:   Mon Jun 12 14:41:42 2017 +

Increase KEYSTONE_LOCKOUT_DURATION to 10

Transient failures were being reported because the current lockout
period for users was too short. While this does increase the
run time of IdentityV3UsersTest.test_user_account_lockout, it
allows for more flexibility if there is network latency or some
other factor that causes the lockout to expire before the
next authentication.

Change-Id: I61bc39bbc35ac414b4a72929a90845956c99eb1a
Closes-Bug: 1693917


** Changed in: devstack
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1693917

Title:
  test_user_account_lockout failed in gate because authN attempts took
  longer than usual

Status in devstack:
  Fix Released
Status in OpenStack Identity (keystone):
  Invalid
Status in tempest:
  New

Bug description:
  http://logs.openstack.org/99/460399/2/check/gate-tempest-dsvm-neutron-
  full-ubuntu-xenial/f7eb334/logs/testr_results.html.gz

  ft1.2: 
tempest.api.identity.v3.test_users.IdentityV3UsersTest.test_user_account_lockout[id-a7ad8bbf-2cff-4520-8c1d-96332e151658]_StringException:
 pythonlogging:'': {{{
  2017-05-24 21:05:50,147 32293 INFO [tempest.lib.common.rest_client] 
Request (IdentityV3UsersTest:test_user_account_lockout): 201 POST 
https://15.184.66.148/identity/v3/auth/tokens
  2017-05-24 21:05:50,147 32293 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Accept': 'application/json', 'Content-Type': 
'application/json'}
  Body: 
  Response - Headers: {u'vary': 'X-Auth-Token', 'content-location': 
'https://15.184.66.148/identity/v3/auth/tokens', u'connection': 'close', 
u'content-length': '344', u'x-openstack-request-id': 
'req-11e47cfa-6b25-47d4-977a-94f3e6d95665', 'status': '201', u'server': 
'Apache/2.4.18 (Ubuntu)', u'date': 'Wed, 24 May 2017 21:05:50 GMT', 
u'x-subject-token': '', u'content-type': 'application/json'}
  Body: {"token": {"issued_at": "2017-05-24T21:05:50.00Z", 
"audit_ids": ["GQR0RZcDSWC_bslZSUzpGg"], "methods": ["password"], "expires_at": 
"2017-05-24T22:05:50.00Z", "user": {"password_expires_at": null, "domain": 
{"id": "default", "name": "Default"}, "id": "415e3f0e215f44a586bdf62e7ea6e02d", 
"name": "tempest-IdentityV3UsersTest-343470382"}}}
  2017-05-24 21:05:50,237 32293 INFO [tempest.lib.common.rest_client] 
Request (IdentityV3UsersTest:test_user_account_lockout): 401 POST 
https://15.184.66.148/identity/v3/auth/tokens
  2017-05-24 21:05:50,238 32293 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Accept': 'application/json', 'Content-Type': 
'application/json'}
  Body: 
  Response - Headers: {u'vary': 'X-Auth-Token', 'content-location': 
'https://15.184.66.148/identity/v3/auth/tokens', u'connection': 'close', 
u'content-length': '114', u'x-openstack-request-id': 
'req-0a45b9b8-4c7c-409c-9c8d-f6b2661c234f', 'status': '401', u'server': 
'Apache/2.4.18 (Ubuntu)', u'date': 'Wed, 24 May 2017 21:05:50 GMT', 
u'content-type': 'application/json', u'www-authenticate': 'Keystone 
uri="https://15.184.66.148/identity"'}
  Body: {"error": {"message": "The request you have made requires 
authentication.", "code": 401, "title": "Unauthorized"}}
  2017-05-24 21:05:54,909 32293 INFO [tempest.lib.common.rest_client] 
Request (IdentityV3UsersTest:test_user_account_lockout): 401 POST 
https://15.184.66.148/identity/v3/auth/tokens
  2017-05-24 21:05:54,910 32293 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Accept': 'application/json', 'Content-Type': 
'application/json'}
  Body: 
  Response - Headers: {u'vary': 'X-Auth-Token', 'content-location': 
'https://15.184.66.148/identity/v3/auth/tokens', u'connection': 'close', 
u'content-length': '114', u'x-openstack-request-id': 
'req-3dbd065f-826b-497d-86bc-2bc78a0de997', 'status': '401', u'server': 
'Apache/2.4.18 (Ubuntu)', u'date': 'Wed, 24 May 2017 21:05:50 GMT', 
u'content-type': 'application/json', u'www-authenticate': 'Keystone 
uri="https://15.184.66.148/identity"'}
  Body: {"error": {"message": "The request you have made requires 
authentication.", "code": 401, "title": "Unauthorized"}}
  2017-05-24 21:05:55,106 32293 INFO [tempest.lib.common.rest_client] 
Request (IdentityV3UsersTest:test_user_account_lockout): 201 POST 
https://15.184.66.148/identity/v3/auth/tokens
  2017-05-24 21:05:55,106 32293 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Accept': 'application/json', 'Content-Type': 
'application/json'}

[Yahoo-eng-team] [Bug 1697960] [NEW] enable_new_services=False should only auto-disable nova-compute services

2017-06-14 Thread Matt Riedemann
Public bug reported:

This came up in the mailing list:

http://lists.openstack.org/pipermail/openstack-
operators/2017-June/013765.html

And was agreed that it can be considered a bug that the
enable_new_services config option should only auto-disable new nova-
compute services:

http://lists.openstack.org/pipermail/openstack-
operators/2017-June/013771.html

It should not auto-disable things like nova-conductor, nova-scheduler or
nova-osapi_compute, since (1) it doesn't make sense to disable those and
(2) it just means the operator/admin has to enable them later to fix the
nova service-list output.

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1697960

Title:
  enable_new_services=False should only auto-disable nova-compute
  services

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  This came up in the mailing list:

  http://lists.openstack.org/pipermail/openstack-
  operators/2017-June/013765.html

  And was agreed that it can be considered a bug that the
  enable_new_services config option should only auto-disable new nova-
  compute services:

  http://lists.openstack.org/pipermail/openstack-
  operators/2017-June/013771.html

  It should not auto-disable things like nova-conductor, nova-scheduler
  or nova-osapi_compute, since (1) it doesn't make sense to disable
  those and (2) it just means the operator/admin has to enable them
  later to fix the nova service-list output.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1697960/+subscriptions



[Yahoo-eng-team] [Bug 1217874] Re: cinder shows wrong device name for attached volume

2017-06-14 Thread Sean Dague
This is not fixable for libvirt.

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1217874

Title:
  cinder shows wrong device name for attached volume

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Using Grizzly on CentOS 6.4, attached volumes are shown with wrong
  device names in Cinder. E.g. I have this volume which I attach;
  Cinder tells me it is attached as /dev/vdb:
   | attachments  | [{u'device': u'/dev/vdb', u'server_id': 
u'3318b373-8792-4109-b65c-ee138dcd525f', u'id': 
u'c424719d-1031-4e29-8086-8a62b385d99a', u'volume_id': 
u'c424719d-1031-4e29-8086-8a62b385d99a'}] |

   but on the machine (ubuntu 12.04 cloud image) the device is /dev/vdc:

  root@u15:~# cat /proc/partitions 
  major minor  #blocks  name

   25302097152 vda
   25312088450 vda1
   253   16400 vdb

  -> attaching volume to instance:
  root@u15:~# cat /proc/partitions 
  major minor  #blocks  name

   25302097152 vda
   25312088450 vda1
   253   16400 vdb
   253   321048576 vdc
   253   331047552 vdc1

  Maybe this is not cinder's fault, as the volume seems to be the
  second attached storage device? It could be due to Ubuntu's image, but
  still, the device name is wrong.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1217874/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1246160] Re: shuffle method bring potential security issue

2017-06-14 Thread Sean Dague
There is really very low exposure here

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246160

Title:
  shuffle method bring potential security issue

Status in OpenStack Compute (nova):
  Opinion
Status in OpenStack Security Advisory:
  Invalid

Bug description:
  In nova/utils.py, at line 328, the source code is:

  r.shuffle(password)

  This code uses the shuffle method to generate a random value.
  Standard random number generators should not be used to produce
  randomness used for security reasons. For security-sensitive
  randomness, a cryptographic randomness generator that provides
  sufficient entropy should be used.
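The concern above is typically addressed with Python's `secrets` module (3.6+), which draws from an OS-provided CSPRNG. A minimal sketch of a safer password generator, not nova's actual implementation; the function name and symbol groups are illustrative:

```python
import secrets
import string

def generate_password(length=16,
                      symbolgroups=(string.ascii_letters, string.digits)):
    """Build a password from a CSPRNG instead of the default Mersenne
    Twister; secrets.choice draws os.urandom-backed randomness."""
    pool = "".join(symbolgroups)
    # Guarantee at least one character from each group, then fill the rest.
    chars = [secrets.choice(group) for group in symbolgroups]
    chars += [secrets.choice(pool) for _ in range(length - len(chars))]
    # SystemRandom().shuffle avoids the predictable random.shuffle.
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)
```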

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1246160/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1211016] Re: List availability zones fails in a cell setup

2017-06-14 Thread Sean Dague
** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1211016

Title:
  List availability zones fails in a cell setup

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Parent cell is not aware of the compute nodes and aggregates in child
  cells. For this reason nova api running in the parent cell doesn’t
  list availability zones.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1211016/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1696830] Re: nova-placement-api default config files is too strict

2017-06-14 Thread Corey Bryant
** Also affects: snap-neutron
   Importance: Undecided
   Status: New

** Also affects: snap-keystone
   Importance: Undecided
   Status: New

** Also affects: snap-nova-hypervisor
   Importance: Undecided
   Status: New

** Also affects: snap-glance
   Importance: Undecided
   Status: New

** Changed in: snap-nova-hypervisor
 Assignee: (unassigned) => Corey Bryant (corey.bryant)

** Changed in: snap-keystone
 Assignee: (unassigned) => Corey Bryant (corey.bryant)

** Changed in: snap-glance
 Assignee: (unassigned) => Corey Bryant (corey.bryant)

** Changed in: snap-neutron
 Assignee: (unassigned) => Corey Bryant (corey.bryant)

** Changed in: snap-neutron
   Importance: Undecided => Critical

** Changed in: snap-keystone
   Importance: Undecided => Critical

** Changed in: snap-nova
   Importance: Undecided => Critical

** Changed in: snap-nova-hypervisor
   Importance: Undecided => Critical

** Changed in: snap-glance
   Importance: Undecided => Critical

** Changed in: snap-keystone
   Importance: Critical => High

** Changed in: snap-glance
   Importance: Critical => High

** Changed in: snap-neutron
   Importance: Critical => High

** Changed in: snap-nova-hypervisor
   Importance: Critical => High

** Changed in: snap-nova
   Importance: Critical => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1696830

Title:
  nova-placement-api default config files is too strict

Status in OpenStack Compute (nova):
  Confirmed
Status in oslo.config:
  In Progress
Status in Glance Snap:
  New
Status in Keystone Snap:
  New
Status in Neutron Snap:
  New
Status in Nova Snap:
  In Progress
Status in Nova Hypervisor Snap:
  New

Bug description:
  If nova.conf doesn't exist in the typical location of
  /etc/nova/nova.conf and OS_PLACEMENT_CONFIG_DIR isn't set, nova-
  placement-api's wsgi application will fail. In our case with the
  OpenStack snap, we have two possible paths we may pick nova.conf up
  from, based on what --config-file specifies. I think the right answer
  here is to be a bit more flexible and not set the default config file
  if its path doesn't exist.
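The suggested fix can be sketched as: only fall back to the conventional path when it actually exists. The env var and default path are as described above; the function itself is illustrative, not the placement wsgi code:

```python
import os

def placement_config_files(env=os.environ):
    """Pick config files for the placement WSGI app without failing
    when /etc/nova/nova.conf is absent (sketch of the proposed fix)."""
    conf_dir = env.get("OS_PLACEMENT_CONFIG_DIR")
    if conf_dir:
        return [os.path.join(conf_dir, "nova.conf")]
    default = "/etc/nova/nova.conf"
    # Be flexible: don't force a default that doesn't exist, so the
    # config loader can fall back to its own search path instead.
    return [default] if os.path.exists(default) else []
```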

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1696830/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1693679] Re: Stopped instance's disk sizes are not calculated for disk_available_least

2017-06-14 Thread Matt Riedemann
** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova/newton
   Status: New => In Progress

** Changed in: nova/ocata
   Status: New => In Progress

** Changed in: nova/newton
   Importance: Undecided => Medium

** Changed in: nova/ocata
   Importance: Undecided => Medium

** Changed in: nova/newton
 Assignee: (unassigned) => Takashi NATSUME (natsume-takashi)

** Changed in: nova/ocata
 Assignee: (unassigned) => Takashi NATSUME (natsume-takashi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1693679

Title:
  Stopped instance's disk sizes are not calculated for
  disk_available_least

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  In Progress
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  Description
  ===
  disk_available_least is the free disk size information of a hypervisor.
  This is calculated by the following formula:

  disk_available_least = <free disk size> - <over-committed disk size>

  But stopped instances' virtual disk sizes are not counted now,
  so disk_available_least will be larger than the actual free disk size.
  As a result, instances will be scheduled beyond the actual free disk size if 
stopped instances are on a host.

  I think that this is a bug.
  Because stopped instances are on a host unlike shelved instances.

  Steps to reproduce
  ==
  1. Call the hypervisor show API for any hypervisor,
     and check the value of disk_available_least.
  2. Create an instance with a qcow2 image on the hypervisor from step 1.
  3. Wait for over 1 minute.
  4. Call the hypervisor show API,
     and check that disk_available_least is smaller than step 1's value.
  5. Call the Stop Server API for the instance.
  6. Wait until the instance's state changes to STOPPED.
  7. Wait for over 1 minute.
  8. Call the hypervisor show API,
     and check the value of disk_available_least.

  Expected result
  ===
  The disk_available_least value in step 8 is the same as step 4's value,
  because the stopped instance is still on the host.

  Actual result
  =
  The disk_available_least value in step 8 is bigger than step 4's value.

  Environment
  ===
  * I used latest devstack.
  * I used libvirt + kvm.
  * I used qcow2 image.

  Logs & Configs
  ==
  I think that this bug affects for all settings.

  When was this bug made?
  ===
  Following patch made this bug:
  https://review.openstack.org/#/c/105127/

  Stopped instances' disk sizes were calculated until the above patch
  merged in the Juno cycle.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1693679/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1697932] [NEW] Nova doesn't show instances in error state when a marker specified

2017-06-14 Thread Vladislav Kuzmin
Public bug reported:

Description
===
When we use pagination, nova doesn't show instances in the "Error" state.
But if we do just `nova list`, it shows all instances.

Steps to reproduce
==
* Create one instance in Error state
* Create two instances in Active state

Let's show all instances
$ nova list
+--------------------------------------+-------+--------+------------+-------------+--------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                       |
+--------------------------------------+-------+--------+------------+-------------+--------------------------------+
| 197a0316-a156-47b7-8e2b-3e915f8010bc | inst1 | ERROR  | -          | NOSTATE     |                                |
| e8d6b5cd-c8a0-4836-95f0-6af249bd9a4c | inst2 | ACTIVE | -          | Running     | public=2001:db8::b, 172.24.4.4 |
| 4cb491a7-79fd-47d4-a9f7-96a5594f940a | inst3 | ACTIVE | -          | Running     | public=2001:db8::c, 172.24.4.2 |
+--------------------------------------+-------+--------+------------+-------------+--------------------------------+
or
$ nova list --marker=e8d6b5cd-c8a0-4836-95f0-6af249bd9a4c --sort display_name:desc
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
$ nova list --marker=e8d6b5cd-c8a0-4836-95f0-6af249bd9a4c --sort display_name:asc
+--------------------------------------+-------+--------+------------+-------------+--------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                       |
+--------------------------------------+-------+--------+------------+-------------+--------------------------------+
| 4cb491a7-79fd-47d4-a9f7-96a5594f940a | inst3 | ACTIVE | -          | Running     | public=2001:db8::c, 172.24.4.2 |
+--------------------------------------+-------+--------+------------+-------------+--------------------------------+

Expected result
===
After executing the steps above, when we do
`nova list --marker=e8d6b5cd-c8a0-4836-95f0-6af249bd9a4c --sort display_name:desc`
we should see "inst1".

Actual result
=
After executing the steps above, when we do
`nova list --marker=e8d6b5cd-c8a0-4836-95f0-6af249bd9a4c --sort display_name:desc`
we can't see anything.
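The symptom can be reproduced with a toy paginator. This is an illustration of the reported behavior, not nova's SQL: if ERROR instances are excluded from the sorted window, everything sorted after the marker in descending name order disappears. All names and the helper are hypothetical:

```python
def list_after_marker(instances, marker_id, reverse=False, hide_errors=False):
    """Toy pagination: sort by name, then return rows after the marker."""
    rows = sorted(instances, key=lambda i: i["name"], reverse=reverse)
    if hide_errors:  # simulates the buggy filtering of ERROR instances
        rows = [r for r in rows if r["status"] != "ERROR"]
    ids = [r["id"] for r in rows]
    return rows[ids.index(marker_id) + 1:]

insts = [
    {"id": "1", "name": "inst1", "status": "ERROR"},
    {"id": "2", "name": "inst2", "status": "ACTIVE"},
    {"id": "3", "name": "inst3", "status": "ACTIVE"},
]
```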

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1697932

Title:
  Nova doesn't show instances in error state when a marker specified

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  When we use pagination, nova doesn't show instances in the "Error" state.
  But if we do just `nova list`, it shows all instances.

  Steps to reproduce
  ==
  * Create one instance in Error state
  * Create two instances in Active state

  Let's show all instances
  $ nova list
  +--------------------------------------+-------+--------+------------+-------------+--------------------------------+
  | ID                                   | Name  | Status | Task State | Power State | Networks                       |
  +--------------------------------------+-------+--------+------------+-------------+--------------------------------+
  | 197a0316-a156-47b7-8e2b-3e915f8010bc | inst1 | ERROR  | -          | NOSTATE     |                                |
  | e8d6b5cd-c8a0-4836-95f0-6af249bd9a4c | inst2 | ACTIVE | -          | Running     | public=2001:db8::b, 172.24.4.4 |
  | 4cb491a7-79fd-47d4-a9f7-96a5594f940a | inst3 | ACTIVE | -          | Running     | public=2001:db8::c, 172.24.4.2 |
  +--------------------------------------+-------+--------+------------+-------------+--------------------------------+
  or
  $ nova list --marker=e8d6b5cd-c8a0-4836-95f0-6af249bd9a4c --sort display_name:desc
  +----+------+--------+------------+-------------+----------+
  | ID | Name | Status | Task State | Power State | Networks |
  +----+------+--------+------------+-------------+----------+
  +----+------+--------+------------+-------------+----------+
  $ nova list --marker=e8d6b5cd-c8a0-4836-95f0-6af249bd9a4c --sort display_name:asc
  +--------------------------------------+-------+--------+------------+-------------+--------------------------------+
  | ID                                   | Name  | Status | Task State | Power State | Networks                       |
  +--------------------------------------+-------+--------+------------+-------------+--------------------------------+
  | 4cb491a7-79fd-47d4-a9f7-96a5594f940a | inst3 | ACTIVE | -          | Running     | public=2001:db8::c, 172.24.4.2 |

[Yahoo-eng-team] [Bug 1697937] [NEW] TC shouldn't raise an exception when deleting qdisc if device doesn't exist

2017-06-14 Thread Rodolfo Alonso
Public bug reported:

TC shouldn't raise an exception when deleting qdisc if device doesn't
exist.

When the Linux Bridge agent deletes a port or detects that a port was
deleted, the QoS extension is informed to clean any traffic shaping on
this port by deleting the qdisc. If the port was already deleted, TC
will raise an exception:

ProcessExecutionError: Exit code: 1; Stdin: ; Stdout: ; Stderr: Cannot
find device "tape0f6e79a-0b"

Logs: http://logs.openstack.org/85/473685/3/check/gate-tempest-dsvm-
neutron-scenario-linuxbridge-ubuntu-xenial-
nv/427484d/logs/screen-q-agt.txt.gz?level=WARNING#_Jun_13_07_36_14_943304

In this case, TC should silently catch this exception.

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New


** Tags: qos

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Tags added: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1697937

Title:
  TC shouldn't raise an exception when deleting qdisc if device doesn't
  exist

Status in neutron:
  New

Bug description:
  TC shouldn't raise an exception when deleting qdisc if device doesn't
  exist.

  When the Linux Bridge agent deletes a port or detects that a port was
  deleted, the QoS extension is informed to clean any traffic shaping on
  this port by deleting the qdisc. If the port was already deleted, TC
  will raise an exception:

  ProcessExecutionError: Exit code: 1; Stdin: ; Stdout: ; Stderr: Cannot
  find device "tape0f6e79a-0b"

  Logs: http://logs.openstack.org/85/473685/3/check/gate-tempest-dsvm-
  neutron-scenario-linuxbridge-ubuntu-xenial-
  nv/427484d/logs/screen-q-agt.txt.gz?level=WARNING#_Jun_13_07_36_14_943304

  In this case, TC should silently catch this exception.
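The proposed handling can be sketched like this. The exception class here merely mirrors `neutron.agent.linux.utils.ProcessExecutionError` from the traceback above, and the wrapper function is illustrative, not the actual fix:

```python
# Sketch of the proposed fix: treat "Cannot find device" as a no-op when
# deleting a qdisc -- the goal (no shaping on the device) is already met.

class ProcessExecutionError(RuntimeError):
    """Stand-in mirroring neutron.agent.linux.utils.ProcessExecutionError."""
    def __init__(self, message, returncode):
        super().__init__(message)
        self.returncode = returncode

def delete_qdisc_ignoring_missing(device, run_tc):
    """Delete the root qdisc, ignoring an already-deleted device."""
    try:
        run_tc(["tc", "qdisc", "del", "dev", device, "root"])
    except ProcessExecutionError as exc:
        if "Cannot find device" in str(exc):
            return  # port already gone; nothing to clean up
        raise  # any other TC failure still propagates
```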

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1697937/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1697787] Re: api-ref: PUT /os-services/disable description is misleading

2017-06-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/473997
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=a9ba1be7b74153380088eccfce5a8f071d4fca32
Submitter: Jenkins
Branch:master

commit a9ba1be7b74153380088eccfce5a8f071d4fca32
Author: Matt Riedemann 
Date:   Tue Jun 13 18:39:14 2017 -0400

api-ref: fix misleading description in PUT /os-services/disable

The PUT /os-services/disable API does not actually check the
request body for a reason why the service is being disabled.
That's what PUT /os-services/disable-log-reason is for.

This removes that incorrect part of the API description.

Closes-Bug: #1697787

Change-Id: I7a0bbdad842e5d420085777d4fe2f9e6d3e94360


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1697787

Title:
  api-ref: PUT /os-services/disable description is misleading

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  https://developer.openstack.org/api-ref/compute/?expanded=disable-
  scheduling-for-a-compute-service-detail#disable-scheduling-for-a
  -compute-service

  Says: "Disables scheduling for a Compute service with optional
  logging."

  The "with optional logging" is wrong, that's not supported, and that's
  why we have the "PUT /os-services/disable-log-reason" API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1697787/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1152623] Re: RFC2616 section 9.7 status code vs. nova server delete

2017-06-14 Thread Sean Dague
It's fine that people want to change these, we're just not going to
track them very effectively as bugs.

** Changed in: nova
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1152623

Title:
  RFC2616 section 9.7 status code vs. nova server delete

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  In REST client implementations it is common good practice, when:
  - a request causes both a synchronous and an asynchronous effect, and
  - the synchronous operation has an immediately visible effect, i.e. an 
immediate subsequent request shows the change,
  to emphasize the synchronous behavior in the response status code 
(or to respond in a way which does not distinguish the two cases).

  However, if the HTTP method is DELETE, the rule is the opposite! 
  If the resource at the request URL is not deleted, the service MUST NOT 
respond with 204.

  "
 A successful response SHOULD be 200 (OK) if the response includes an
 entity describing the status, 202 (Accepted) if the action has not
 yet been enacted, or 204 (No Content) if the action has been enacted
 but the response does not include an entity.
  " by RFC2616 section 9.7

  It means that if a DELETE request responded with a 204 status code, I
  MUST get 404 on an immediate subsequent request, unless a concurrent
  operation recreated the resource.
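The RFC 2616 section 9.7 rule quoted above reduces to a small decision, sketched here for illustration (the helper is hypothetical, not nova code):

```python
def delete_status_code(enacted, has_body):
    """Choose a DELETE response status per RFC 2616 section 9.7."""
    if not enacted:
        return 202  # accepted, deletion still pending (nova's async case)
    return 200 if has_body else 204  # enacted: entity, or no content
```

Under this rule, nova's asynchronous delete (the instance is still ACTIVE with task_state "deleting" immediately afterwards) would answer 202, not 204.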

  
  $ nova --debug delete ab0ebda6-2c21-4258-8934-1005b970fee5 ; nova --debug 
show ab0ebda6-2c21-4258-8934-1005b970fee5

  Part of the output in the received order:
  -
  REQ: curl -i 
http://10.34.69.149:8774/v2/89a38fe6d3194864995ab0872905a65e/servers/ab0ebda6-2c21-4258-8934-1005b970fee5
 -X DELETE -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H 
"Accept: application/json" -H "X-Auth-Token: c35f5783528d4131bf100604b2fabd6c"

  send: u'DELETE 
/v2/89a38fe6d3194864995ab0872905a65e/servers/ab0ebda6-2c21-4258-8934-1005b970fee5
 HTTP/1.1\r\nHost: 10.34.69.149:8774\r\nx-auth-project-id: 
admin\r\nx-auth-token: c35f5783528d4131bf100604b2fabd6c\r\naccept-encoding: 
gzip, deflate\r\naccept: application/json\r\nuser-agent: 
python-novaclient\r\n\r\n'
  reply: 'HTTP/1.1 204 No Content\r\n'
  header: Content-Length: 0
  header: X-Compute-Request-Id: req-53e3503a-8d73-4ffc-ba43-4bd5659a9e22
  header: Content-Type: application/json
  header: Date: Sat, 02 Mar 2013 18:26:21 GMT
  RESP:{'date': 'Sat, 02 Mar 2013 18:26:21 GMT', 'status': '204', 
'content-length': '0', 'content-type': 'application/json', 
'x-compute-request-id': 'req-53e3503a-8d73-4ffc-ba43-4bd5659a9e22'} 
  -
  REQ: curl -i 
http://10.34.69.149:8774/v2/89a38fe6d3194864995ab0872905a65e/servers/ab0ebda6-2c21-4258-8934-1005b970fee5
 -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H 
"Accept: application/json" -H "X-Auth-Token: f74d6c7226c14915a26a81b540d43f3b"

  connect: (10.34.69.149, 8774)
  send: u'GET 
/v2/89a38fe6d3194864995ab0872905a65e/servers/ab0ebda6-2c21-4258-8934-1005b970fee5
 HTTP/1.1\r\nHost: 10.34.69.149:8774\r\nx-auth-project-id: 
admin\r\nx-auth-token: f74d6c7226c14915a26a81b540d43f3b\r\naccept-encoding: 
gzip, deflate\r\naccept: application/json\r\nuser-agent: 
python-novaclient\r\n\r\n'
  reply: 'HTTP/1.1 200 OK\r\n'
  header: X-Compute-Request-Id: req-80c97c68-0b44-4650-b027-84a85ee04b86
  header: Content-Type: application/json
  header: Content-Length: 1502
  header: Date: Sat, 02 Mar 2013 18:26:21 GMT
  RESP:{'status': '200', 'content-length': '1502', 'content-location': 
u'http://10.34.69.149:8774/v2/89a38fe6d3194864995ab0872905a65e/servers/ab0ebda6-2c21-4258-8934-1005b970fee5',
 'x-compute-request-id': 'req-80c97c68-0b44-4650-b027-84a85ee04b86', 'date': 
'Sat, 02 Mar 2013 18:26:21 GMT', 'content-type': 'application/json'} {"server": 
{"status": "ACTIVE", "updated": "2013-03-02T18:26:21Z", "hostId": 
"31bdffcdffd5b869b87c9be3cdd700e29c4a08286d6d306622b4815a", 
"OS-EXT-SRV-ATTR:host": "new32.lithium.rhev.lab.eng.brq.redhat.com", 
"addresses": {"novanetwork": [{"version": 4, "addr": "192.168.32.2"}]}, 
"links": [{"href": 
"http://10.34.69.149:8774/v2/89a38fe6d3194864995ab0872905a65e/servers/ab0ebda6-2c21-4258-8934-1005b970fee5;,
 "rel": "self"}, {"href": 
"http://10.34.69.149:8774/89a38fe6d3194864995ab0872905a65e/servers/ab0ebda6-2c21-4258-8934-1005b970fee5;,
 "rel": "bookmark"}], "key_name": null, "image": {"id": 
"12e9c131-aaf4-4f73-9659-ed2da9759cd2", "links": [{"href": "http://10.34.69.149:
 
8774/89a38fe6d3194864995ab0872905a65e/images/12e9c131-aaf4-4f73-9659-ed2da9759cd2",
 "rel": "bookmark"}]}, "OS-EXT-STS:task_state": "deleting", 
"OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": 
"instance-0003", "OS-EXT-SRV-ATTR:hypervisor_hostname": 
"new32.lithium.rhev.lab.eng.brq.redhat.com", "flavor": {"id": 

[Yahoo-eng-team] [Bug 1697564] Re: Failed to resize instance after changing ssh's port

2017-06-14 Thread Sean Dague
Changing the ssh port of nova computes is not supported. The nova-
compute services should not be internet accessible (if so there are many
more issues you might run into). As such the policy of moving well known
ports has no security value.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1697564

Title:
  Failed to resize instance after changing ssh's port

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Description
  ===
  For security reasons, the default sshd port (22) may be changed.
  After it is changed, resizing an instance fails.

  Steps to reproduce
  ==
  * Modify /etc/ssh/sshd_config to set 'Port 22022', and restart sshd;
  * Resize one instance

  Expected result
  ===
  Resize successfully

  Actual result
  =
  Resize fails

  Environment
  ===
  1. Libvirt + KVM
  2. OpenStack Mitaka
  # rpm -qa | grep nova
  openstack-nova-conductor-13.1.2-1.el7.noarch
  openstack-nova-api-13.1.2-1.el7.noarch
  python-nova-13.1.2-1.el7.noarch
  openstack-nova-novncproxy-13.1.2-1.el7.noarch
  openstack-nova-cert-13.1.2-1.el7.noarch
  openstack-nova-scheduler-13.1.2-1.el7.noarch
  python2-novaclient-3.3.2-1.el7.noarch
  openstack-nova-common-13.1.2-1.el7.noarch
  openstack-nova-console-13.1.2-1.el7.noarch

  Logs & Configs
  ==
  2017-06-13 00:46:35.807 14424 ERROR oslo_messaging.rpc.dispatcher 
ResizeError: Resize error: not able to execute ssh command: Unexpected error 
while running command.
  2017-06-13 00:46:35.807 14424 ERROR oslo_messaging.rpc.dispatcher Command: 
ssh -o BatchMode=yes 172.23.30.7 mkdir -p 
/var/lib/nova/instances/67c23674-d6e9-40a2-95f0-5aa521074ff7
  2017-06-13 00:46:35.807 14424 ERROR oslo_messaging.rpc.dispatcher Exit code: 
255
  2017-06-13 00:46:35.807 14424 ERROR oslo_messaging.rpc.dispatcher Stdout: u''
  2017-06-13 00:46:35.807 14424 ERROR oslo_messaging.rpc.dispatcher Stderr: 
u'ssh: connect to host 172.23.30.7 port 22: Connection refused\r\n'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1697564/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1697833] Re: func: neutron.agent.linux.utils.ProcessExecutionError: Exit code: 255; Stdin: ; Stdout: ; Stderr: Unable to create lock file /run/ebtables.lock.

2017-06-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/474063
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=2e7b787f0e60d3707fe4dec7f8acbc90cf62ea29
Submitter: Jenkins
Branch:master

commit 2e7b787f0e60d3707fe4dec7f8acbc90cf62ea29
Author: Kevin Benton 
Date:   Tue Jun 13 22:33:24 2017 -0700

Retry ebtables lock acquisition failures

It seems after the merge of
https://bugs.launchpad.net/ubuntu/+source/ebtables/+bug/1645324
that ebtables can fail to acquire a lock and bail with an error
255. This adds some retry logic to retry it up to 10 times to
work around this issue.

Closes-Bug: #1697833
Change-Id: Ic9dcf4b236a93e8811413c6ce2c4b82602544c6d


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1697833

Title:
  func: neutron.agent.linux.utils.ProcessExecutionError: Exit code: 255;
  Stdin: ; Stdout: ; Stderr: Unable to create lock file
  /run/ebtables.lock.

Status in neutron:
  Fix Released

Bug description:
  http://logs.openstack.org/21/473721/2/gate/gate-neutron-dsvm-
  functional-ubuntu-xenial/788a8a8/testr_results.html.gz

  
neutron.tests.functional.agent.linux.test_linuxbridge_arp_protect.LinuxBridgeARPSpoofTestCase
  test_arp_correct_protection

  traceback-1: {{{
  Traceback (most recent call last):
File 
"neutron/tests/functional/agent/linux/test_linuxbridge_arp_protect.py", line 
44, in _ensure_rules_cleaned
  self.assertEqual([], rules, 'Test leaked ebtables rules')
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = []
  actual= [u'-i test-veth0d943e -j neutronMAC-test-veth0d943e',
   u'-i test-veth0d943e --among-src fa:16:3e:7f:c8:97, -j RETURN ',
   u'-p ARP --arp-ip-src 192.168.0.1 -j ACCEPT ']
  : Test leaked ebtables rules
  }}}

  Traceback (most recent call last):
File "neutron/tests/base.py", line 118, in func
  return f(self, *args, **kwargs)
File 
"neutron/tests/functional/agent/linux/test_linuxbridge_arp_protect.py", line 
62, in test_arp_correct_protection
  self._add_arp_protection(self.source, [self.source.ip])
File 
"neutron/tests/functional/agent/linux/test_linuxbridge_arp_protect.py", line 
53, in _add_arp_protection
  arp_protect.setup_arp_spoofing_protection(name, port_dict)
File "neutron/plugins/ml2/drivers/linuxbridge/agent/arp_protect.py", line 
57, in setup_arp_spoofing_protection
  install_arp_spoofing_protection(vif, addresses, current_rules)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_concurrency/lockutils.py",
 line 271, in inner
  return f(*args, **kwargs)
File "neutron/plugins/ml2/drivers/linuxbridge/agent/arp_protect.py", line 
114, in install_arp_spoofing_protection
  vif_chain, '-p', 'ARP'])
File "neutron/plugins/ml2/drivers/linuxbridge/agent/arp_protect.py", line 
194, in ebtables
  return execute(['ebtables', '--concurrent'] + comm, run_as_root=True)
File "neutron/agent/linux/ip_lib.py", line 900, in execute
  log_fail_as_error=log_fail_as_error, **kwargs)
File "neutron/agent/linux/utils.py", line 151, in execute
  raise ProcessExecutionError(msg, returncode=returncode)
  neutron.agent.linux.utils.ProcessExecutionError: Exit code: 255; Stdin: ; 
Stdout: ; Stderr: Unable to create lock file /run/ebtables.lock.
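The merged fix retries lock-acquisition failures; the idea can be sketched as below. The retry count and delay are illustrative, and the real patch uses neutron's own execute/retry helpers rather than this standalone function:

```python
import time

def run_with_ebtables_retry(execute, command, attempts=10, delay=1.0):
    """Retry ebtables when it exits failing to take /run/ebtables.lock."""
    for attempt in range(attempts):
        try:
            return execute(command)
        except RuntimeError as exc:
            # Only the lock-file failure is transient; give up on the
            # last attempt or on any other error.
            if "ebtables.lock" not in str(exc) or attempt == attempts - 1:
                raise
            time.sleep(delay)
```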

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1697833/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1697877] Re: make rt track live-migration

2017-06-14 Thread Sean Dague
This is really a much deeper issue that can't be handled just as a bug.

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1697877

Title:
  make rt track live-migration

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  When a live-migration is in progress, the resource tracker on the
  destination doesn't take it into account, which means resources on the
  destination won't be occupied by the live-migration.

  Steps to reproduce
  ==
  1. nova hypervisor-show HyperName
  2. nova  live-migration  --block-migrate 15ef4dc6-0b6d-4ce0-8ffe-6e8d838639be 
HostName
  3. nova hypervisor-show HyperName (when this live-migration(step 2) is still 
running, but after CONF.update_resources_interval seconds)

  Expected result
  ===
  free_ram_mb_new == free_ram_mb_old - instance.flavor.memory_mb

  Actual result
  =
  free_ram_mb_new == free_ram_mb_old
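The expectation above can be restated as a one-line computation. The names are hypothetical and simply encode the report, not the resource tracker's actual code:

```python
def expected_free_ram_mb(free_ram_mb_old, incoming_migrations):
    """What the destination resource tracker should report while
    incoming live-migrations are in flight: their memory is already
    committed to the destination host."""
    return free_ram_mb_old - sum(m["memory_mb"] for m in incoming_migrations)
```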

  Environment
  ===
  $rpm -qa | grep nova
  python-novaclient-2.30.1-1.el7.noarch
  openstack-nova-conductor-12.0.0-1.el7.centos.noarch
  python-nova-12.0.0-1.el7.centos.noarch
  openstack-nova-scheduler-12.0.0-1.el7.centos.noarch
  openstack-nova-novncproxy-12.0.0-1.el7.centos.noarch
  openstack-nova-api-12.0.0-1.el7.centos.noarch
  openstack-nova-common-12.0.0-1.el7.centos.noarch
  openstack-nova-cert-12.0.0-1.el7.centos.noarch
  openstack-nova-console-12.0.0-1.el7.centos.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1697877/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1639293] Re: Cinder encrypted vol connection info include full nova internal class name

2017-06-14 Thread Lee Yarwood
** Also affects: os-brick
   Importance: Undecided
   Status: New

** Changed in: os-brick
 Assignee: (unassigned) => Lee Yarwood (lyarwood)

** No longer affects: nova

** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: tempest
 Assignee: (unassigned) => Lee Yarwood (lyarwood)

** Changed in: os-brick
   Status: New => Confirmed

** Changed in: tempest
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1639293

Title:
  Cinder encrypted vol connection info include full nova internal class
  name

Status in Cinder:
  New
Status in os-brick:
  Confirmed
Status in tempest:
  Confirmed

Bug description:
  When making an API call to Cinder to get the volume encryption
  metadata:

  2016-10-25 05:30:47.048 6699 DEBUG cinderclient.v2.client [req-
  7fe1959f-36f6-4dd6-bf47-f5e550d59530 3fdb4fc839fc44a9a99006d3fb75ac4d
  c6b14fd5fffa48aa8eb869f30c80e409 - - -] REQ: curl -g -i -X GET
  
http://192.168.1.13:8776/v2/c6b14fd5fffa48aa8eb869f30c80e409/volumes/9439e922-1051-4d83-87c7-172689ac29da/encryption
  -H "User-Agent: python-cinderclient" -H "Accept: application/json" -H
  "X-Auth-Token: {SHA1}4ff393589c57548e39cca7b5ed99d8a42d1ac7fa"
  _http_log_request /usr/lib/python2.7/site-
  packages/keystoneauth1/session.py:337

  The reply from cinder includes a fully qualified name of a nova
  private class:

  2016-10-25 05:30:47.100 6699 DEBUG cinderclient.v2.client 
[req-7fe1959f-36f6-4dd6-bf47-f5e550d59530 3fdb4fc839fc44a9a99006d3fb75ac4d 
c6b14fd5fffa48aa8eb869f30c80e409 - - -] RESP: [200] X-Compute-Request-Id: 
req-4e32d4e4-3c02-4fdf-8179-7c9825f446e3 Content-Type: application/json 
Content-Length: 209 X-Openstack-Request-Id: 
req-4e32d4e4-3c02-4fdf-8179-7c9825f446e3 Date: Tue, 25 Oct 2016 09:30:47 GMT 
Connection: keep-alive 
  RESP BODY: {"cipher": "aes-xts-plain64", "encryption_key_id": 
"----", "provider": 
"nova.volume.encryptors.cryptsetup.CryptsetupEncryptor", "key_size": 256, 
"control_location": "front-end"}
   _http_log_response 
/usr/lib/python2.7/site-packages/keystoneauth1/session.py:366

  
  This is very bad for a number of reasons:

   - If nova renames its classes, existing encryption breaks because the class 
names no longer match what cinder is sending
   - It allows out of tree extensions to nova for different encryption impls, 
which consume Nova private data structures in method calls. This is against 
Nova policy - all such other extension points have been deprecated, then removed.
   - If nova wants to implement encryption in a different way (eg by delegating 
to QEMU), then the concept of an encryptor class does not even apply.

  
  This is actually even worse than Cinder merely passing class names across. 
Cinder in fact exposes this in its public REST API to tenant users, letting 
tenants specify arbitrary encryptor classname for nova to use:

  http://docs.openstack.org/mitaka/config-reference/block-storage
  /volume-encryption.html

  $ cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512 \
--control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor

  
  The idea of having the tenant user specify arbitrary nova private class names 
needs to be removed entirely. Instead we should have an enum of encryption 
*formats*. Any given format may be implemented by Nova in a variety of ways. 
Nova will look at the format and decide which encryptor class to use (if any), 
or decide how to configure QEMU natively to use that format.

  For back compat we can't drop use of class names immediately, so we'll
  need a deprecation period.

  In Ocata:

   - Cinder and Nova should allow an encryption format enum to be used
  in the 'provider' field instead of a class name. The format would be
  one of

   'plain'  - corresponds to CryptsetupEncryptor
   'luks'   - corresponds to LuksEncryptor

 This would be the preferred approach going forward

   - Nova should issue a warning if it receives a 'provider' class name
  that does not correspond to an existing in-tree encryptor class

   - Cinder should re-write class names to the format enum for the
  built-in classes - out of tree classnames should be left alone.


  In Pike

   - Cinder should continue re-writing class names to enums for in-tree
  classes, but reject out of tree class names with fatal error

   - The cinder v3 should have a microversion added to indicate the
  point at which 'provider' will be strictly validated against the
  'enum'.

   - Nova should raise an error if it receives a 'provider' class name
  that does not correspond to an existing in-tree encryptor class

  
  In Qxxx

   - Nova will stop accepting class names from cinder entirely - cinder
  should exclusively be reporting the format enum to Nova, rewriting
  legacy data if needed.
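
The rewrite Cinder is asked to perform could look like the following minimal sketch. This is purely illustrative (the dictionary and function names are invented, not Cinder's actual code), assuming the 'plain'/'luks' enum values proposed above:

```python
# Illustrative sketch only: map legacy in-tree encryptor class names to
# the proposed format enum, pass enum values through unchanged, and
# reject anything else (the strict Pike-era behaviour described above).
LEGACY_PROVIDER_TO_FORMAT = {
    "nova.volume.encryptors.cryptsetup.CryptsetupEncryptor": "plain",
    "nova.volume.encryptors.luks.LuksEncryptor": "luks",
}

VALID_FORMATS = set(LEGACY_PROVIDER_TO_FORMAT.values())


def normalize_provider(provider):
    """Return the format enum for a 'provider' value."""
    if provider in VALID_FORMATS:
        return provider  # already an enum, preferred going forward
    try:
        return LEGACY_PROVIDER_TO_FORMAT[provider]  # rewrite legacy name
    except KeyError:
        # out-of-tree class name: fatal error in the strict phase
        raise ValueError("unsupported encryption provider: %s" % provider)
```

During the deprecation period, the `KeyError` branch would log a warning instead of raising, as described for Ocata.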

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1639293/+subscriptions

[Yahoo-eng-team] [Bug 1697925] [NEW] as a tenant user not able to create a subnetpool associating with a shared address scope

2017-06-14 Thread Ashok kumaran B
Public bug reported:

1. Created a "--shared" address scope from the admin tenant.
2. From a user tenant, as a normal user, I try to create a subnetpool 
associated with the address scope created in step 1.

This operation fails:

root@controller01:~# neutron address-scope-show 
cc21d772-a9b4-41c4-b48d-16c19bb828c6
+------------+--------------------------------------+
| Field      | Value                                |
+------------+--------------------------------------+
| id         | cc21d772-a9b4-41c4-b48d-16c19bb828c6 |
| ip_version | 4                                    |
| name       | test-addr-scope                      |
| project_id | f3e6f186351c441ea49c74e24360289c     |
| shared     | True                                 |
| tenant_id  | f3e6f186351c441ea49c74e24360289c     |
+------------+--------------------------------------+

root@controller01:~# neutron subnetpool-create  --pool-prefix 172.80.0.0/16 
--default-prefixlen 24 selfservice --address-scope test-addr-scope
Illegal subnetpool association: subnetpool  cannot be associated with address 
scope cc21d772-a9b4-41c4-b48d-16c19bb828c6.
Neutron server returns request_ids: ['req-2be55467-9e18-47a6-a9c2-9c3fe39f7190']

But as far as I understand, we do support shared address scopes.
Please let me know if you want more info.
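
A purely hypothetical sketch of the class of access check that could produce this symptom (this is not Neutron's actual validation code; function and key names are invented): if the association check only compares tenant IDs and ignores the shared flag, cross-tenant association of a shared address scope would be rejected exactly as reported.

```python
# Hypothetical illustration only: an association check that honours the
# 'shared' flag. A buggy variant that compared tenant_id alone would
# reject the shared scope shown above for any non-owner tenant.
def can_associate(address_scope, tenant_id):
    # Owner tenants may always associate; other tenants only if shared.
    return address_scope["shared"] or address_scope["tenant_id"] == tenant_id


scope = {"tenant_id": "f3e6f186351c441ea49c74e24360289c", "shared": True}
assert can_associate(scope, "another-tenant-id")  # expected to succeed
```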

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1697925

Title:
  as a tenant user not able to create a subnetpool associating with a
  shared address scope

Status in neutron:
  New

Bug description:
  1. Created a "--shared" address scope from the admin tenant.
  2. From a user tenant, as a normal user, I try to create a subnetpool 
associated with the address scope created in step 1.

  This operation fails:

  root@controller01:~# neutron address-scope-show 
cc21d772-a9b4-41c4-b48d-16c19bb828c6
  +------------+--------------------------------------+
  | Field      | Value                                |
  +------------+--------------------------------------+
  | id         | cc21d772-a9b4-41c4-b48d-16c19bb828c6 |
  | ip_version | 4                                    |
  | name       | test-addr-scope                      |
  | project_id | f3e6f186351c441ea49c74e24360289c     |
  | shared     | True                                 |
  | tenant_id  | f3e6f186351c441ea49c74e24360289c     |
  +------------+--------------------------------------+

  root@controller01:~# neutron subnetpool-create  --pool-prefix 172.80.0.0/16 
--default-prefixlen 24 selfservice --address-scope test-addr-scope
  Illegal subnetpool association: subnetpool  cannot be associated with address 
scope cc21d772-a9b4-41c4-b48d-16c19bb828c6.
  Neutron server returns request_ids: 
['req-2be55467-9e18-47a6-a9c2-9c3fe39f7190']

  But as far as I understand, we do support shared address scopes.
  Please let me know if you want more info.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1697925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1697926] [NEW] linuxbridge ensure_bridge report errror

2017-06-14 Thread wlfightup
Public bug reported:

2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py",
 line 453, in daemon_loop
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent sync = 
self.process_network_devices(device_info)
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 153, in wrapper
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent return f(*args, **kwargs)
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py",
 line 210, in process_network_devices
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent resync_a = 
self.treat_devices_added_updated(devices_added_updated)
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 153, in wrapper
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent return f(*args, **kwargs)
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py",
 line 227, in treat_devices_added_updated
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
self._process_device_if_exists(device_details)
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py",
 line 254, in _process_device_if_exists
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent device, 
device_details['device_owner'])
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 504, in plug_interface
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent tap_name, device_owner)
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 453, in add_tap_interface
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent return False
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent self.force_reraise()
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent six.reraise(self.type_, 
self.value, self.tb)
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 445, in add_tap_interface
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent tap_device_name, 
device_owner)
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 476, in _add_tap_interface
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent segmentation_id):
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 429, in ensure_physical_in_bridge
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent physical_interface)
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 245, in ensure_flat_bridge
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent gateway):
2017-06-14 05:00:13.747 16708 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 388, in ensure_bridge
2017-06-14 05:00:13.747 16708 ERROR 

[Yahoo-eng-team] [Bug 1615715] Re: ip6tables-restore fails in neutron_openvswitch_agent

2017-06-14 Thread Vladislav Belogrudov
** Project changed: neutron => kolla-ansible

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615715

Title:
  ip6tables-restore fails in neutron_openvswitch_agent

Status in kolla-ansible:
  Confirmed

Bug description:
  2016-08-22 11:54:58.697 1 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-baa3335b-0013-42dd-856a-64a5c2557a01', 
'ip6tables-restore', '-n'] create_process 
/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/utils.py:83
  2016-08-22 11:54:58.970 1 ERROR neutron.agent.linux.utils [-] Exit code: 2; 
Stdin: # Generated by iptables_manager

  Usage: ip6tables-restore [-b] [-c] [-v] [-t] [-h]
 [ --binary ]
 [ --counters ]
 [ --verbose ]
 [ --test ]
 [ --help ]
 [ --noflush ]
[ --modprobe=<command>]

  It seems iptables-1.4.21-16.el7.x86_64 does not support the '-n'
  option used in the command above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla-ansible/+bug/1615715/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492254] Re: neutron should not try to bind port on compute with hypervisor_type ironic

2017-06-14 Thread Kevin Benton
+1 to @Sam's suggestion. Neutron is responsible for binding all ports,
including Ironic ones. Add an ML2 driver to bind your ports.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492254

Title:
  neutron should not try to bind port on compute with hypervisor_type
  ironic

Status in neutron:
  Won't Fix

Bug description:
  Neutron tries to bind port on compute where instance is launched.  It
  doesn't make sense when hypervisor_type is ironic, since VM  does not
  live on hypervisor in this case.  Furthermore it leads to failed
  provisioning of baremetal node, when neutron is not configured on
  ironic compute node.

  Setup:
  node-1: controller
  node-2: ironic-compute without neutron

  neutron-server.log: http://paste.openstack.org/show/445388/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492254/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615715] Re: ip6tables-restore fails

2017-06-14 Thread Vladislav Belogrudov
this bug still happens. The neutron_openvswitch_agent container tries to
run ip6tables-restore and fails because the ip6table_filter module is not
loaded. The module is normally loaded by the command itself, but inside
the container we don't provide /lib/modules ... With a proper host mount
the error is gone.

** Changed in: neutron
   Status: Invalid => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Vladislav Belogrudov (vlad-belogrudov)

** Summary changed:

- ip6tables-restore fails 
+ ip6tables-restore fails in neutron_openvswitch_agent

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615715

Title:
  ip6tables-restore fails in neutron_openvswitch_agent

Status in neutron:
  Confirmed

Bug description:
  2016-08-22 11:54:58.697 1 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-baa3335b-0013-42dd-856a-64a5c2557a01', 
'ip6tables-restore', '-n'] create_process 
/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/utils.py:83
  2016-08-22 11:54:58.970 1 ERROR neutron.agent.linux.utils [-] Exit code: 2; 
Stdin: # Generated by iptables_manager

  Usage: ip6tables-restore [-b] [-c] [-v] [-t] [-h]
 [ --binary ]
 [ --counters ]
 [ --verbose ]
 [ --test ]
 [ --help ]
 [ --noflush ]
[ --modprobe=<command>]

  It seems iptables-1.4.21-16.el7.x86_64 does not support the '-n'
  option used in the command above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1615715/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501808] Re: Enabling soft-deletes opens a DOS on compute hosts

2017-06-14 Thread Sean Dague
This is really a design decision; it's not clear that changing the
expected behavior here would provide a good experience for operators.
We punt on various classes of potential DOS (like API rate limiting).

** Changed in: nova
   Status: In Progress => Won't Fix

** Changed in: nova
   Importance: High => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501808

Title:
  Enabling soft-deletes opens a DOS on compute hosts

Status in OpenStack Compute (nova):
  Won't Fix
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  If the user sets reclaim_instance_interval to anything other than 0,
  then when a user requests an instance delete, it will instead be soft
  deleted. Soft delete explicitly releases the user's quota, but does
  not release the instance's resources until period task
  _reclaim_queued_deletes runs with a period of
  reclaim_instance_interval seconds.

  A malicious authenticated user can repeatedly create and delete
  instances without limit, which will consume resources on the host
  without consuming their quota. If done quickly enough, this will
  exhaust host resources.

  I'm not entirely sure what to suggest in remediation, as this seems to
  be a deliberate design. The most obvious fix would be to not release
  quota until the instance is reaped, but that would be a significant
  change in behaviour.

  This is very similar to https://bugs.launchpad.net/bugs/cve/2015-3280
  , except that we do it deliberately.
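
  The divergence described above — quota released at soft-delete time,
  resources held until the periodic reclaim task runs — can be shown
  with a toy model (illustrative only, not Nova code; class and method
  names are invented):

```python
# Toy model of the soft-delete window: quota is freed immediately on
# delete, but host resources are only freed by the periodic reclaim
# task, so resource usage can grow without ever exceeding quota.
class Host:
    def __init__(self, quota_limit):
        self.quota_limit = quota_limit
        self.quota_used = 0
        self.resources_held = 0

    def create(self):
        if self.quota_used >= self.quota_limit:
            raise RuntimeError("quota exceeded")
        self.quota_used += 1
        self.resources_held += 1

    def soft_delete(self):
        self.quota_used -= 1       # quota released immediately...

    def reclaim(self):
        self.resources_held = 0    # ...resources freed only when this runs


host = Host(quota_limit=1)
for _ in range(100):               # never exceeds a quota of 1 instance
    host.create()
    host.soft_delete()
# 100 instances' worth of resources are now held while quota usage is 0,
# until reclaim() (the periodic task) finally runs.
```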

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1501808/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1675504] Re: openstack_dashboard.usage.quotas.tenant_quota_usages fetches too many quotas and degrades performance

2017-06-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/456416
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=359467b4013bb4f89a6a1309e6eda89459288986
Submitter: Jenkins
Branch: master

commit 359467b4013bb4f89a6a1309e6eda89459288986
Author: Akihiro Motoki 
Date:   Wed Apr 12 18:10:20 2017 +

Retrieve quota and usage only for resources really required

tenant_quota_usage() is used to retrieve quota and usage
to determine if a resource can be created.
However, tenant_quota_usage retrieves quota and usage for
all resources and it can be a performance problem.

This commit allows to load quota and usage only for resources
which are actually required.

Closes-Bug: #1675504
Change-Id: Iab7322a337a451a1a040cc2f4b55cc319b1ffc4c


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1675504

Title:
  openstack_dashboard.usage.quotas.tenant_quota_usages fetches too many
  quotas and degrades performance

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When looking at the keypair dashboard, router dashboard, etc.,
  openstack_dashboard.usage.quotas.tenant_quota_usages also queries
  cinder and other services, and therefore makes many unused and
  unnecessary API calls.

  It slows down panels very heavily.

  I did a quick and dirty test on some dashboards (with a local horizon
  targeted at a production environment's APIs) by modifying the
  tenant_quota_usages function to retrieve only the desired quotas and
  usages for the page I need, and it turns out that rendering a page is
  three to five times faster. Just enormous.

  We really do not need to fetch all usages and quotas when retrieving
  keypairs, etc...

  This function should accept extra parameters to get only the desired
  quotas and usages.

  However this is a huge task because it requires modifying ALL
  dashboards.

  OPENSTACK_HYPERVISOR_FEATURES['enable_quotas'] and
  OPENSTACK_NEUTRON_NETWORK['enable_quotas'] set to False is not a
  solution because it prevents fetching these quotas whatever the
  dashboard.

  The solution is e.g. not fetching network quotas and usages on the
  volumes dashboard or cinder quotas and usages on the keypairs
  dashboard, etc...

  Quick tests show that the performance impact is enormous. e.g.: my
  "routers" page with the current function takes 20 to 30 seconds to
  render (just catastrophic), and only 3 to 5 seconds if I fetch only
  the desired quotas and usages required for this dashboard.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1675504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1655182] Re: keystone-manage mapping_engine tester problems

2017-06-14 Thread Edward Hope-Morley
** Changed in: cloud-archive
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1655182

Title:
  keystone-manage mapping_engine tester problems

Status in Ubuntu Cloud Archive:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystone package in Ubuntu:
  Fix Released

Bug description:
  [Impact]

   * A bug in keystone-manage tool prohibits the use of the
  mapping_engine command for testing federation rules.

   * Users of Keystone Federation will not be able to verify their
  mapping rules before pushing these to production.

   * Not being able to test rules before pushing to production is a
  major operational challenge for our users.

   * The proposed upload fixes this by backporting a fix for this issue
  from upstream stable/ocata.

  [Test Case]

   * Deploy keystone using Juju with this bundle:
 http://pastebin.ubuntu.com/24855409/

   * ssh to keystone unit, grab artifacts and run command:
 - mapping.json: http://pastebin.ubuntu.com/24855419/
 - input.txt: http://pastebin.ubuntu.com/24855420/
 - command:
 'keystone-manage mapping_engine --rules mapping.json --input input.txt'

   * Observe that command provides no output and that a Python Traceback
  is printed in /var/log/keystone/keystone.log

   * Install the proposed package, repeat the above steps and observe
  that the command now outputs its interpretation and effect of the
  rules.

  [Regression Potential]

   * keystone-manage mapping_engine is an operational test tool and is
  solely used by the operator to test their rules.

   * The distributed version of this command in Xenial and Yakkety does
  currently not work at all.

   * The change will make the command work as our users expect it to.

  [Original bug description]
  There are several problems with keystone-manage mapping_engine

  * It aborts with a backtrace because of wrong number of arguments
    passed to the RuleProcessor

  * The --engine-debug option does not work.

  * Error messages related to input data are cryptic and imprecise.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1655182/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1655182] Re: keystone-manage mapping_engine tester problems

2017-06-14 Thread Frode Nordahl
** Description changed:

+ [Impact]
+ 
+  * A bug in keystone-manage tool prohibits the use of the mapping_engine
+ command for testing federation rules.
+ 
+  * Users of Keystone Federation will not be able to verify their mapping
+ rules before pushing these to production.
+ 
+  * Not being able to test rules before pushing to production is a major
+ operational challenge for our users.
+ 
+  * The proposed upload fixes this by backporting a fix for this issue
+ from upstream stable/ocata.
+ 
+ [Test Case]
+ 
+  * Deploy keystone using Juju with this bundle:
+http://pastebin.ubuntu.com/24855409/
+ 
+  * ssh to keystone unit, grab artifacts and run command:
+- mapping.json: http://pastebin.ubuntu.com/24855419/
+- input.txt: http://pastebin.ubuntu.com/24855420/
+- command:
+'keystone-manage mapping_engine --rules mapping.json --input input.txt'
+ 
+  * Observe that command provides no output and that a Python Traceback
+ is printed in /var/log/keystone/keystone.log
+ 
+  * Install the proposed package, repeat the above steps and observe that
+ the command now outputs its interpretation and effect of the rules.
+ 
+ [Regression Potential]
+ 
+  * keystone-manage mapping_engine is a operational test tool and is
+ solely used by the operator to test their rules.
+ 
+  * The distributed version of this command in Xenial and Yakkety does
+ currently not work at all.
+ 
+  * The change will make the command work as our users expect it to.
+ 
+ [Original bug description]
  There are several problems with keystone-manage mapping_engine
- 
+ 
  * It aborts with a backtrace because of wrong number of arguments
-   passed to the RuleProcessor
- 
+   passed to the RuleProcessor
+ 
  * The --engine-debug option does not work.
- 
+ 
  * Error messages related to input data are cryptic and inprecise.

** Tags added: sts-sru-needed

** Also affects: keystone (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Changed in: keystone (Ubuntu)
   Status: New => Fix Released

** Patch added: "keystone-yakkety.debdiff"
   
https://bugs.launchpad.net/cloud-archive/+bug/1655182/+attachment/4895734/+files/keystone-yakkety.debdiff

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1655182

Title:
  keystone-manage mapping_engine tester problems

Status in Ubuntu Cloud Archive:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystone package in Ubuntu:
  Fix Released

Bug description:
  [Impact]

   * A bug in keystone-manage tool prohibits the use of the
  mapping_engine command for testing federation rules.

   * Users of Keystone Federation will not be able to verify their
  mapping rules before pushing these to production.

   * Not being able to test rules before pushing to production is a
  major operational challenge for our users.

   * The proposed upload fixes this by backporting a fix for this issue
  from upstream stable/ocata.

  [Test Case]

   * Deploy keystone using Juju with this bundle:
 http://pastebin.ubuntu.com/24855409/

   * ssh to keystone unit, grab artifacts and run command:
 - mapping.json: http://pastebin.ubuntu.com/24855419/
 - input.txt: http://pastebin.ubuntu.com/24855420/
 - command:
 'keystone-manage mapping_engine --rules mapping.json --input input.txt'

   * Observe that command provides no output and that a Python Traceback
  is printed in /var/log/keystone/keystone.log

   * Install the proposed package, repeat the above steps and observe
  that the command now outputs its interpretation and effect of the
  rules.

  [Regression Potential]

   * keystone-manage mapping_engine is an operational test tool and is
  solely used by the operator to test their rules.

   * The distributed version of this command in Xenial and Yakkety does
  currently not work at all.

   * The change will make the command work as our users expect it to.

  [Original bug description]
  There are several problems with keystone-manage mapping_engine

  * It aborts with a backtrace because of wrong number of arguments
    passed to the RuleProcessor

  * The --engine-debug option does not work.

  * Error messages related to input data are cryptic and imprecise.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1655182/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1677730] Re: keystone-manage mapping_engine is broken: TypeError

2017-06-14 Thread Edward Hope-Morley
** No longer affects: cloud-archive

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1677730

Title:
  keystone-manage mapping_engine is broken: TypeError

Status in OpenStack Identity (keystone):
  Invalid
Status in OpenStack Identity (keystone) newton series:
  In Progress
Status in OpenStack Identity (keystone) ocata series:
  Fix Released

Bug description:
  Running `keystone-manage mapping_engine` (with parameters) is broken:

  2017-03-30 16:09:11.982 13513 CRITICAL keystone [-] TypeError: __init__() 
takes exactly 3 arguments (2 given)
  2017-03-30 16:09:11.982 13513 ERROR keystone Traceback (most recent call 
last):
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/bin/keystone-manage", line 10, in 
  2017-03-30 16:09:11.982 13513 ERROR keystone sys.exit(main())
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/manage.py", line 44, in main
  2017-03-30 16:09:11.982 13513 ERROR keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1270, in main
  2017-03-30 16:09:11.982 13513 ERROR keystone CONF.command.cmd_class.main()
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1143, in main
  2017-03-30 16:09:11.982 13513 ERROR keystone rp = 
mapping_engine.RuleProcessor(rules['rules'])
  2017-03-30 16:09:11.982 13513 ERROR keystone TypeError: __init__() takes 
exactly 3 arguments (2 given)

  Affects mitaka, newton and ocata stable (centos-release).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1677730/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1677730] Re: keystone-manage mapping_engine is broken: TypeError

2017-06-14 Thread Frode Nordahl
Sorry about the noise in this bug, further process is tracked in bug
1655182

** Tags removed: sts sts-sru-needed

** No longer affects: keystone (Ubuntu)

** Patch removed: "keystone-yakkety.debdiff"
   
https://bugs.launchpad.net/keystone/+bug/1677730/+attachment/4895675/+files/keystone-yakkety.debdiff

** Patch removed: "keystone-xenial.debdiff"
   
https://bugs.launchpad.net/keystone/+bug/1677730/+attachment/4895676/+files/keystone-xenial.debdiff

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1677730

Title:
  keystone-manage mapping_engine is broken: TypeError

Status in OpenStack Identity (keystone):
  Invalid
Status in OpenStack Identity (keystone) newton series:
  In Progress
Status in OpenStack Identity (keystone) ocata series:
  Fix Released

Bug description:
  Running `keystone-manage mapping_engine` (with parameters) is broken:

  2017-03-30 16:09:11.982 13513 CRITICAL keystone [-] TypeError: __init__() 
takes exactly 3 arguments (2 given)
  2017-03-30 16:09:11.982 13513 ERROR keystone Traceback (most recent call 
last):
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/bin/keystone-manage", line 10, in 
  2017-03-30 16:09:11.982 13513 ERROR keystone sys.exit(main())
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/manage.py", line 44, in main
  2017-03-30 16:09:11.982 13513 ERROR keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1270, in main
  2017-03-30 16:09:11.982 13513 ERROR keystone CONF.command.cmd_class.main()
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1143, in main
  2017-03-30 16:09:11.982 13513 ERROR keystone rp = 
mapping_engine.RuleProcessor(rules['rules'])
  2017-03-30 16:09:11.982 13513 ERROR keystone TypeError: __init__() takes 
exactly 3 arguments (2 given)

  Affects mitaka, newton and ocata stable (centos-release).
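
  A note on the failure mode: per the "takes exactly 3 arguments (2
  given)" message, RuleProcessor.__init__ expects two arguments besides
  self, while cli.py line 1143 passes only the rules. A minimal sketch of
  that mismatch — RuleProcessor here is a stand-in with the argument
  count implied by the traceback, not keystone's actual class:

```python
# Stand-in class with the signature implied by the traceback; the extra
# first argument (here mapping_id) is assumed, and the rules content is
# purely illustrative.
class RuleProcessor(object):
    def __init__(self, mapping_id, rules):  # 3 arguments counting self
        self.mapping_id = mapping_id
        self.rules = rules

try:
    RuleProcessor([{'local': [], 'remote': []}])  # 2 arguments counting self
    raised = False
except TypeError:
    # Python 2 reports this as: __init__() takes exactly 3 arguments (2 given)
    raised = True
```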

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1677730/+subscriptions



[Yahoo-eng-team] [Bug 1677730] Re: keystone-manage mapping_engine is broken: TypeError

2017-06-14 Thread Edward Hope-Morley
** Changed in: cloud-archive
   Status: New => Fix Released

** Changed in: keystone/ocata
   Status: Invalid => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1677730

Title:
  keystone-manage mapping_engine is broken: TypeError

Status in Ubuntu Cloud Archive:
  Fix Released
Status in OpenStack Identity (keystone):
  Invalid
Status in OpenStack Identity (keystone) newton series:
  In Progress
Status in OpenStack Identity (keystone) ocata series:
  Fix Released
Status in keystone package in Ubuntu:
  Fix Released

Bug description:
  [Impact]

   * A bug in keystone-manage tool prohibits the use of the
  mapping_engine command for testing federation rules.

   * Users of Keystone Federation will not be able to verify their
  mapping rules before pushing these to production.

   * Not being able to test rules before pushing to production is a
  major operational challenge for our users.

   * The proposed upload fixes this by backporting a fix for this issue
  from upstream stable/ocata.

  [Test Case]

   * Deploy keystone using Juju with this bundle:
 http://pastebin.ubuntu.com/24855409/

  
   * ssh to keystone unit, grab artifacts and run command:
 - mapping.json: http://pastebin.ubuntu.com/24855419/
 - input.txt: http://pastebin.ubuntu.com/24855420/
 - command:
 'keystone-manage mapping_engine --rules mapping.json --input input.txt'

   * Observe that the command provides no output and that a Python traceback
  is printed in /var/log/keystone/keystone.log

   * Install the proposed package, repeat the above steps and observe
  that the command now outputs its interpretation and effect of the
  rules.

  [Regression Potential]

   * keystone-manage mapping_engine is an operational test tool and is
  solely used by the operator to test their rules.

   * The distributed version of this command in Xenial and Yakkety
  currently does not work at all.

   * The change will make the command work as our users expect it to.

  [Original bug description]
  Running `keystone-manage mapping_engine` (with parameters) is broken:

  2017-03-30 16:09:11.982 13513 CRITICAL keystone [-] TypeError: __init__() 
takes exactly 3 arguments (2 given)
  2017-03-30 16:09:11.982 13513 ERROR keystone Traceback (most recent call 
last):
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/bin/keystone-manage", line 10, in 
  2017-03-30 16:09:11.982 13513 ERROR keystone sys.exit(main())
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/manage.py", line 44, in main
  2017-03-30 16:09:11.982 13513 ERROR keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1270, in main
  2017-03-30 16:09:11.982 13513 ERROR keystone CONF.command.cmd_class.main()
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1143, in main
  2017-03-30 16:09:11.982 13513 ERROR keystone rp = 
mapping_engine.RuleProcessor(rules['rules'])
  2017-03-30 16:09:11.982 13513 ERROR keystone TypeError: __init__() takes 
exactly 3 arguments (2 given)

  Affects mitaka, newton and ocata stable (centos-release).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1677730/+subscriptions



[Yahoo-eng-team] [Bug 1677730] Re: keystone-manage mapping_engine is broken: TypeError

2017-06-14 Thread Frode Nordahl
** Tags added: sts sts-sru-needed

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1677730

Title:
  keystone-manage mapping_engine is broken: TypeError

Status in Ubuntu Cloud Archive:
  Fix Released
Status in OpenStack Identity (keystone):
  Invalid
Status in OpenStack Identity (keystone) newton series:
  In Progress
Status in OpenStack Identity (keystone) ocata series:
  Fix Released
Status in keystone package in Ubuntu:
  Fix Released

Bug description:
  [Impact]

   * A bug in keystone-manage tool prohibits the use of the
  mapping_engine command for testing federation rules.

   * Users of Keystone Federation will not be able to verify their
  mapping rules before pushing these to production.

   * Not being able to test rules before pushing to production is a
  major operational challenge for our users.

   * The proposed upload fixes this by backporting a fix for this issue
  from upstream stable/ocata.

  [Test Case]

   * Deploy keystone using Juju with this bundle:
 http://pastebin.ubuntu.com/24855409/

  
   * ssh to keystone unit, grab artifacts and run command:
 - mapping.json: http://pastebin.ubuntu.com/24855419/
 - input.txt: http://pastebin.ubuntu.com/24855420/
 - command:
 'keystone-manage mapping_engine --rules mapping.json --input input.txt'

   * Observe that the command provides no output and that a Python traceback
  is printed in /var/log/keystone/keystone.log

   * Install the proposed package, repeat the above steps and observe
  that the command now outputs its interpretation and effect of the
  rules.

  [Regression Potential]

   * keystone-manage mapping_engine is an operational test tool and is
  solely used by the operator to test their rules.

   * The distributed version of this command in Xenial and Yakkety
  currently does not work at all.

   * The change will make the command work as our users expect it to.

  [Original bug description]
  Running `keystone-manage mapping_engine` (with parameters) is broken:

  2017-03-30 16:09:11.982 13513 CRITICAL keystone [-] TypeError: __init__() 
takes exactly 3 arguments (2 given)
  2017-03-30 16:09:11.982 13513 ERROR keystone Traceback (most recent call 
last):
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/bin/keystone-manage", line 10, in 
  2017-03-30 16:09:11.982 13513 ERROR keystone sys.exit(main())
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/manage.py", line 44, in main
  2017-03-30 16:09:11.982 13513 ERROR keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1270, in main
  2017-03-30 16:09:11.982 13513 ERROR keystone CONF.command.cmd_class.main()
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1143, in main
  2017-03-30 16:09:11.982 13513 ERROR keystone rp = 
mapping_engine.RuleProcessor(rules['rules'])
  2017-03-30 16:09:11.982 13513 ERROR keystone TypeError: __init__() takes 
exactly 3 arguments (2 given)

  Affects mitaka, newton and ocata stable (centos-release).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1677730/+subscriptions



[Yahoo-eng-team] [Bug 1677730] Re: keystone-manage mapping_engine is broken: TypeError

2017-06-14 Thread Frode Nordahl
** Description changed:

+ [Impact]
+ 
+  * A bug in keystone-manage tool prohibits the use of the mapping_engine
+ command for testing federation rules.
+ 
+  * Users of Keystone Federation will not be able to verify their mapping
+ rules before pushing these to production.
+ 
+  * Not being able to test rules before pushing to production is a major
+ operational challenge for our users.
+ 
+  * The proposed upload fixes this by backporting a fix for this issue
+ from upstream stable/ocata.
+ 
+ [Test Case]
+ 
+  * Deploy keystone using Juju with this bundle:
+http://pastebin.ubuntu.com/24855409/
+ 
+ 
+  * ssh to keystone unit, grab artifacts and run command:
+- mapping.json: http://pastebin.ubuntu.com/24855419/
+- input.txt: http://pastebin.ubuntu.com/24855420/
+- command:
+'keystone-manage mapping_engine --rules mapping.json --input input.txt'
+ 
+  * Observe that command provides no output and that a Python Traceback
+ is printed in /var/log/keystone/keystone.log
+ 
+  * Install the proposed package, repeat the above steps and observe that
+ the command now outputs its interpretation and effect of the rules.
+ 
+ [Regression Potential]
+ 
+  * keystone-manage mapping_engine is a operational test tool and is
+ solely used by the operator to test their rules.
+ 
+  * The distributed version of this command in Xenial and Yakkety does
+ currently not work at all.
+ 
+  * The change will make the command work as our users expect it to.
+ 
+ [Original bug description]
  Running `keystone-manage mapping_engine` (with parameters) is broken:
  
  2017-03-30 16:09:11.982 13513 CRITICAL keystone [-] TypeError: __init__() 
takes exactly 3 arguments (2 given)
  2017-03-30 16:09:11.982 13513 ERROR keystone Traceback (most recent call 
last):
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/bin/keystone-manage", line 10, in 
  2017-03-30 16:09:11.982 13513 ERROR keystone sys.exit(main())
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/manage.py", line 44, in main
  2017-03-30 16:09:11.982 13513 ERROR keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1270, in main
  2017-03-30 16:09:11.982 13513 ERROR keystone CONF.command.cmd_class.main()
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1143, in main
  2017-03-30 16:09:11.982 13513 ERROR keystone rp = 
mapping_engine.RuleProcessor(rules['rules'])
  2017-03-30 16:09:11.982 13513 ERROR keystone TypeError: __init__() takes 
exactly 3 arguments (2 given)
  
  Affects mitaka, newton and ocata stable (centos-release).

** Changed in: keystone (Ubuntu)
   Status: New => Fix Released

** Patch added: "keystone-yakkety.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1677730/+attachment/4895675/+files/keystone-yakkety.debdiff

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1677730

Title:
  keystone-manage mapping_engine is broken: TypeError

Status in OpenStack Identity (keystone):
  Invalid
Status in OpenStack Identity (keystone) newton series:
  In Progress
Status in OpenStack Identity (keystone) ocata series:
  Invalid
Status in keystone package in Ubuntu:
  Fix Released

Bug description:
  [Impact]

   * A bug in keystone-manage tool prohibits the use of the
  mapping_engine command for testing federation rules.

   * Users of Keystone Federation will not be able to verify their
  mapping rules before pushing these to production.

   * Not being able to test rules before pushing to production is a
  major operational challenge for our users.

   * The proposed upload fixes this by backporting a fix for this issue
  from upstream stable/ocata.

  [Test Case]

   * Deploy keystone using Juju with this bundle:
 http://pastebin.ubuntu.com/24855409/

  
   * ssh to keystone unit, grab artifacts and run command:
 - mapping.json: http://pastebin.ubuntu.com/24855419/
 - input.txt: http://pastebin.ubuntu.com/24855420/
 - command:
 'keystone-manage mapping_engine --rules mapping.json --input input.txt'

   * Observe that the command provides no output and that a Python traceback
  is printed in /var/log/keystone/keystone.log

   * Install the proposed package, repeat the above steps and observe
  that the command now outputs its interpretation and effect of the
  rules.

  [Regression Potential]

   * keystone-manage mapping_engine is an operational test tool and is
  solely used by the operator to test their rules.

   * The distributed version of this command in Xenial and Yakkety
  currently does not work at all.

   * The change will make the command work as our users expect it to.

  [Original bug description]
  

[Yahoo-eng-team] [Bug 1677730] Re: keystone-manage mapping_engine is broken: TypeError

2017-06-14 Thread Frode Nordahl
** Also affects: keystone (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1677730

Title:
  keystone-manage mapping_engine is broken: TypeError

Status in OpenStack Identity (keystone):
  Invalid
Status in OpenStack Identity (keystone) newton series:
  In Progress
Status in OpenStack Identity (keystone) ocata series:
  Invalid
Status in keystone package in Ubuntu:
  New

Bug description:
  Running `keystone-manage mapping_engine` (with parameters) is broken:

  2017-03-30 16:09:11.982 13513 CRITICAL keystone [-] TypeError: __init__() 
takes exactly 3 arguments (2 given)
  2017-03-30 16:09:11.982 13513 ERROR keystone Traceback (most recent call 
last):
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/bin/keystone-manage", line 10, in 
  2017-03-30 16:09:11.982 13513 ERROR keystone sys.exit(main())
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/manage.py", line 44, in main
  2017-03-30 16:09:11.982 13513 ERROR keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1270, in main
  2017-03-30 16:09:11.982 13513 ERROR keystone CONF.command.cmd_class.main()
  2017-03-30 16:09:11.982 13513 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1143, in main
  2017-03-30 16:09:11.982 13513 ERROR keystone rp = 
mapping_engine.RuleProcessor(rules['rules'])
  2017-03-30 16:09:11.982 13513 ERROR keystone TypeError: __init__() takes 
exactly 3 arguments (2 given)

  Affects mitaka, newton and ocata stable (centos-release).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1677730/+subscriptions



[Yahoo-eng-team] [Bug 1697613] Re: libffi-dev is missing in bindep.txt

2017-06-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/473721
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=8e8f721f6ff615bf43f8563e8940e4d46388c0e7
Submitter: Jenkins
Branch: master

commit 8e8f721f6ff615bf43f8563e8940e4d46388c0e7
Author: Akihiro Motoki 
Date:   Tue Jun 13 07:29:39 2017 +

Add libffi-dev to bindep.txt

libffi-dev is required to install cffi and PyNaCL from tarballs.
cffi is installed from requirements.txt, so 'test' profile is not
specified.

We usual use wheel packages when installing python packages,
but tarball is sometimes used, for example, when a new version
is uploaded. I think it is worth adding it to bindep.txt
to avoid accidental gate failure.

Closes-Bug: #1697613
Change-Id: I4800c9f213fa5c8f28c8603e022264e6aa139090


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1697613

Title:
  libffi-dev is missing in bindep.txt

Status in neutron:
  Fix Released

Bug description:
  When installing PyNaCL (required via paramiko by tempest) from
  tarball, libffi-dev is required. When a wheel package is not
  available, the installation fails.

  http://logs.openstack.org/93/473393/1/check/gate-neutron-pep8-ubuntu-
  xenial/7c9aed9/console.html#_2017-06-13_00_45_02_298561

  This also happens for cffi (required by oslo.privsep).
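
  For reference, the resulting bindep.txt entries would look roughly like
  this (the dpkg package name comes from the commit above; the RPM-platform
  spelling is an assumption):

```
# libffi headers, needed to build cffi / PyNaCL from sdist tarballs
libffi-dev [platform:dpkg]
libffi-devel [platform:rpm]
```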

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1697613/+subscriptions



[Yahoo-eng-team] [Bug 1697881] [NEW] populating neutron database to sqlite db type connection

2017-06-14 Thread Shimon Zadok
Public bug reported:


I have this issue with neutron-db-manage when using an SQLite connection.
Ubuntu 16.04 Server with a Newton OpenStack installation.

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
No handlers could be found for logger "oslo_config.cfg"
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
  Running upgrade for neutron ...
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> kilo, kilo_initial
INFO [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225, 
nsxv_vdr_metadata.py
INFO [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151, 
neutrodb_ipam
INFO [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf, 
Initial operations in support of address scopes
INFO [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee, 
Flavor framework
INFO [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f, 
network_rbac
INFO [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773, 
quota_usage
INFO [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592, 
subnetpool hash
INFO [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7, 
add order to dnsnameservers
INFO [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79, 
address scope support in subnetpool
INFO [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051, 
qos db changes
INFO [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136, 
quota_reservations
INFO [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59, 
Add dns_name to Port
INFO [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d, 
Add availability zone
INFO [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a, 
add is_default to subnetpool
/usr/lib/python2.7/dist-packages/alembic/util/messaging.py:69: UserWarning: 
Skipping unsupported ALTER for creation of implicit constraint
  warnings.warn(msg)
INFO [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25, 
Add standard attribute table
INFO [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee, 
Add network availability zone
INFO [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9, 
Add router availability zone
INFO [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4, 
Add ip_version to AddressScope
Traceback (most recent call last):
  File "/usr/bin/neutron-db-manage", line 10, in 
sys.exit(main())
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
750, in main
return_val |= bool(CONF.command.func(config, CONF.command.name))
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
226, in do_upgrade
desc=branch, sql=CONF.command.sql)
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
127, in do_alembic_command
getattr(alembic_command, cmd)(config, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/alembic/command.py", line 174, in 
upgrade
script.run_env()
  File "/usr/lib/python2.7/dist-packages/alembic/script/base.py", line 397, in 
run_env
util.load_python_file(self.dir, 'env.py')
  File "/usr/lib/python2.7/dist-packages/alembic/util/pyfiles.py", line 81, in 
load_python_file
module = load_module_py(module_id, path)
  File "/usr/lib/python2.7/dist-packages/alembic/util/compat.py", line 79, in 
load_module_py
mod = imp.load_source(module_id, path, fp)
  File 
"/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py",
 line 126, in <module>
run_migrations_online()
  File 
"/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py",
 line 120, in run_migrations_online
context.run_migrations()
  File "", line 8, in run_migrations
  File "/usr/lib/python2.7/dist-packages/alembic/runtime/environment.py", line 
797, in run_migrations
self.get_context().run_migrations(**kw)
  File "/usr/lib/python2.7/dist-packages/alembic/runtime/migration.py", line 
312, in run_migrations
step.migration_fn(**kw)
  File 
"/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/versions/mitaka/expand/c3a73f615e4_add_ip_version_to_address_scope.py",
 line 33, in upgrade
sa.Column('ip_version', sa.Integer(), nullable=False))
  File "", line 8, in add_column
  File "", line 3, in add_column
  File "/usr/lib/python2.7/dist-packages/alembic/operations/ops.py", line 1535, 
in add_column
return operations.invoke(op)
  File "/usr/lib/python2.7/dist-packages/alembic/operations/base.py", line 318, 
in invoke
return fn(self, operation)
  File 

[Yahoo-eng-team] [Bug 1697877] [NEW] make rt track live-migration

2017-06-14 Thread czy
Public bug reported:

When a live-migration is in progress, the resource tracker on the
destination doesn't take it into account, which means resources on the
destination won't be claimed by the live-migration.

Steps to reproduce
==
1. nova hypervisor-show HyperName
2. nova  live-migration  --block-migrate 15ef4dc6-0b6d-4ce0-8ffe-6e8d838639be 
HostName
3. nova hypervisor-show HyperName (when this live-migration(step 2) is still 
running, but after CONF.update_resources_interval seconds)

Expected result
===
free_ram_mb_new == free_ram_mb_old - instance.flavor.memory_mb

Actual result
=
free_ram_mb_new == free_ram_mb_old
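
The expected accounting above can be sketched as follows (an illustrative
helper, not nova's resource tracker API):

```python
# Hypothetical destination-side accounting: an in-progress incoming
# live-migration should have its memory claimed like a resident instance.
def free_ram_after_incoming(free_ram_mb, incoming_migrations):
    for mig in incoming_migrations:
        free_ram_mb -= mig['memory_mb']
    return free_ram_mb

result = free_ram_after_incoming(8192, [{'memory_mb': 2048}])
print(result)  # 6144
```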

Environment
===
$rpm -qa | grep nova
python-novaclient-2.30.1-1.el7.noarch
openstack-nova-conductor-12.0.0-1.el7.centos.noarch
python-nova-12.0.0-1.el7.centos.noarch
openstack-nova-scheduler-12.0.0-1.el7.centos.noarch
openstack-nova-novncproxy-12.0.0-1.el7.centos.noarch
openstack-nova-api-12.0.0-1.el7.centos.noarch
openstack-nova-common-12.0.0-1.el7.centos.noarch
openstack-nova-cert-12.0.0-1.el7.centos.noarch
openstack-nova-console-12.0.0-1.el7.centos.noarch

** Affects: nova
 Importance: Undecided
 Assignee: czy (rcmerci)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => czy (rcmerci)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1697877

Title:
  make rt track live-migration

Status in OpenStack Compute (nova):
  New

Bug description:
  When a live-migration is in progress, the resource tracker on the
  destination doesn't take it into account, which means resources on the
  destination won't be claimed by the live-migration.

  Steps to reproduce
  ==
  1. nova hypervisor-show HyperName
  2. nova  live-migration  --block-migrate 15ef4dc6-0b6d-4ce0-8ffe-6e8d838639be 
HostName
  3. nova hypervisor-show HyperName (when this live-migration(step 2) is still 
running, but after CONF.update_resources_interval seconds)

  Expected result
  ===
  free_ram_mb_new == free_ram_mb_old - instance.flavor.memory_mb

  Actual result
  =
  free_ram_mb_new == free_ram_mb_old

  Environment
  ===
  $rpm -qa | grep nova
  python-novaclient-2.30.1-1.el7.noarch
  openstack-nova-conductor-12.0.0-1.el7.centos.noarch
  python-nova-12.0.0-1.el7.centos.noarch
  openstack-nova-scheduler-12.0.0-1.el7.centos.noarch
  openstack-nova-novncproxy-12.0.0-1.el7.centos.noarch
  openstack-nova-api-12.0.0-1.el7.centos.noarch
  openstack-nova-common-12.0.0-1.el7.centos.noarch
  openstack-nova-cert-12.0.0-1.el7.centos.noarch
  openstack-nova-console-12.0.0-1.el7.centos.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1697877/+subscriptions



[Yahoo-eng-team] [Bug 1697873] [NEW] neutron-vpnaas failed to execute tox -e dsvm-functional-sswan

2017-06-14 Thread Li Xiao
Public bug reported:

After installing OpenStack with devstack, I executed
tox -e dsvm-functional-sswan
in the neutron-vpnaas directory. The results are as follows:
==
Totals
==
Ran: 27 tests in 7. sec.
 - Passed: 0
 - Skipped: 7
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 20
Sum of execute time for each test: 5.5638 sec.

Some error info:
 Traceback (most recent call last):
  File "/opt/stack/neutron/neutron/tests/functional/db/test_migrations.py", 
line 136, in setUp
super(_TestModelsMigrations, self).setUp()
  File "/opt/stack/neutron/neutron/tests/unit/testlib_api.py", line 289, in 
setUp
self._setup_database_fixtures()
  File "/opt/stack/neutron/neutron/tests/unit/testlib_api.py", line 326, in 
_setup_database_fixtures
self.fail(msg)
  File "/usr/lib/python2.7/site-packages/unittest2/case.py", line 690, in 
fail
raise self.failureException(msg)
AssertionError: backend 'mysql' unavailable

Traceback (most recent call last):
  File "/opt/stack/neutron/neutron/tests/functional/db/test_migrations.py", 
line 136, in setUp
super(_TestModelsMigrations, self).setUp()
  File "/opt/stack/neutron/neutron/tests/unit/testlib_api.py", line 289, in 
setUp
self._setup_database_fixtures()
  File "/opt/stack/neutron/neutron/tests/unit/testlib_api.py", line 326, in 
_setup_database_fixtures
self.fail(msg)
  File "/usr/lib/python2.7/site-packages/unittest2/case.py", line 690, in 
fail
raise self.failureException(msg)
AssertionError: backend 'postgresql' unavailable

Traceback (most recent call last):
  File 
"neutron_vpnaas/tests/functional/strongswan/test_strongswan_driver.py", line 
142, in setUp
self.router.router_namespace.create()
  File "/opt/stack/neutron/neutron/agent/l3/namespaces.py", line 93, in 
create
ip_wrapper = self.ip_wrapper_root.ensure_namespace(self.name)
  File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 210, in 
ensure_namespace
if not self.netns.exists(name):
  File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 905, in 
exists
run_as_root=cfg.CONF.AGENT.use_helper_for_ns_read)
  File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 109, in 
_execute
log_fail_as_error=log_fail_as_error)
  File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 127, in 
execute
execute_rootwrap_daemon(cmd, process_input, addl_env))
  File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 114, in 
execute_rootwrap_daemon
LOG.error(_LE("Rootwrap error running command: %s"), cmd)
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, 
in __exit__
self.force_reraise()
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, 
in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 111, in 
execute_rootwrap_daemon
return client.execute(cmd, process_input)
  File "/usr/lib/python2.7/site-packages/oslo_rootwrap/client.py", line 
125, in execute
self._ensure_initialized()
  File "/usr/lib/python2.7/site-packages/oslo_rootwrap/client.py", line 
110, in _ensure_initialized
self._initialize()
  File "/usr/lib/python2.7/site-packages/oslo_rootwrap/client.py", line 80, 
in _initialize
(stderr,))
Exception: Failed to spawn rootwrap process.
stderr:
sudo: 
/opt/stack/neutron-vpnaas/.tox/dsvm-functional-sswan/bin/neutron-rootwrap-daemon:
 command not found

Why does this happen? What should I do?

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

** Tags added: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1697873

Title:
  neutron-vpnaas failed to execute tox -e dsvm-functional-sswan

Status in neutron:
  New

Bug description:
  After installing OpenStack with devstack, I executed
  tox -e dsvm-functional-sswan
  in the neutron-vpnaas directory. The results are as follows:
  ==
  Totals
  ==
  Ran: 27 tests in 7. sec.
   - Passed: 0
   - Skipped: 7
   - Expected Fail: 0
   - Unexpected Success: 0
   - Failed: 20
  Sum of execute time for each test: 5.5638 sec.

  Some error info:
   Traceback (most recent call last):
File 
"/opt/stack/neutron/neutron/tests/functional/db/test_migrations.py", line 136, 
in setUp
  super(_TestModelsMigrations, self).setUp()
File "/opt/stack/neutron/neutron/tests/unit/testlib_api.py", line 289, 
in setUp
  self._setup_database_fixtures()
File "/opt/stack/neutron/neutron/tests/unit/testlib_api.py", line 326, 
in _setup_database_fixtures
  self.fail(msg)
File "/usr/lib/python2.7/site-packages/unittest2/case.py", line 690, in 
fail
  raise 

[Yahoo-eng-team] [Bug 1697874] [NEW] neutron-vpnaas failed to execute tox -e dsvm-functional

2017-06-14 Thread Li Xiao
Public bug reported:

After installing OpenStack with devstack, I executed
tox -e dsvm-functional
in the neutron-vpnaas directory. The results are as follows:
==
Totals
==
Ran: 28 tests in 8. sec.
 - Passed: 0
 - Skipped: 7
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 21
Sum of execute time for each test: 13.4665 sec.


Some error info:

Traceback (most recent call last):
  File "/opt/stack/neutron/neutron/tests/functional/db/test_migrations.py", 
line 136, in setUp
super(_TestModelsMigrations, self).setUp()
  File "/opt/stack/neutron/neutron/tests/unit/testlib_api.py", line 289, in 
setUp
self._setup_database_fixtures()
  File "/opt/stack/neutron/neutron/tests/unit/testlib_api.py", line 326, in 
_setup_database_fixtures
self.fail(msg)
  File "/usr/lib/python2.7/site-packages/unittest2/case.py", line 690, in 
fail
raise self.failureException(msg)
AssertionError: backend 'postgresql' unavailable

Traceback (most recent call last):
  File "/opt/stack/neutron/neutron/tests/functional/db/test_migrations.py", line 136, in setUp
    super(_TestModelsMigrations, self).setUp()
  File "/opt/stack/neutron/neutron/tests/unit/testlib_api.py", line 289, in setUp
    self._setup_database_fixtures()
  File "/opt/stack/neutron/neutron/tests/unit/testlib_api.py", line 326, in _setup_database_fixtures
    self.fail(msg)
  File "/usr/lib/python2.7/site-packages/unittest2/case.py", line 690, in fail
    raise self.failureException(msg)
AssertionError: backend 'mysql' unavailable
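The two AssertionErrors above mean the opportunistic database fixtures could not log into a local PostgreSQL/MySQL server. By long-standing OpenStack convention, oslo.db's test provisioning uses an 'openstack_citest' user with password 'openstack_citest'; the statements below are a sketch based on that convention, not taken from this thread, so verify the names against your tox environment before running:

```sql
-- MySQL (run as root): create the user and grants the opportunistic
-- test fixtures conventionally expect.
CREATE USER 'openstack_citest'@'localhost' IDENTIFIED BY 'openstack_citest';
GRANT ALL PRIVILEGES ON *.* TO 'openstack_citest'@'localhost';

-- PostgreSQL equivalent (run via psql as the postgres superuser):
CREATE USER openstack_citest WITH CREATEDB LOGIN PASSWORD 'openstack_citest';
```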


 Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/fixtures/fixture.py", line 197, in setUp
    self._setUp()
  File "/opt/stack/neutron/neutron/tests/common/net_helpers.py", line 718, in _setUp
    ovs = ovs_lib.BaseOVS()
  File "/opt/stack/neutron/neutron/agent/common/ovs_lib.py", line 109, in __init__
    self.ovsdb = ovsdb_api.from_config(self)
  File "/opt/stack/neutron/neutron/agent/ovsdb/api.py", line 57, in from_config
    return iface.api_factory(context)
  File "/opt/stack/neutron/neutron/agent/ovsdb/impl_idl.py", line 46, in api_factory
    idl=connection.idl_factory(),
  File "/opt/stack/neutron/neutron/agent/ovsdb/native/connection.py", line 43, in idl_factory
    helper = do_get_schema_helper()
  File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 87, in wrapped_f
    return r.call(f, *args, **kw)
  File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 235, in call
    do = self.iter(result=result, exc_info=exc_info)
  File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 205, in iter
    raise RetryError(fut).reraise()
  File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 284, in reraise
    raise self.last_attempt.result()
  File "/usr/lib/python2.7/site-packages/concurrent/futures/_base.py", line 422, in result
    return self.__get_result()
  File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 238, in call
    result = fn(*args, **kwargs)
  File "/opt/stack/neutron/neutron/agent/ovsdb/native/connection.py", line 41, in do_get_schema_helper
    return idlutils.get_schema_helper(conn, schema_name)
  File "/usr/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 128, in get_schema_helper
    'err': os.strerror(err)})
Exception: Could not retrieve schema from tcp:127.0.0.1:6640: Connection refused

Why does this happen?  What should I do?
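The last traceback says ovsdb-server is not accepting connections on tcp:127.0.0.1:6640, which the native OVSDB driver needs. A quick way to confirm that before rerunning the suite is a plain socket probe; this helper is a hypothetical illustration, not part of neutron:

```python
import socket


def ovsdb_reachable(host="127.0.0.1", port=6640, timeout=1.0):
    """Return True if something is accepting TCP connections at host:port."""
    try:
        # create_connection performs the full TCP handshake, so a refused
        # or timed-out connection means no listener is reachable.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False, ovsdb-server is not listening on that port; devstack typically enables the listener with something like `sudo ovs-vsctl set-manager ptcp:6640:127.0.0.1` (command shown as the usual approach, verify against your setup).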

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

** Tags added: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1697874

Title:
  neutron-vpnaas failed to execute tox -e dsvm-functional

Status in neutron:
  New

Bug description:
  After I installed openstack with devstack,
  Execute in the neutron-vpnaas directory
  Tox -e dsvm-functional
  The results are as follows:
  ==
  Totals
  ==
  Ran: 28 tests in 8. sec.
   - Passed: 0
   - Skipped: 7
   - Expected Fail: 0
   - Unexpected Success: 0
   - Failed: 21
  Sum of execute time for each test: 13.4665 sec.

  
  Some error info:

  Traceback (most recent call last):
    File "/opt/stack/neutron/neutron/tests/functional/db/test_migrations.py", line 136, in setUp
      super(_TestModelsMigrations, self).setUp()
    File "/opt/stack/neutron/neutron/tests/unit/testlib_api.py", line 289, in setUp
      self._setup_database_fixtures()
    File "/opt/stack/neutron/neutron/tests/unit/testlib_api.py", line 326, in _setup_database_fixtures
      self.fail(msg)
    File "/usr/lib/python2.7/site-packages/unittest2/case.py", line 690, in fail
      raise self.failureException(msg)
  AssertionError: backend 'postgresql' unavailable

[Yahoo-eng-team] [Bug 1692767] Re: [devstack] keystone service fails to start after reboot

2017-06-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/468457
Committed: 
https://git.openstack.org/cgit/openstack-dev/devstack/commit/?id=d0db62a476e29355ca08db0237295139c8fce4f6
Submitter: Jenkins
Branch: master

commit d0db62a476e29355ca08db0237295139c8fce4f6
Author: Kirill Zaitsev 
Date:   Fri May 26 19:02:52 2017 +0300

Use systemd-tmpfiles to create /var/run/uwsgi

On Ubuntu the contents of /var/run do not persist between reboots.
Devstack uses /var/run/uwsgi as the home for wsgi sockets. This means
that after rebooting the machine, services that rely on uwsgi fail to
start. Currently this affects keystone.service and placement-api.service.
This patch delegates directory creation to systemd-tmpfiles, which
runs on startup.

Change-Id: I27d168cea93698739ef08ac76c828695a49176c7
Closes-Bug: #1692767
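For reference, a systemd-tmpfiles entry of the shape this commit describes could look like the fragment below; the exact path, mode, and ownership here are assumptions for illustration, not copied from the devstack change:

```
# /etc/tmpfiles.d/uwsgi.conf
# 'd' tells systemd-tmpfiles to (re)create the directory at every boot.
d /var/run/uwsgi 0755 stack stack
```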


** Changed in: devstack
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1692767

Title:
  [devstack] keystone service fails to start after reboot

Status in devstack:
  Fix Released
Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Environment
  OS: CentOS 7.2

  I deployed the devstack environment successfully on a CentOS system.
  After rebooting the system, the keystone service could not be
  started. uwsgi raised the following exception:

  Apr 23 22:38:53 devstack.localdomain systemd[1]: devstack@keystone.service failed.
  Apr 23 22:42:46 devstack.localdomain systemd[1]: Starting Devstack devstack@keystone.service...
  Apr 23 22:42:46 devstack.localdomain devstack@keystone.service[7242]: [uWSGI] getting INI configuration from /etc/keystone/keystone-uwsgi-public.ini
  Apr 23 22:42:46 devstack.localdomain devstack@keystone.service[7242]: open("./python_plugin.so"): No such file or directory [core/utils.c line 3686]
  Apr 23 22:42:46 devstack.localdomain devstack@keystone.service[7242]: !!! UNABLE to load uWSGI plugin: ./python_plugin.so: cannot open shared object file: No such file or directory !!!
  Apr 23 22:42:46 devstack.localdomain devstack@keystone.service[7242]: *** Starting uWSGI 2.0.15 (64bit) on [Sun Apr 23 22:42:46 2017] ***
  Apr 23 22:42:46 devstack.localdomain systemd[1]: devstack@keystone.service: main process exited, code=exited, status=1/FAILURE
  Apr 23 22:42:46 devstack.localdomain systemd[1]: Failed to start Devstack devstack@keystone.service.
  Apr 23 22:42:46 devstack.localdomain systemd[1]: Unit devstack@keystone.service entered failed state.
  Apr 23 22:42:46 devstack.localdomain systemd[1]: devstack@keystone.service failed.

  So I think this is a bug. As a workaround, I created the /var/run/uwsgi
  directory and chowned it to the stack user; then it works.
  sudo mkdir -p /var/run/uwsgi
  sudo chown stack:stack /var/run/uwsgi

  So what do you think about this?

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1692767/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp