[Yahoo-eng-team] [Bug 1815871] [NEW] neutron-server api don't shutdown gracefully

2019-02-14 Thread Jesse
Public bug reported:

When neutron-server is stopped, the API workers shut down immediately,
regardless of whether there are requests in flight, and those ongoing
requests are aborted.

After testing, going through the code, and comparing it with the nova and
cinder code, the cause is a problem in the stop() and wait() methods of
WorkerService in neutron/wsgi.py:

    def wait(self):
        if isinstance(self._server, eventlet.greenthread.GreenThread):
            self._server.wait()

    def stop(self):
        if isinstance(self._server, eventlet.greenthread.GreenThread):
            self._server.kill()
            self._server = None

Check the neutron code above: after kill() in stop(), self._server is
forcibly set to None, which leaves wait() with nothing to do. As a result
the API worker shuts down immediately without waiting.

Nova has the correct logic, see:
https://github.com/openstack/nova/blob/master/nova/wsgi.py#L197
Cinder uses oslo_service.wsgi, which has the same code as nova.
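
As a rough illustration of the graceful pattern (a minimal sketch only,
assuming an eventlet GreenPool serves the requests; attribute names are
illustrative and this is not the actual nova or neutron code): stop()
should only signal shutdown, and wait() should drain in-flight requests
before dropping the reference.

    import eventlet

    class WorkerService(object):
        def __init__(self):
            self._pool = eventlet.GreenPool()   # pool running the requests
            self._server = None                 # greenthread running the wsgi server

        def stop(self):
            if isinstance(self._server, eventlet.greenthread.GreenThread):
                self._pool.resize(0)   # stop scheduling new requests
                self._server.kill()    # stop the accept loop
            # self._server is intentionally NOT cleared here

        def wait(self):
            if isinstance(self._server, eventlet.greenthread.GreenThread):
                self._pool.waitall()   # let in-flight requests finish
            self._server = None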

** Affects: neutron
 Importance: Undecided
 Assignee: Jesse (jesse-5)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815871

Title:
  neutron-server api don't shutdown gracefully

Status in neutron:
  In Progress

Bug description:
  When neutron-server is stopped, the API workers shut down immediately,
  regardless of whether there are requests in flight, and those ongoing
  requests are aborted.

  After testing, going through the code, and comparing it with the nova and
  cinder code, the cause is a problem in the stop() and wait() methods of
  WorkerService in neutron/wsgi.py:

  def wait(self):
      if isinstance(self._server, eventlet.greenthread.GreenThread):
          self._server.wait()

  def stop(self):
      if isinstance(self._server, eventlet.greenthread.GreenThread):
          self._server.kill()
          self._server = None

  Check the neutron code above: after kill() in stop(), self._server is
  forcibly set to None, which leaves wait() with nothing to do. As a result
  the API worker shuts down immediately without waiting.

  Nova has the correct logic, see:
  https://github.com/openstack/nova/blob/master/nova/wsgi.py#L197
  Cinder uses oslo_service.wsgi, which has the same code as nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1815871/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1793389] Re: Upgrade to Ocata: Keystone Intermittent Missing 'options' Key

2018-10-25 Thread Jesse Pretorius
** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1793389

Title:
  Upgrade to Ocata: Keystone Intermittent Missing 'options' Key

Status in OpenStack Identity (keystone):
  New
Status in openstack-ansible:
  Fix Released

Bug description:
  During upgrades of Newton-EOL AIOs to Ocata, Keystone installation
  fails at the "Ensure service tenant" play of the os-keystone_install.

  This occurs using the provided run-upgrade.sh script.

  Keystone logs are thus:

  INFO keystone.common.wsgi [req-11844ac2-f2d5-46b6-986d-05019432f264 - - - - 
-] HEAD http://aio1-keystone-container-14a3e1ad:5000/
  DEBUG keystone.middleware.auth [req-6523488f-be1a-4ba7-a264-6b6b8ca4c936 - - 
- - -] There is either no auth token in the request or the certificate issuer 
is not trusted. No auth context will be set. fill_context 
/openstack/venvs/keystone-15.1.24/lib/python2.7/site-packages/keystone/middleware/auth.py:188
  INFO keystone.common.wsgi [req-6523488f-be1a-4ba7-a264-6b6b8ca4c936 - - - - 
-] POST http://172.29.236.66:35357/v3/auth/tokens
  ERROR keystone.common.wsgi [req-6523488f-be1a-4ba7-a264-6b6b8ca4c936 - - - - 
-] 'options'
  ERROR keystone.common.wsgi Traceback (most recent call last):
  ERROR keystone.common.wsgi   File 
"/openstack/venvs/keystone-15.1.24/lib/python2.7/site-packages/keystone/common/wsgi.py",
 line 228, in __call__
  ERROR keystone.common.wsgi result = method(req, **params)
  ERROR keystone.common.wsgi   File 
"/openstack/venvs/keystone-15.1.24/lib/python2.7/site-packages/keystone/auth/controllers.py",
 line 132, in authenticate_for_token
  ERROR keystone.common.wsgi auth_context['user_id'], method_names_set):
  ERROR keystone.common.wsgi   File 
"/openstack/venvs/keystone-15.1.24/lib/python2.7/site-packages/keystone/auth/core.py",
 line 377, in check_auth_methods_against_rules
  ERROR keystone.common.wsgi mfa_rules = 
user_ref['options'].get(ro.MFA_RULES_OPT.option_name, [])
  ERROR keystone.common.wsgi KeyError: 'options'

  It appears that the sql identity backend ensures that an 'options' key
  exists (.../keystone/identity/backends/sql_schema.py:225), but that code
  is evidently not being hit.
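
  To make the traceback concrete (an illustration of the failure mode only,
  not keystone code; 'multi_factor_auth_rules' stands in for
  ro.MFA_RULES_OPT.option_name), a user record that lacks the 'options' key
  raises on direct indexing, while a defaulted lookup would not:

      # user row as returned after the upgrade, with no 'options' key
      user_ref = {'id': 'abc123', 'name': 'admin'}

      try:
          user_ref['options'].get('multi_factor_auth_rules', [])
      except KeyError:
          print("KeyError: 'options'  # matches the traceback above")

      # a defaulted lookup avoids the error entirely
      mfa_rules = user_ref.get('options', {}).get('multi_factor_auth_rules', [])
      print(mfa_rules)  # []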

  It should be noted that rerunning the install process causes it to be
  successful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1793389/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1670419] Re: placement_database config option help is wrong

2018-02-07 Thread Jesse Pretorius
** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1670419

Title:
  placement_database config option help is wrong

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Fix Committed
Status in OpenStack Compute (nova) ocata series:
  Fix Committed
Status in openstack-ansible:
  In Progress

Bug description:
  The help on the [placement_database] config options is wrong: it
  mentions Ocata (14.0.0), but 14.0.0 is actually Newton; Ocata is 15.0.0:

  "# The *Placement API Database* is a separate database which is used for the 
new
  # placement-api service. In Ocata release (14.0.0) this database is optional:"

  It also has some scary words about configuring it with a separate
  database so you don't have to deal with data migration issues later to
  migrate data from the nova_api database to a separate placement
  database, but the placement_database options are not actually used in
  code. They will be when this blueprint is complete:

  https://blueprints.launchpad.net/nova/+spec/optional-placement-
  database

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1670419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1720322] [NEW] Unnecessary notification when notify_sg_on_port_change

2017-09-29 Thread Jesse
Public bug reported:

In this patch: https://review.openstack.org/#/c/435601
notify_sg_on_port_change was added for any port attribute update. This
causes unnecessary notifications to the neutron l2 agents on every port
update, for example when only the port's name changes.
notify_sg_on_port_change is only needed when the port's security groups or
the port's IP addresses change, and that case is already handled in the
code here:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L1405
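
A hypothetical sketch of the kind of guard the report argues for
(illustrative only, not the actual ml2 plugin code): only fan out a
security-group member notification when the fields that matter to the
l2 agents actually changed.

    def _needs_sg_member_notification(original_port, updated_port):
        # security-group membership changed?
        if (set(original_port.get('security_groups', [])) !=
                set(updated_port.get('security_groups', []))):
            return True
        # fixed IPs changed?
        if original_port.get('fixed_ips') != updated_port.get('fixed_ips'):
            return True
        # anything else (e.g. a name change) needs no l2-agent fanout
        return False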

** Affects: neutron
 Importance: Undecided
 Assignee: Jesse (jesse-5)
 Status: In Progress

** Summary changed:

-  Port updating wrongly notify_security_groups_member_updated
+ Unnecessary notification when notify_sg_on_port_change

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1720322

Title:
  Unnecessary notification when notify_sg_on_port_change

Status in neutron:
  In Progress

Bug description:
  In this patch: https://review.openstack.org/#/c/435601
  notify_sg_on_port_change was added for any port attribute update. This
  causes unnecessary notifications to the neutron l2 agents on every port
  update, for example when only the port's name changes.
  notify_sg_on_port_change is only needed when the port's security groups
  or the port's IP addresses change, and that case is already handled in
  the code here:
  https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L1405

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1720322/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1719769] [NEW] Occasional network interruption with mark=1 in conntrack

2017-09-26 Thread Jesse
Public bug reported:

If a VM port's security group rules are updated frequently while network
traffic is heavy, the OVS security group flows can wrongly set the
conntrack mark to 1 and block the VM's network connectivity.

Take 2 VMs, VM A (192.168.111.234) and VM B (192.168.111.233), where B
allows ping from A, and ping B from A continuously.
There will then be one conntrack entry on VM B's compute host:
icmp 1 29 src=192.168.111.234 dst=192.168.111.233 type=8 code=0 id=29697 
src=192.168.111.233 dst=192.168.111.234 type=0 code=0 id=29697 mark=0 zone=1 
use=2

I simulated this issue because it is hard to reproduce in a normal way.
There is one precondition to notice:
if the SG rules on a port change, the SG flows for that port are recreated.
Although all SG flows for the port are submitted to OvS in one
'ovs-ofctl add-flows' invocation, the flows are actually installed
one by one.

It is hard to reproduce this issue without hacking the code,
so I disabled the security group defer mechanism to simulate it (change the code here:
https://github.com/openstack/neutron/blob/master/neutron/agent/securitygroups_rpc.py#L132)

Then I started neutron-openvswitch-agent with a breakpoint at
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/openvswitch_firewall/firewall.py#L1004

Now we get a mark=1 conntrack entry on VM B's compute host:
icmp 1 29 src=192.168.111.234 dst=192.168.111.233 type=8 code=0 id=29697 
src=192.168.111.233 dst=192.168.111.234 type=0 code=0 id=29697 mark=1 zone=1 
use=1

Even after the port's security group flows are added again later, this
mark=1 conntrack entry is not deleted; it only goes away when it times out.

In our OpenStack production environment we hit this issue and a vital
system lost network connectivity. The cause was that the VM port's
security group rules changed frequently while the VM's network traffic
was heavy.

** Affects: neutron
     Importance: Undecided
 Assignee: Jesse (jesse-5)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1719769

Title:
  Occasional network interruption with mark=1 in conntrack

Status in neutron:
  In Progress

Bug description:
  If a VM port's security group rules are updated frequently while network
  traffic is heavy, the OVS security group flows can wrongly set the
  conntrack mark to 1 and block the VM's network connectivity.

  Take 2 VMs, VM A (192.168.111.234) and VM B (192.168.111.233), where B
  allows ping from A, and ping B from A continuously.
  There will then be one conntrack entry on VM B's compute host:
  icmp 1 29 src=192.168.111.234 dst=192.168.111.233 type=8 code=0 id=29697 
src=192.168.111.233 dst=192.168.111.234 type=0 code=0 id=29697 mark=0 zone=1 
use=2

  I simulated this issue because it is hard to reproduce in a normal way.
  There is one precondition to notice:
  if the SG rules on a port change, the SG flows for that port are recreated.
  Although all SG flows for the port are submitted to OvS in one
  'ovs-ofctl add-flows' invocation, the flows are actually installed
  one by one.

  It is hard to reproduce this issue without hacking the code,
  so I disabled the security group defer mechanism to simulate it (change the code here:
  https://github.com/openstack/neutron/blob/master/neutron/agent/securitygroups_rpc.py#L132)

  Then I started neutron-openvswitch-agent with a breakpoint at
  https://github.com/openstack/neutron/blob/master/neutron/agent/linux/openvswitch_firewall/firewall.py#L1004

  Now we get a mark=1 conntrack entry on VM B's compute host:
  icmp 1 29 src=192.168.111.234 dst=192.168.111.233 type=8 code=0 id=29697 
src=192.168.111.233 dst=192.168.111.234 type=0 code=0 id=29697 mark=1 zone=1 
use=1

  Even after the port's security group flows are added again later, this
  mark=1 conntrack entry is not deleted; it only goes away when it times out.

  In our OpenStack production environment we hit this issue and a vital
  system lost network connectivity. The cause was that the VM port's
  security group rules changed frequently while the VM's network traffic
  was heavy.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1719769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718356] Re: Include default config files in python wheel

2017-09-21 Thread Jesse Pretorius
@Matt I'll be patching both neutron and glance to include more files or
to optimise the implementation. I will be adding more projects as I go
through them - I ended up getting pulled into something else yesterday
before completing this.

** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: keystone
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: keystone
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Also affects: barbican
   Importance: Undecided
   Status: New

** Changed in: barbican
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Also affects: designate
   Importance: Undecided
   Status: New

** Changed in: designate
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Also affects: heat
   Importance: Undecided
   Status: New

** Changed in: heat
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Also affects: ironic
   Importance: Undecided
   Status: New

** Changed in: ironic
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Also affects: octavia
   Importance: Undecided
   Status: New

** Changed in: octavia
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Also affects: magnum
   Importance: Undecided
   Status: New

** Also affects: sahara
   Importance: Undecided
   Status: New

** Also affects: trove
   Importance: Undecided
   Status: New

** Changed in: magnum
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: trove
     Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: sahara
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718356

Title:
  Include default config files in python wheel

Status in Barbican:
  New
Status in Cinder:
  New
Status in Designate:
  New
Status in Glance:
  New
Status in OpenStack Heat:
  New
Status in Ironic:
  New
Status in OpenStack Identity (keystone):
  New
Status in Magnum:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in octavia:
  New
Status in openstack-ansible:
  New
Status in Sahara:
  New
Status in OpenStack DBaaS (Trove):
  New

Bug description:
  The projects which deploy OpenStack from source or using python wheels
  currently have to either carry templates for api-paste, policy and
  rootwrap files or need to source them from git during deployment. This
  results in some rather complex mechanisms which could be radically
  simplified by simply ensuring that all the same files are included in
  the built wheel.

  A precedent for this has already been set in neutron [1] and glance
  [2] through the use of the data_files option in the files section of
  setup.cfg.

  [1] 
https://github.com/openstack/neutron/blob/d3c393ff6b5fbd0bdaabc8ba678d755ebfba08f7/setup.cfg#L24-L39
  [2] 
https://github.com/openstack/glance/blob/02cd5cba70a8465a951cb813a573d390887174b7/setup.cfg#L20-L21

  This bug will be used for a cross-project implementation of patches to
  normalise the implementation across the OpenStack projects. Hopefully
  the result will be a consistent implementation across all the major
  projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1718356/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718356] [NEW] Include default config files in python wheel

2017-09-20 Thread Jesse Pretorius
Public bug reported:

The projects which deploy OpenStack from source or using python wheels
currently have to either carry templates for api-paste, policy and
rootwrap files or need to source them from git during deployment. This
results in some rather complex mechanisms which could be radically
simplified by simply ensuring that all the same files are included in
the built wheel.

A precedent for this has already been set in neutron [1] and glance [2]
through the use of the data_files option in the files section of
setup.cfg.

[1] 
https://github.com/openstack/neutron/blob/d3c393ff6b5fbd0bdaabc8ba678d755ebfba08f7/setup.cfg#L24-L39
[2] 
https://github.com/openstack/glance/blob/02cd5cba70a8465a951cb813a573d390887174b7/setup.cfg#L20-L21
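
As a rough illustration (a sketch only; the paths are examples rather than
a verbatim copy of the neutron or glance entries linked above), the
data_files section used with pbr looks like this:

    [files]
    data_files =
        etc/neutron =
            etc/api-paste.ini
            etc/policy.json
            etc/rootwrap.conf
        etc/neutron/rootwrap.d =
            etc/neutron/rootwrap.d/*.filters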

This bug will be used for a cross-project implementation of patches to
normalise the implementation across the OpenStack projects. Hopefully
the result will be a consistent implementation across all the major
projects.

** Affects: glance
 Importance: Undecided
 Assignee: Jesse Pretorius (jesse-pretorius)
 Status: New

** Affects: neutron
 Importance: Undecided
 Assignee: Jesse Pretorius (jesse-pretorius)
 Status: New

** Affects: nova
 Importance: Undecided
 Assignee: Jesse Pretorius (jesse-pretorius)
 Status: New

** Affects: openstack-ansible
 Importance: Undecided
 Assignee: Jesse Pretorius (jesse-pretorius)
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: glance
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: openstack-ansible
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718356

Title:
  Include default config files in python wheel

Status in Glance:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in openstack-ansible:
  New

Bug description:
  The projects which deploy OpenStack from source or using python wheels
  currently have to either carry templates for api-paste, policy and
  rootwrap files or need to source them from git during deployment. This
  results in some rather complex mechanisms which could be radically
  simplified by simply ensuring that all the same files are included in
  the built wheel.

  A precedent for this has already been set in neutron [1] and glance
  [2] through the use of the data_files option in the files section of
  setup.cfg.

  [1] 
https://github.com/openstack/neutron/blob/d3c393ff6b5fbd0bdaabc8ba678d755ebfba08f7/setup.cfg#L24-L39
  [2] 
https://github.com/openstack/glance/blob/02cd5cba70a8465a951cb813a573d390887174b7/setup.cfg#L20-L21

  This bug will be used for a cross-project implementation of patches to
  normalise the implementation across the OpenStack projects. Hopefully
  the result will be a consistent implementation across all the major
  projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1718356/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1682805] Re: transient switching loop caused by neutron-openvswitch-agent

2017-07-28 Thread Jesse
After rechecking this issue, I find that the transient switching loop may not exist...
The fail_mode of br-int, br-eth0 and br-ex is secure, which means that when
the node reboots or OpenvSwitch restarts there is no NORMAL flow in these
bridges, so no packets can pass through them.
The NORMAL flow in br-int cannot create a switching loop because there is no
NORMAL flow in br-eth0 and br-ex; the NORMAL flow and the drop flow are added
in br-eth0 and then br-ex. It seems the transient switching loop cannot happen.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1682805

Title:
  transient switching loop caused by neutron-openvswitch-agent

Status in neutron:
  Invalid

Bug description:
  If we have the topology below on the network node:

  https://etherpad.openstack.org/p/neutron_transient_switching_loop

  The ports on the switch connected to eth0 and eth1 are set to trunk all VLANs.
  When neutron-openvswitch-agent restarts, it first sets up the br-int bridge via
  self.setup_integration_br(), then sets up br-eth0 and br-ex via
  self.setup_physical_bridges(self.bridge_mappings).

  Before this bug (https://bugs.launchpad.net/neutron/+bug/1383674) was fixed, all
  flows in br-int were cleared when neutron-openvswitch-agent restarted, which
  caused the transient switching loop described below.
  After that bug was fixed, the flows in br-int remain to keep the network
  connected across a neutron-openvswitch-agent restart, but if the network node
  reboots the transient switching loop can still happen, as described below.

  In self.setup_integration_br(), a NORMAL flow in table 0 is added to br-int.
  In self.setup_physical_bridges(self.bridge_mappings), drop flows for packets
  coming from int-br-eth0 and int-br-ex are added to br-int.
  These drop flows cut the switching loop from the switch to br-int.
  But before the drop flows are added to br-int, if a broadcast packet comes in
  from the switch, the packet will loop between the switch and br-int.

  We should add the NORMAL flow in table 0 of br-int only after the drop
  flows have been added.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1682805/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1699082] Re: update port's allowed-address-pair slow

2017-06-20 Thread Jesse
@Brian Thanks, I think this can fix my issue. :)

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1699082

Title:
  update port's allowed-address-pair slow

Status in neutron:
  Fix Released

Bug description:
  Updating a port's allowed-address-pairs becomes much slower as the number
  of allowed-address-pairs increases.
  1 allowed-address-pair:
  # time neutron port-update --allowed-address-pair 
ip_address=10.0.0.18,mac_address=fa:16:3e:a7:bb:cc 
d20c8588-310d-409d-8b2b-96dd36c984b0
  Updated port: d20c8588-310d-409d-8b2b-96dd36c984b0

  real  0m1.995s
  user  0m1.261s
  sys   0m0.083s

  10 allowed-address-pairs:
  # time neutron port-update --allowed-address-pair 
ip_address=10.0.0.18,mac_address=fa:16:3e:a7:bb:cc --allowed-address-pair 
ip_address=10.0.0.19,mac_address=fa:16:3e:a7:bb:dd  --allowed-address-pair 
ip_address=10.0.0.20,mac_address=fa:16:3e:13:1f:57  --allowed-address-pair 
ip_address=10.0.0.48,mac_address=fa:16:3e:03:0d:2d --allowed-address-pair 
ip_address=10.0.0.91,mac_address=fa:16:3e:18:ea:a4 --allowed-address-pair 
ip_address=10.0.0.75,mac_address=fa:16:3e:2a:d1:b5 --allowed-address-pair 
ip_address=10.0.0.44,mac_address=fa:16:3e:32:7f:0e --allowed-address-pair 
ip_address=10.0.0.46,mac_address=fa:16:3e:48:55:1d --allowed-address-pair 
ip_address=10.0.0.12,mac_address=fa:16:3e:4d:ab:bb --allowed-address-pair 
ip_address=10.0.0.13,mac_address=fa:16:3e:5c:67:3f 
d20c8588-310d-409d-8b2b-96dd36c984b0
  Updated port: d20c8588-310d-409d-8b2b-96dd36c984b0

  real  0m8.315s
  user  0m1.554s
  sys   0m0.185s

  The reason is the l3_dvrscheduler_db.subscribe() call in l3_router_plugin.py,
  which makes l3_dvrscheduler_db subscribe to the port_update event.
  Even though I don't use DVR, this notification does nothing useful, yet the
  many checks in its handler waste time.

  The need for a large number of allowed-address-pairs comes from Kubernetes
  integration with Neutron when containers use macvlan mode inside a VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1699082/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1699082] [NEW] update port's allowed-address-pair slow

2017-06-20 Thread Jesse
Public bug reported:

Updating a port's allowed-address-pairs becomes much slower as the number
of allowed-address-pairs increases.
1 allowed-address-pair:
# time neutron port-update --allowed-address-pair 
ip_address=10.0.0.18,mac_address=fa:16:3e:a7:bb:cc 
d20c8588-310d-409d-8b2b-96dd36c984b0
Updated port: d20c8588-310d-409d-8b2b-96dd36c984b0

real    0m1.995s
user    0m1.261s
sys     0m0.083s

10 allowed-address-pairs:
# time neutron port-update --allowed-address-pair 
ip_address=10.0.0.18,mac_address=fa:16:3e:a7:bb:cc --allowed-address-pair 
ip_address=10.0.0.19,mac_address=fa:16:3e:a7:bb:dd  --allowed-address-pair 
ip_address=10.0.0.20,mac_address=fa:16:3e:13:1f:57  --allowed-address-pair 
ip_address=10.0.0.48,mac_address=fa:16:3e:03:0d:2d --allowed-address-pair 
ip_address=10.0.0.91,mac_address=fa:16:3e:18:ea:a4 --allowed-address-pair 
ip_address=10.0.0.75,mac_address=fa:16:3e:2a:d1:b5 --allowed-address-pair 
ip_address=10.0.0.44,mac_address=fa:16:3e:32:7f:0e --allowed-address-pair 
ip_address=10.0.0.46,mac_address=fa:16:3e:48:55:1d --allowed-address-pair 
ip_address=10.0.0.12,mac_address=fa:16:3e:4d:ab:bb --allowed-address-pair 
ip_address=10.0.0.13,mac_address=fa:16:3e:5c:67:3f 
d20c8588-310d-409d-8b2b-96dd36c984b0
Updated port: d20c8588-310d-409d-8b2b-96dd36c984b0

real    0m8.315s
user    0m1.554s
sys     0m0.185s

The reason is the l3_dvrscheduler_db.subscribe() call in l3_router_plugin.py,
which makes l3_dvrscheduler_db subscribe to the port_update event.
Even though I don't use DVR, this notification does nothing useful, yet the
many checks in its handler waste time.

The need for a large number of allowed-address-pairs comes from Kubernetes
integration with Neutron when containers use macvlan mode inside a VM.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1699082

Title:
  update port's allowed-address-pair slow

Status in neutron:
  New

Bug description:
  Updating a port's allowed-address-pairs becomes much slower as the number
  of allowed-address-pairs increases.
  1 allowed-address-pair:
  # time neutron port-update --allowed-address-pair 
ip_address=10.0.0.18,mac_address=fa:16:3e:a7:bb:cc 
d20c8588-310d-409d-8b2b-96dd36c984b0
  Updated port: d20c8588-310d-409d-8b2b-96dd36c984b0

  real  0m1.995s
  user  0m1.261s
  sys   0m0.083s

  10 allowed-address-pairs:
  # time neutron port-update --allowed-address-pair 
ip_address=10.0.0.18,mac_address=fa:16:3e:a7:bb:cc --allowed-address-pair 
ip_address=10.0.0.19,mac_address=fa:16:3e:a7:bb:dd  --allowed-address-pair 
ip_address=10.0.0.20,mac_address=fa:16:3e:13:1f:57  --allowed-address-pair 
ip_address=10.0.0.48,mac_address=fa:16:3e:03:0d:2d --allowed-address-pair 
ip_address=10.0.0.91,mac_address=fa:16:3e:18:ea:a4 --allowed-address-pair 
ip_address=10.0.0.75,mac_address=fa:16:3e:2a:d1:b5 --allowed-address-pair 
ip_address=10.0.0.44,mac_address=fa:16:3e:32:7f:0e --allowed-address-pair 
ip_address=10.0.0.46,mac_address=fa:16:3e:48:55:1d --allowed-address-pair 
ip_address=10.0.0.12,mac_address=fa:16:3e:4d:ab:bb --allowed-address-pair 
ip_address=10.0.0.13,mac_address=fa:16:3e:5c:67:3f 
d20c8588-310d-409d-8b2b-96dd36c984b0
  Updated port: d20c8588-310d-409d-8b2b-96dd36c984b0

  real  0m8.315s
  user  0m1.554s
  sys   0m0.185s

  The reason is the l3_dvrscheduler_db.subscribe() call in l3_router_plugin.py,
  which makes l3_dvrscheduler_db subscribe to the port_update event.
  Even though I don't use DVR, this notification does nothing useful, yet the
  many checks in its handler waste time.

  The need for a large number of allowed-address-pairs comes from Kubernetes
  integration with Neutron when containers use macvlan mode inside a VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1699082/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1698900] [NEW] DB check appears to not be working right

2017-06-19 Thread Jesse Pretorius
Public bug reported:

Using current master of keystone, executing a keystone-manage db_sync
--check seems to always return the RC of 2 regardless of the steps
previously confirmed. This happens until the contract is done, then it
returns 0.

Steps to reproduce:

root@keystone1:/# mysqladmin drop keystone; mysql create keystone

root@keystone1:/# keystone-manage db_sync --check; echo $?
2

root@keystone1:/# keystone-manage db_sync --expand; echo $?
0

root@keystone1:/# keystone-manage db_sync --check; echo $?
2

root@keystone1:/# keystone-manage db_sync --migrate; echo $?
0

root@keystone1:/# keystone-manage db_sync --check; echo $?
2

root@keystone1:/# keystone-manage db_sync --contract; echo $?
0

root@keystone1:/# keystone-manage db_sync --check; echo $?
0

Not getting the right return codes or advice from the check can spell
disaster for automation that uses it, or for humans following the documented
migration process.

** Affects: keystone
 Importance: High
 Status: Confirmed


** Tags: sql

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1698900

Title:
  DB check appears to not be working right

Status in OpenStack Identity (keystone):
  Confirmed

Bug description:
  Using current master of keystone, executing a keystone-manage db_sync
  --check seems to always return the RC of 2 regardless of the steps
  previously confirmed. This happens until the contract is done, then it
  returns 0.

  Steps to reproduce:

  root@keystone1:/# mysqladmin drop keystone; mysql create keystone

  root@keystone1:/# keystone-manage db_sync --check; echo $?
  2

  root@keystone1:/# keystone-manage db_sync --expand; echo $?
  0

  root@keystone1:/# keystone-manage db_sync --check; echo $?
  2

  root@keystone1:/# keystone-manage db_sync --migrate; echo $?
  0

  root@keystone1:/# keystone-manage db_sync --check; echo $?
  2

  root@keystone1:/# keystone-manage db_sync --contract; echo $?
  0

  root@keystone1:/# keystone-manage db_sync --check; echo $?
  0

  Not getting the right return codes or advice from the check can spell
  disaster for automation that uses it, or for humans following the
  documented migration process.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1698900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1697593] [NEW] ovsfw issue for allowed_address_pairs

2017-06-12 Thread Jesse
Public bug reported:

A port's allowed_address_pairs allow an IP and MAC pair that differs from
the port's own to be configured on the port.

The current ovsfw implementation has these issues for allowed_address_pairs
whose MAC differs from the VM's MAC:
1. Packets with the allowed_address_pairs' MAC and IP (a MAC different from
the VM's) cannot leave the VM, because the table=72 OpenFlow rules in br-int
only check dl_src=VM-MAC.
2. Pinging the VM's allowed_address_pairs' MAC and IP (a MAC different from
the VM's) from outside does not work, because the table=0 OpenFlow rules
only check dl_dst=VM-MAC.

We need to allow address pairs whose MAC differs from the VM's MAC.

Suggested changes:
1. Do not check dl_src in table=72, because dl_src has already been checked for egress.
2. Add all allowed MACs in table=0 and table=73 for ingress.
3. Check dl_dst and nw_dst in table=81, like table=71 does.
4. Do not check dl_dst in table=82, because this check has already been done in table=0 and table=73.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1697593

Title:
  ovsfw issue for allowed_address_pairs

Status in neutron:
  New

Bug description:
  A port's allowed_address_pairs allow an IP and MAC pair that differs from
  the port's own to be configured on the port.

  The current ovsfw implementation has these issues for allowed_address_pairs
  whose MAC differs from the VM's MAC:
  1. Packets with the allowed_address_pairs' MAC and IP (a MAC different from
  the VM's) cannot leave the VM, because the table=72 OpenFlow rules in br-int
  only check dl_src=VM-MAC.
  2. Pinging the VM's allowed_address_pairs' MAC and IP (a MAC different from
  the VM's) from outside does not work, because the table=0 OpenFlow rules
  only check dl_dst=VM-MAC.

  We need to allow address pairs whose MAC differs from the VM's MAC.

  Suggested changes:
  1. Do not check dl_src in table=72, because dl_src has already been checked for egress.
  2. Add all allowed MACs in table=0 and table=73 for ingress.
  3. Check dl_dst and nw_dst in table=81, like table=71 does.
  4. Do not check dl_dst in table=82, because this check has already been done in table=0 and table=73.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1697593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1690756] [NEW] cache 'backend' argument description is ambiguous

2017-05-15 Thread Jesse Pretorius
Public bug reported:

The oslo.cache backend argument description currently states:

"Dogpile.cache backend module. It is recommended that Memcache or Redis
(dogpile.cache.redis) be used in production deployments. For eventlet-
based or highly threaded servers, Memcache with pooling
(oslo_cache.memcache_pool) is recommended. For low thread servers,
dogpile.cache.memcached is recommended. Test environments with a single
instance of the server can use the dogpile.cache.memory backend."

So the dogpile.cache.memcached/dogpile.cache.redis backends should be
used for production deployments, but the dogpile cache is recommended
for low thread servers and the oslo_cache.memcache_pool should be used
for high thread servers. I don't understand what the actual
recommendation is here.

For a production deployment of a service using uwsgi and a web server, what is 
the recommendation?
For a production deployment of a service using uwsgi and no web server, what is 
the recommendation?
For a production deployment of a service using eventlet, what is the 
recommendation?

Using keystone as an example, the example config file has the same
content which does not really help to clarify anything:

https://github.com/openstack/keystone/blob/b7bd6e301964d393ac6835111a08bbf15ba73bc0/etc/keystone.conf.sample#L514-L520

** Affects: keystone
 Importance: Undecided
 Status: New

** Affects: oslo.cache
 Importance: Undecided
 Status: New

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1690756

Title:
  cache 'backend' argument description is ambiguous

Status in OpenStack Identity (keystone):
  New
Status in oslo.cache:
  New

Bug description:
  The oslo.cache backend argument description currently states:

  "Dogpile.cache backend module. It is recommended that Memcache or
  Redis (dogpile.cache.redis) be used in production deployments. For
  eventlet-based or highly threaded servers, Memcache with pooling
  (oslo_cache.memcache_pool) is recommended. For low thread servers,
  dogpile.cache.memcached is recommended. Test environments with a
  single instance of the server can use the dogpile.cache.memory
  backend."

  So the dogpile.cache.memcached/dogpile.cache.redis backends should be
  used for production deployments, but the dogpile cache is recommended
  for low thread servers and the oslo_cache.memcache_pool should be used
  for high thread servers. I don't understand what the actual
  recommendation is here.

  For a production deployment of a service using uwsgi and a web server, what 
is the recommendation?
  For a production deployment of a service using uwsgi and no web server, what 
is the recommendation?
  For a production deployment of a service using eventlet, what is the 
recommendation?

  Using keystone as an example, the example config file has the same
  content which does not really help to clarify anything:

  
https://github.com/openstack/keystone/blob/b7bd6e301964d393ac6835111a08bbf15ba73bc0/etc/keystone.conf.sample#L514-L520

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1690756/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1682805] [NEW] transient switching loop caused by neutron-openvswitch-agent

2017-04-14 Thread Jesse
Public bug reported:

If we have the topology below on the network node:

https://etherpad.openstack.org/p/neutron_transient_switching_loop

The ports on the switch connected to eth0 and eth1 are set to trunk all VLANs.
When neutron-openvswitch-agent restarts, it first sets up the br-int bridge via
self.setup_integration_br(), then sets up br-eth0 and br-ex via
self.setup_physical_bridges(self.bridge_mappings).

Before this bug (https://bugs.launchpad.net/neutron/+bug/1383674) was fixed, all
flows in br-int were cleared when neutron-openvswitch-agent restarted, which
caused the transient switching loop described below.
After that bug was fixed, the flows in br-int remain to keep the network
connected across a neutron-openvswitch-agent restart, but if the network node
reboots the transient switching loop can still happen, as described below.

In self.setup_integration_br(), a NORMAL flow in table 0 is added to br-int.
In self.setup_physical_bridges(self.bridge_mappings), drop flows for packets
coming from int-br-eth0 and int-br-ex are added to br-int.
These drop flows cut the switching loop from the switch to br-int.
But before the drop flows are added to br-int, if a broadcast packet comes in
from the switch, the packet will loop between the switch and br-int.

We should add the NORMAL flow in table 0 of br-int only after the drop
flows have been added.
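
A hypothetical sketch of the proposed ordering (the install_normal_flow
argument and install_br_int_normal_flow helper are illustrative, not actual
agent code): install the drop flows on the physical-bridge patch ports
first, then the table 0 NORMAL flow in br-int.

    def initialize_bridges(agent):
        # set up br-int, but hold off on the table 0 NORMAL flow
        agent.setup_integration_br(install_normal_flow=False)
        # adds the drop flows for int-br-eth0 / int-br-ex, cutting the loop
        agent.setup_physical_bridges(agent.bridge_mappings)
        # now it is safe to let br-int forward normally
        agent.install_br_int_normal_flow()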

** Affects: neutron
 Importance: Undecided
 Assignee: Jesse (jesse-5)
 Status: New

** Description changed:

  If we have the topology bellow in network node.
  
  [ASCII topology diagram re-indented in the description: a switch connects
   to eth0 and eth1 on the network node; eth0 is attached to br-eth0, eth1
   to br-ex, and both bridges connect to br-int.]
  The ports on switch connected to eth0 and eth1 set to trunk all VLANs.
  When neutron

[Yahoo-eng-team] [Bug 1663465] [NEW] [performance improvement] update neutron-openvswitch-agent's AsyncProcess

2017-02-09 Thread Jesse
Public bug reported:

neutron-openvswitch-agent's rpc_loop runs every 2 seconds by default.
In every loop the following call chain is executed: _agent_has_updates
-> polling_manager.is_polling_required -> self._monitor.has_updates ->
self.is_active() -> self.pid -> utils.get_root_helper_child_pid ->
find_child_pids -> execute the 'ps --ppid  -o pid=' command.
Executing 'ps --ppid  -o pid=' is heavy, especially on a highly loaded
server; I have 800 HA vRouters on my network node. Whenever I check the
server load with top, I always find a ps process with high CPU usage, and
every rpc_loop takes 8+ seconds according to the neutron-openvswitch-agent
log.
So we need to find a way to avoid this invocation.

In class AsyncProcess we can record the pid when the child process is
spawned, to avoid looking it up with ps every time.
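
A minimal sketch of that idea (illustrative only, not the actual
AsyncProcess patch, and it glosses over the root-helper indirection that
get_root_helper_child_pid handles): cache the child's pid once at spawn
time so the hot path never shells out to ps.

    class AsyncProcess(object):
        def _spawn(self):
            self._process = self._create_process()   # hypothetical spawn helper
            self._cached_pid = self._process.pid      # remember it once

        @property
        def pid(self):
            # no 'ps --ppid ... -o pid=' round trip on every rpc_loop iteration
            return self._cached_pid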

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1663465

Title:
  [performance improvement] update neutron-openvswitch-agent's
  AsyncProcess

Status in neutron:
  New

Bug description:
  neutron-openvswitch-agent's rpc_loop runs every 2 seconds by default.
  In every loop the following call chain is executed: _agent_has_updates
  -> polling_manager.is_polling_required -> self._monitor.has_updates ->
  self.is_active() -> self.pid -> utils.get_root_helper_child_pid ->
  find_child_pids -> execute the 'ps --ppid  -o pid=' command.
  Executing 'ps --ppid  -o pid=' is heavy, especially on a highly loaded
  server; I have 800 HA vRouters on my network node. Whenever I check the
  server load with top, I always find a ps process with high CPU usage, and
  every rpc_loop takes 8+ seconds according to the neutron-openvswitch-agent
  log.
  So we need to find a way to avoid this invocation.

  In class AsyncProcess we can record the pid when the child process is
  spawned, to avoid looking it up with ps every time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1663465/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2017-01-15 Thread Jesse
** No longer affects: magnum

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1082248

Title:
  Use uuidutils instead of uuid.uuid4()

Status in Cinder:
  Won't Fix
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in Karbor:
  Fix Released
Status in kolla:
  Fix Released
Status in kuryr:
  Fix Released
Status in kuryr-libnetwork:
  Fix Released
Status in Mistral:
  Fix Released
Status in networking-calico:
  In Progress
Status in networking-midonet:
  Fix Released
Status in networking-ovn:
  Fix Released
Status in networking-sfc:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in osprofiler:
  Fix Released
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  Won't Fix
Status in tacker:
  In Progress
Status in watcher:
  Fix Released

Bug description:
  Openstack common has a wrapper for generating uuids.

  We should only use that function when generating uuids for
  consistency.
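
  For illustration (the wrapper now lives in oslo_utils; this snippet is
  just an example of the two calls, not code from any particular project):

      import uuid
      from oslo_utils import uuidutils

      print(uuid.uuid4())               # uuid.UUID object; needs str() for most APIs
      print(uuidutils.generate_uuid())  # plain string, consistent across projects
      print(uuidutils.is_uuid_like('not-a-uuid'))  # False -- handy for validation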

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1082248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1651420] [NEW] Can not clear source or dest port (range) for existing firewall rule

2016-12-20 Thread Jesse
Public bug reported:

We need to give users a way to update a firewall rule to clear its source
or destination port (range).

** Affects: neutron
 Importance: Undecided
 Assignee: Jesse (jesse-5)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Jesse (jesse-5)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1651420

Title:
  Can not clear source or dest port  (range)  for existing firewall rule

Status in neutron:
  New

Bug description:
  We need to give users a way to update a firewall rule to clear its source
  or destination port (range).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1651420/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1650486] [NEW] cloud-init issue when vm's vnic has no security group

2016-12-16 Thread Jesse
Public bug reported:

A VM will fail to get metadata, with a cloud-init timeout error, if the VM's
vNIC does not have a security group.
We can allow the VM to access 169.254.169.254 whether or not it has a security group.

** Affects: neutron
 Importance: Undecided
 Assignee: Jesse (jesse-5)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Jesse (jesse-5)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1650486

Title:
  cloud-init issue when vm's vnic has no security group

Status in neutron:
  New

Bug description:
  A VM will fail to get metadata, with a cloud-init timeout error, if the VM's
  vNIC does not have a security group.
  We can allow the VM to access 169.254.169.254 whether or not it has a security group.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1650486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1650466] [NEW] Remove iptables nat and mangle rules for security group

2016-12-15 Thread Jesse
Public bug reported:

It seems there is no need to add iptables nat and mangle rules for security
groups; these rules slow down network performance, especially when using the
6WIND Virtual Accelerator.

** Affects: neutron
 Importance: Undecided
 Assignee: Jesse (jesse-5)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1650466

Title:
  Remove iptables nat and mangle rules for security group

Status in neutron:
  New

Bug description:
  It seems there is no need to add iptables nat and mangle rules for security
  groups; these rules slow down network performance, especially when using the
  6WIND Virtual Accelerator.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1650466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1643684] [NEW] _check_requested_image uses incorrect bdm info to validate target volume when booting from boot from volume instance snapshot image

2016-11-21 Thread Jesse Keating
Public bug reported:

This is a complicated one.

Start with a boot-from-volume instance. The instance uses a 1G volume and a
tiny image. Make a snapshot of it, which results in a volume snapshot plus
an image in glance. Image details:

# openstack image show a9a2a4af-0383-47a0-85e6-c3211f11459e
+--+---+
| Field| Value |
+--+---+
| checksum | d41d8cd98f00b204e9800998ecf8427e  |
| container_format | bare  |
| created_at   | 2016-11-21T18:13:23Z  |
| disk_format  | qcow2 |
| file | /v2/images/a9a2a4af-0383-47a0-85e6-c3211f11459e/file  |
| id   | a9a2a4af-0383-47a0-85e6-c3211f11459e  |
| min_disk | 10|
| min_ram  | 0 |
| name | testsnap  |
| owner| 6fe9193745c64a44ab30d6bd1c5cb8bb  |
| properties   | base_image_ref='', bdm_v2='True', |
|  | block_device_mapping='[{"guest_format": null, |
|  | "boot_index": 0, "delete_on_termination": false,  |
|  | "no_device": null, "snapshot_id": "5186f403-bf91-49d4 |
|  | -b3db-c273e087fcfa", "device_name": "/dev/vda",   |
|  | "disk_bus": "virtio", "image_id": null, "source_type":|
|  | "snapshot", "tag": null, "device_type": "disk",   |
|  | "volume_id": null, "destination_type": "volume",  |
|  | "volume_size": 1}]', owner_specified.shade.md5='133eae9fb |
|  | 1c98f45894a4e60d8736619', |
|  | owner_specified.shade.object='images/cirros', owner_speci |
|  | fied.shade.sha256='f11286e2bd317ee1a1d0469a6b182b33bda4af |
|  | 6f35ba224ca49d75752c81e20a', root_device_name='/dev/vda'  |
| protected| False |
| schema   | /v2/schemas/image |
| size | 0 |
| status   | active|
| tags |   |
| updated_at   | 2016-11-21T18:13:24Z  |
| virtual_size | None  |
| visibility   | private   |
+--+---+

First, try booting just using the image:

# openstack server create --image a9a2a4af-0383-47a0-85e6-c3211f11459e --flavor 
2 --nic net-id=832c3099-a589-4f05-8f70-c5af5b2f852b lolwhut
Volume is smaller than the minimum size specified in image metadata. Volume 
size is 1073741824 bytes, minimum size is 10737418240 bytes. (HTTP 400) 
(Request-ID: req-97cc0a55-7d5b-4e14-8f4b-e8a501f96f11)

Nova is saying that the minimum size is 10G, but the requested bdm size
is 1. I'm assuming that's coming from the instance data's
block_device_mapping key, which has volume_size of 1.
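
As a rough sketch of the arithmetic behind that error (illustrative only,
not nova's actual _check_requested_image code): the size being validated
appears to be the 1 GiB volume_size from the snapshot's bdm rather than
anything the user asked for, and 1 GiB is below the image's 10 GiB min_disk.

    GiB = 1024 ** 3

    image_min_disk_gb = 10   # from the image's min_disk property
    bdm_volume_size_gb = 1   # from block_device_mapping in the image metadata

    if bdm_volume_size_gb * GiB < image_min_disk_gb * GiB:
        print("Volume is smaller than the minimum size specified in image "
              "metadata. Volume size is %d bytes, minimum size is %d bytes."
              % (bdm_volume_size_gb * GiB, image_min_disk_gb * GiB))
    # -> Volume size is 1073741824 bytes, minimum size is 10737418240 bytes.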

Now, try doing this where you're also requesting to boot from volume, of
size 15 (this was done via horizon):

2016-11-21 19:50:28.398 12516 DEBUG nova.api.openstack.wsgi [req-
4481446f-e026-4e83-b07a-1acdfa08194f 4921001dd4944f1396f7e6d64717f044
6fe9193745c64a44ab30d6bd1c5cb8bb - default default] Action: 'create',
calling method: >, body: {"server": {"name": "jlkwhat", "imageRef": "",
"availability_zone": "ci", "key_name": "turtle-key", "flavorRef": "2",
"OS-DCF:diskConfig": "AUTO", "max_count": 1, "block_device_mapping_v2":
[{"boot_index": "0", "uuid": "a9a2a4af-0383-47a0-85e6-c3211f11459e",
"volume_size": 15, "device_name": "vda", "source_type": "image",
"destination_type": "volume", "delete_on_termination": false}],
"min_count": 1, "networks": [{"uuid":
"832c3099-a589-4f05-8f70-c5af5b2f852b"}], "security_groups": [{"name":
"f8cb8d50-698a-48b0-9a83-1201e89834e0"}]}} _process_stack
/opt/bbc/openstack-2016.2-newton/nova/local/lib/python2.7/site-
packages/nova/api/openstack/wsgi.py:633


See that it has a source_type of image, and a destination_type of volume, and 
block_device_mapping_v2 shows volume_size of 15 (but references that image UUID 
from above).

Nova STILL spits out the same warning:

2016-11-21 19:50:29.004 12516 DEBUG nova.api.openstack.wsgi [req-
4481446f-e0

[Yahoo-eng-team] [Bug 1642212] [NEW] RFE: keystone-manage db_sync --check

2016-11-16 Thread Jesse Pretorius
Public bug reported:

In the automation of deployments and upgrades it would be useful to be
able to check whether there are any database actions outstanding so that
the action can be determined and executed.

Effectively I'm thinking something along the lines of this experience:

Operator (or automation tool) executes: keystone-manage db_sync --check

The tool checks the db state and returns whether there are any
migrations to execute (ie --expand), whether there is a --migrate
outstanding, whether there is a --contract outstanding, or any
combination of the above. If there is nothing left to do, that should be
reported too.

Ideally the output should take two forms:

1 - stdout messages... obviously this is useful when executing this by hand
2 - return codes... this is very useful when executing via automation tooling

The return codes need to be actionable - ie I must know which actions
are required based on the return code with no ambiguity.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1642212

Title:
  RFE: keystone-manage db_sync --check

Status in OpenStack Identity (keystone):
  New

Bug description:
  In the automation of deployments and upgrades it would be useful to be
  able to check whether there are any database actions outstanding so
  that the action can be determined and executed.

  Effectively I'm thinking something along the lines of this experience:

  Operator (or automation tool) executes: keystone-manage db_sync
  --check

  The tool checks the db state and returns whether there are any
  migrations to execute (ie --expand), whether there is a --migrate
  outstanding, whether there is a --contract outstanding, or any
  combination of the above. If there is nothing left to do, that should
  be reported too.

  Ideally the output should take two forms:

  1 - stdout messages... obviously this is useful when executing this by hand
  2 - return codes... this is very useful when executing via automation tooling

  The return codes need to be actionable - ie I must know which actions
  are required based on the return code with no ambiguity.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1642212/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361235] Re: visit horizon failure because of import module failure

2016-11-13 Thread Jesse Pretorius
** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

** Changed in: openstack-ansible
   Status: New => In Progress

** Changed in: openstack-ansible
 Assignee: (unassigned) => Donovan Francesco (donovan-francesco)

** Changed in: openstack-ansible
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1361235

Title:
  visit horizon failure because of import module failure

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in openstack-ansible:
  In Progress
Status in osprofiler:
  In Progress
Status in python-mistralclient:
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  1. Use TripleO to deploy both the undercloud and the overcloud, enabling horizon
  when building images.
  2. Visiting the horizon portal always fails, with the errors below in
  horizon_error.log:

  [Wed Aug 20 01:45:58.441221 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] mod_wsgi (pid=5035): Exception occurred processing WSGI 
script 
'/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/django.wsgi'.
  [Wed Aug 20 01:45:58.441273 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] Traceback (most recent call last):
  [Wed Aug 20 01:45:58.441294 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/core/handlers/wsgi.py",
 line 187, in __call__
  [Wed Aug 20 01:45:58.449979 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] self.load_middleware()
  [Wed Aug 20 01:45:58.45 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 44, in load_middleware
  [Wed Aug 20 01:45:58.450556 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] for middleware_path in settings.MIDDLEWARE_CLASSES:
  [Wed Aug 20 01:45:58.450576 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/conf/__init__.py",
 line 54, in __getattr__
  [Wed Aug 20 01:45:58.454248 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] self._setup(name)
  [Wed Aug 20 01:45:58.454269 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/conf/__init__.py",
 line 49, in _setup
  [Wed Aug 20 01:45:58.454305 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] self._wrapped = Settings(settings_module)
  [Wed Aug 20 01:45:58.454319 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/conf/__init
  __.py", line 128, in __init__
  [Wed Aug 20 01:45:58.454338 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] mod = importlib.import_module(self.SETTINGS_MODULE)
  [Wed Aug 20 01:45:58.454350 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/utils/importlib.py",
 line 40, in import_module
  [Wed Aug 20 01:45:58.462806 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] __import__(name)
  [Wed Aug 20 01:45:58.462826 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/settings.py",
 line 28, in 
  [Wed Aug 20 01:45:58.467136 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] from openstack_dashboard import exceptions
  [Wed Aug 20 01:45:58.467156 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/exceptions.py",
 line 22, in 
  [Wed Aug 20 01:45:58.467667 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] from keystoneclient import exceptions as keystoneclient
  [Wed Aug 20 01:45:58.467685 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneclient/__init__.py",
 line 28, in 
  [Wed Aug 20 01:45:58.472968 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] from keystoneclient import client
  [Wed Aug 20 01:45:58.472989 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneclient/client.py",
 line 13, in 
  [Wed Aug 20 01:45:58.473833 2014] [:error] [pid 5035:tid 3038755648] [remote

[Yahoo-eng-team] [Bug 1640319] Re: AttributeError: 'module' object has no attribute 'convert_to_boolean'

2016-11-09 Thread Jesse Pretorius
** Changed in: openstack-ansible
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1640319

Title:
  AttributeError: 'module' object has no attribute 'convert_to_boolean'

Status in networking-midonet:
  In Progress
Status in neutron:
  In Progress
Status in openstack-ansible:
  Fix Released
Status in vmware-nsx:
  New

Bug description:
  With the latest neutron master code, the neutron service q-svc could not start
due to the following error:
  2016-11-08 21:54:39.435 DEBUG oslo_concurrency.lockutils [-] Lock "manager" 
released by "neutron.manager._create_instance" :: held 1.467s from (pid=18534) 
inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
  2016-11-08 21:54:39.435 ERROR neutron.service [-] Unrecoverable error: please 
check log for details.
  2016-11-08 21:54:39.435 TRACE neutron.service Traceback (most recent call 
last):
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/opt/stack/neutron/neutron/service.py", line 87, in serve_wsgi
  2016-11-08 21:54:39.435 TRACE neutron.service service.start()
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/opt/stack/neutron/neutron/service.py", line 63, in start
  2016-11-08 21:54:39.435 TRACE neutron.service self.wsgi_app = 
_run_wsgi(self.app_name)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/opt/stack/neutron/neutron/service.py", line 289, in _run_wsgi
  2016-11-08 21:54:39.435 TRACE neutron.service app = 
config.load_paste_app(app_name)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/opt/stack/neutron/neutron/common/config.py", line 125, in load_paste_app
  2016-11-08 21:54:39.435 TRACE neutron.service app = 
loader.load_app(app_name)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/wsgi.py", line 353, in 
load_app
  2016-11-08 21:54:39.435 TRACE neutron.service return 
deploy.loadapp("config:%s" % self.config_path, name=name)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in 
loadapp
  2016-11-08 21:54:39.435 TRACE neutron.service return loadobj(APP, uri, 
name=name, **kw)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 272, in 
loadobj
  2016-11-08 21:54:39.435 TRACE neutron.service return context.create()
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in 
create
  2016-11-08 21:54:39.435 TRACE neutron.service return 
self.object_type.invoke(self)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in 
invoke
  2016-11-08 21:54:39.435 TRACE neutron.service **context.local_conf)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in 
fix_call
  2016-11-08 21:54:39.435 TRACE neutron.service val = callable(*args, **kw)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/urlmap.py", line 31, in 
urlmap_factory
  2016-11-08 21:54:39.435 TRACE neutron.service app = 
loader.get_app(app_name, global_conf=global_conf)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in 
get_app
  2016-11-08 21:54:39.435 TRACE neutron.service name=name, 
global_conf=global_conf).create()
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in 
create
  2016-11-08 21:54:39.435 TRACE neutron.service return 
self.object_type.invoke(self)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in 
invoke
  2016-11-08 21:54:39.435 TRACE neutron.service **context.local_conf)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in 
fix_call
  2016-11-08 21:54:39.435 TRACE neutron.service val = callable(*args, **kw)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/opt/stack/neutron/neutron/auth.py", line 71, in pipeline_factory
  2016-11-08 21:54:39.435 TRACE neutron.service app = 
loader.get_app(pipeline[-1])
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in 
get_app
  2016-11-08 21:54:39.435 TRACE neutron.service name=name, 
global_conf=global_conf).create()
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in 
create
  2

[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-11-08 Thread Jesse
** Also affects: magnum
   Importance: Undecided
   Status: New

** Changed in: magnum
 Assignee: (unassigned) => Jesse (boycht)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1082248

Title:
  Use uuidutils instead of uuid.uuid4()

Status in Cinder:
  New
Status in Ironic:
  In Progress
Status in ironic-python-agent:
  In Progress
Status in OpenStack Identity (keystone):
  New
Status in Magnum:
  New
Status in Mistral:
  New
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  New

Bug description:
  OpenStack common has a wrapper for generating UUIDs.

  We should only use that function when generating UUIDs, for
  consistency.
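
  For illustration, the change is usually as small as this (a sketch; in
  current releases the wrapper lives in oslo_utils):

    # Before: stdlib directly
    import uuid
    instance_id = str(uuid.uuid4())

    # After: the common wrapper, plus its matching validation helper
    from oslo_utils import uuidutils
    instance_id = uuidutils.generate_uuid()
    assert uuidutils.is_uuid_like(instance_id)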

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1082248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1637483] [NEW] Add healthcheck middleware to pipelines

2016-10-28 Thread Jesse Keating
Public bug reported:

As an operator, I want to be able to add oslo healthcheck middleware[1]
to Nova's pipeline so that I can GET the /healthcheck URI to determine
"up" or not for a given nova-api host. The healthcheck middleware allows
for manually setting the health status to offline, without actually
stopping the API service. When I can do this, I can easily "take an API
node offline" from the aspect of the load balancer, which uses the
healthcheck path for status checks, before stopping the API process
itself. This is a quick and generic way to prevent new connections to a
given API node while restarting it as part of a rolling restart.

This middleware has already been added to glance[2], and I've got an
open review to add it to keystone as well[3]. My eventual goal is to
have it in use across all the services that directly listen for client
connections.

1: http://docs.openstack.org/developer/oslo.middleware/healthcheck_plugins.html
2: https://review.openstack.org/#/c/148595/
3: https://review.openstack.org/#/c/387731/
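
To illustrate the intended use, this is roughly what a load balancer health
probe (or an operator) would do against the endpoint. It is only a sketch: the
host and port are assumptions about the deployment, and it relies on the
middleware answering 200 when healthy and 503 when disabled by file:

    import requests

    def api_node_is_up(host="127.0.0.1", port=8774):
        try:
            resp = requests.get("http://%s:%s/healthcheck" % (host, port),
                                timeout=2)
            return resp.status_code == 200
        except requests.RequestException:
            return False

    # A load balancer would poll this and drain the node once it returns False.
    print(api_node_is_up())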

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1637483

Title:
  Add healthcheck middleware to pipelines

Status in OpenStack Compute (nova):
  New

Bug description:
  As an operator, I want to be able to add oslo healthcheck
  middleware[1] to Nova's pipeline so that I can GET the /healthcheck
  URI to determine "up" or not for a given nova-api host. The
  healthcheck middleware allows for manually setting the health status
  to offline, without actually stopping the API service. When I can do
  this, I can easily "take an API node offline" from the aspect of the
  load balancer, which uses the healthcheck path for status checks,
  before stopping the API process itself. This is a quick and generic
  way to prevent new connections to a given API node while restarting it
  as part of a rolling restart.

  This middleware has already been added to glance[2], and I've got an
  open review to add it to keystone as well[3]. My eventual goal is to
  have it in use across all the services that directly listen for client
  connections.

  1: 
http://docs.openstack.org/developer/oslo.middleware/healthcheck_plugins.html
  2: https://review.openstack.org/#/c/148595/
  3: https://review.openstack.org/#/c/387731/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1637483/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1637146] [NEW] Whitelisting (opt-in) users/projects/domains for PCI compliance

2016-10-27 Thread Jesse Keating
Public bug reported:

As a cloud admin, I want to explicitly define which users should have
PCI compliance checks turned on. Currently, I can only blacklist certain
users, but I have use cases which require one special user (the super
duper admin) be held to a higher standard than the other users on a
cloud. I have other use cases where entire projects, or maybe even
domains, need to be held to a standard, but outside of those they should
not be held to the standard.

We provide individual private clouds to customers, and provide them a
lower level of admin access than super duper admin. Our own super duper
admin needs to adhere to PCI, but we do not feel it's appropriate to
enforce such requirements on the users our customers create for
themselves. That said, some customers may decide that some sets of the
users they create should require PCI compliance, but not all of them.
Because we do not control user creation, a blacklist is inappropriate as
it will constantly be behind.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1637146

Title:
  Whitelisting (opt-in) users/projects/domains for PCI compliance

Status in OpenStack Identity (keystone):
  New

Bug description:
  As a cloud admin, I want to explicitly define which users should have
  PCI compliance checks turned on. Currently, I can only blacklist
  certain users, but I have use cases which require one special user
  (the super duper admin) be held to a higher standard than the other
  users on a cloud. I have other use cases where entire projects, or
  maybe even domains, need to be held to a standard, but outside of
  those they should not be held to the standard.

  We provide individual private clouds to customers, and provide them a
  lower level of admin access than super duper admin. Our own super
  duper admin needs to adhere to PCI, but we do not feel it's
  appropriate to enforce such requirements on the users our customers
  create for themselves. That said, some customers may decide that some
  sets of the users they create should require PCI compliance, but not
  all of them. Because we do not control user creation, a blacklist is
  inappropriate as it will constantly be behind.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1637146/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1636495] [NEW] Failures during db_sync --contract during Mitaka to Newton (live) upgrade

2016-10-25 Thread Jesse Keating
Public bug reported:

I'm testing the live migration workflows moving from Mitaka to Newton.

In my scenario, I have two keystone server VMs with haproxy in front of
them. Both are running Mitaka and are fully functional. During the
upgrade I run Rally to repeatedly create and then delete users. Rally is
performing these 15 at a time as fast as possible.

During the upgrade, while both are still running Mitaka, I add the
Newton code to a new filesystem path, and I put the Newton version of
configuration in place. I perform db_sync --expand call which is
successful. I then perform a db_sync --migrate call, also successful
(successful means the db_sync command exited 0 and the user
creation/deletion does not experience a failure). Next I perform a
rolling restart of Keystone services, disabling each keystone from the
haproxy before restarting it. Again, success.
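
In outline, the sequence being driven is the following (a sketch only; it
assumes keystone-manage from the Newton code is first on PATH and omits the
service handling around the rolling restart):

    import subprocess

    def db_sync(phase):
        subprocess.check_call(["keystone-manage", "db_sync", phase])

    db_sync("--expand")    # additive schema changes, run while Mitaka serves
    db_sync("--migrate")   # data migrations, still with Mitaka serving
    # ... rolling restart of each keystone node onto the Newton code ...
    db_sync("--contract")  # drops the old schema; the failures below hit here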

However, to finalize the upgrade, there is a db_sync call to --contract.
Both servers are running new Newton code at this point, and rally
continues. During the --contract call some small number of the user
create/delete actions are failing with:

2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters [req-88df4389
-f5f8-4332-a0d7-33eefa20a8ac 243ffe0eeb67454a83b8b0b21525bf3a f2970151809044f2aa
78fc75e43e3dc6 - default default] DBAPIError exception wrapped from (pymysql.err
.InternalError) (1054, u"Unknown column 'password.created_at' in 'field list'")
[SQL: u'SELECT password.id AS password_id, password.local_user_id AS password_lo
cal_user_id, password.password AS password_password, password.created_at AS pass
word_created_at, password.expires_at AS password_expires_at, password.self_servi
ce AS password_self_service, local_user_1.id AS local_user_1_id \nFROM (SELECT u
ser.id AS user_id \nFROM user \nWHERE user.id = %(param_1)s) AS anon_1 INNER JOI
N local_user AS local_user_1 ON anon_1.user_id = local_user_1.user_id INNER JOIN
 password ON local_user_1.id = password.local_user_id ORDER BY local_user_1.id,
password.created_at'] [parameters: {u'param_1': u'90602d52e1904e43bce1c2b82e46f2
6d'}]

2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters Traceback (mo
st recent call last):
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters   File "/opt/
openstack/current/keystone/local/lib/python2.7/site-packages/sqlalchemy/engine/b
ase.py", line 1139, in _execute_context
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters context)
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters   File "/opt/
openstack/current/keystone/local/lib/python2.7/site-packages/sqlalchemy/engine/d
efault.py", line 450, in do_execute
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters cursor.ex
ecute(statement, parameters)
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters   File "/opt/
openstack/current/keystone/local/lib/python2.7/site-packages/pymysql/cursors.py",
 line 167, in execute
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters result = 
self._query(query)
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/opt/openstack/current/keystone/local/lib/python2.7/site-packages/pymysql/cursors.py",
 line 323, in _query
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters 
conn.query(q)
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/opt/openstack/current/keystone/local/lib/python2.7/site-packages/pymysql/connections.py",
 line 836, in query
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters 
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/opt/openstack/current/keystone/local/lib/python2.7/site-packages/pymysql/connections.py",
 line 1020, in _read_query_result
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters 
result.read()
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/opt/openstack/current/keystone/local/lib/python2.7/site-packages/pymysql/connections.py",
 line 1303, in read
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters 
first_packet = self.connection._read_packet()
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/opt/openstack/current/keystone/local/lib/python2.7/site-packages/pymysql/connections.py",
 line 982, in _read_packet
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters 
packet.check_error()
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/opt/openstack/current/keystone/local/lib/python2.7/site-packages/pymysql/connections.py",
 line 394, in check_error
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters 
err.raise_mysql_exception(self._data)
2016-10-25 10:53:22.797 13980 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/opt/openstack/current/keystone/local/lib/python2.7/site-packages/p

[Yahoo-eng-team] [Bug 1279611] Re: urlparse is incompatible for python 3

2016-10-06 Thread Jesse Pretorius
** No longer affects: openstack-ansible

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279611

Title:
   urlparse is incompatible for python 3

Status in Astara:
  Fix Committed
Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in gce-api:
  In Progress
Status in Glance:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in openstack-doc-tools:
  Fix Released
Status in python-barbicanclient:
  Fix Released
Status in python-cinderclient:
  Fix Committed
Status in python-neutronclient:
  Fix Released
Status in RACK:
  Fix Committed
Status in Sahara:
  Fix Released
Status in Solar:
  Invalid
Status in storyboard:
  Fix Committed
Status in surveil:
  In Progress
Status in OpenStack Object Storage (swift):
  Fix Released
Status in swift-bench:
  Fix Committed
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in tuskar:
  Fix Released
Status in vmware-nsx:
  Fix Committed
Status in zaqar:
  Fix Released
Status in Zuul:
  Fix Committed

Bug description:
  import urlparse

  should be changed to:

  import six.moves.urllib.parse as urlparse

  for Python 3 compatibility.
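
  A minimal example of the compatible form, which behaves the same on
  Python 2 and Python 3:

    # six.moves resolves to urlparse on Python 2 and urllib.parse on Python 3.
    import six.moves.urllib.parse as urlparse

    parts = urlparse.urlparse("http://controller:5000/v2.0")
    print(parts.scheme, parts.netloc, parts.path)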

To manage notifications about this bug go to:
https://bugs.launchpad.net/astara/+bug/1279611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612959] Re: neutron DB sync fails: ImportError: No module named tests

2016-09-05 Thread Jesse Pretorius
** Changed in: openstack-ansible
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612959

Title:
  neutron DB sync fails: ImportError: No module named tests

Status in neutron:
  Fix Released
Status in openstack-ansible:
  Fix Released

Bug description:
  neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugin.ini upgrade head
  Traceback (most recent call last):
File "/usr/bin/neutron-db-manage", line 10, in 
  sys.exit(main())
File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 
686, in main
  return_val |= bool(CONF.command.func(config, CONF.command.name))
File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 
205, in do_upgrade
  run_sanity_checks(config, revision)
File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 
670, in run_sanity_checks
  script_dir.run_env()
File "/usr/lib/python2.7/site-packages/alembic/script/base.py", line 397, 
in run_env
  util.load_python_file(self.dir, 'env.py')
File "/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 81, 
in load_python_file
  module = load_module_py(module_id, path)
File "/usr/lib/python2.7/site-packages/alembic/util/compat.py", line 79, in 
load_module_py
  mod = imp.load_source(module_id, path, fp)
File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py",
 line 23, in 
  from neutron.db.migration.models import head  # noqa
File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/models/head.py", line 
66, in 
  from neutron.tests import tools
  ImportError: No module named tests

  
  -

  The issue seems to be that we started using code from neutron.tests
  outside of the testing code. Specifically, commit
  7c0f189309789ebcbd5c20c5a86835576ffb5db3 now causes it to get used
  during DB sync. Given that some distribution packages don't package up
  the 'tests' code tree, I think we shouldn't be using this code.

  See also:

  grep -lir neutron.tests * | grep -v tests
  cmd/sanity/checks.py
  db/migration/models/head.py
  hacking/checks.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612959/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523031] Re: Neighbor table entry for router missing with Linux bridge + L3HA + L2 population

2016-08-31 Thread Jesse Pretorius
** No longer affects: openstack-ansible

** No longer affects: openstack-ansible/trunk

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523031

Title:
  Neighbor table entry for router missing with Linux bridge + L3HA + L2
  population

Status in neutron:
  New

Bug description:
  Using Linux bridge, L3HA, and L2 population on Liberty, the neighbor
  table (ip neigh show) on the compute node lacks an entry for the
  router IP address. For example, using a router with 172.16.1.1 and
  instance with 172.16.1.4:

  On the node with the L3 agent containing the router:

  # ip neigh show
  169.254.192.1 dev vxlan-476 lladdr fa:16:3e:9b:d5:6f PERMANENT
  10.4.30.11 dev eth1 lladdr bc:76:4e:04:3c:59 REACHABLE
  10.4.11.11 dev eth0 lladdr bc:76:4e:04:d0:75 REACHABLE
  172.16.1.4 dev vxlan-466 lladdr fa:16:3e:ad:44:df PERMANENT
  10.4.30.31 dev eth1 lladdr bc:76:4e:05:1f:5f STALE
  10.4.11.31 dev eth0 lladdr bc:76:4e:04:38:4c STALE
  10.4.30.1 dev eth1 lladdr bc:76:4e:04:41:62 STALE
  10.4.11.1 dev eth0 lladdr bc:76:4e:04:77:72 DELAY
  172.16.1.2 dev vxlan-466 lladdr fa:16:3e:a0:83:a5 PERMANENT

  # ip netns exec qrouter-1521b4b1-7de9-4ed0-be19-69ac02ccf520 ping 172.16.1.4
  PING 172.16.1.4 (172.16.1.4) 56(84) bytes of data.
  ...

  On the node with the instance:

  # ip neigh show
  172.16.1.2 dev vxlan-466 lladdr fa:16:3e:a0:83:a5 PERMANENT
  10.4.11.1 dev eth0 lladdr bc:76:4e:04:77:72 DELAY
  172.16.1.3 dev vxlan-466 lladdr fa:16:3e:41:3b:de PERMANENT
  10.4.30.1 dev eth1 lladdr bc:76:4e:04:41:62 STALE
  10.4.11.12 dev eth0 lladdr bc:76:4e:05:e2:f8 STALE
  10.4.30.12 dev eth1 lladdr bc:76:4e:05:76:d1 STALE
  10.4.11.41 dev eth0 lladdr bc:76:4e:05:e3:6a STALE
  10.4.11.11 dev eth0 lladdr bc:76:4e:04:d0:75 REACHABLE
  10.4.30.11 dev eth1 lladdr bc:76:4e:04:3c:59 STALE

  172.16.1.2 and 172.16.1.3 belong to DHCP agents. I can access the
  instance from within both DHCP agent namespaces.

  On the node with the instance, I manually add a neighbor entry for the
  router:

  # ip neigh replace 172.16.1.1 lladdr fa:16:3e:0a:d4:39 dev vxlan-466
  nud permanent

  On the node with the L3 agent containing the router:

  # ip netns exec qrouter-1521b4b1-7de9-4ed0-be19-69ac02ccf520 ping 172.16.1.4
  64 bytes from 172.16.1.4: icmp_seq=1 ttl=64 time=2.21 ms
  64 bytes from 172.16.1.4: icmp_seq=2 ttl=64 time=45.9 ms
  64 bytes from 172.16.1.4: icmp_seq=3 ttl=64 time=1.23 ms
  64 bytes from 172.16.1.4: icmp_seq=4 ttl=64 time=0.975 ms

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1523031/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605742] Re: Paramiko 2.0 is incompatible with Mitaka

2016-08-23 Thread Jesse Pretorius
** Changed in: openstack-ansible
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1605742

Title:
  Paramiko 2.0 is incompatible with Mitaka

Status in OpenStack Compute (nova):
  Invalid
Status in openstack-ansible:
  Fix Released

Bug description:
  Unexpected API Error. TypeError. Code: 500. os-keypairs v2.1 
  nova (stable/mitaka , 98b38df57bfed3802ce60ee52e4450871fccdbfa) 

  Tempest tests (for example
  TestMinimumBasicScenario:test_minimum_basic_scenario) are failing on
  the gate job for the openstack-ansible project with the following error
  (please find full logs at [1]):

  -
  2016-07-22 18:46:07.399604 | 
  2016-07-22 18:46:07.399618 | Captured pythonlogging:
  2016-07-22 18:46:07.399632 | ~~~
  2016-07-22 18:46:07.399733 | 2016-07-22 18:45:47,861 2312 DEBUG
[tempest.scenario.manager] paths: img: 
/opt/images/cirros-0.3.4-x86_64-disk.img, container_fomat: bare, disk_format: 
qcow2, properties: None, ami: /opt/images/cirros-0.3.4-x86_64-blank.img, ari: 
/opt/images/cirros-0.3.4-x86_64-initrd, aki: 
/opt/images/cirros-0.3.4-x86_64-vmlinuz
  2016-07-22 18:46:07.399799 | 2016-07-22 18:45:48,513 2312 INFO 
[tempest.lib.common.rest_client] Request 
(TestMinimumBasicScenario:test_minimum_basic_scenario): 201 POST 
http://172.29.236.100:9292/v1/images 0.651s
  2016-07-22 18:46:07.399889 | 2016-07-22 18:45:48,513 2312 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'x-image-meta-name': 
'tempest-scenario-img--306818818', 'x-image-meta-container_format': 'bare', 
'X-Auth-Token': '', 'x-image-meta-disk_format': 'qcow2', 
'x-image-meta-is_public': 'False'}
  2016-07-22 18:46:07.399907 | Body: None
  2016-07-22 18:46:07.400027 | Response - Headers: {'status': '201', 
'content-length': '481', 'content-location': 
'http://172.29.236.100:9292/v1/images', 'connection': 'close', 'location': 
'http://172.29.236.100:9292/v1/images/5c390277-ec8d-4d82-b8d8-b8978473ecbe', 
'date': 'Fri, 22 Jul 2016 18:45:48 GMT', 'content-type': 'application/json', 
'x-openstack-request-id': 'req-6b3c6218-b3e6-4884-bb3c-b88c70733d0c'}
  2016-07-22 18:46:07.400183 | Body: {"image": {"status": "queued", 
"deleted": false, "container_format": "bare", "min_ram": 0, "updated_at": 
"2016-07-22T18:45:48.00", "owner": "1fbbcc542db344f394b4f1565a7e48fd", 
"min_disk": 0, "is_public": false, "deleted_at": null, "id": 
"5c390277-ec8d-4d82-b8d8-b8978473ecbe", "size": 0, "virtual_size": null, 
"name": "tempest-scenario-img--306818818", "checksum": null, "created_at": 
"2016-07-22T18:45:48.00", "disk_format": "qcow2", "properties": {}, 
"protected": false}}
  2016-07-22 18:46:07.400241 | 2016-07-22 18:45:48,517 2312 INFO 
[tempest.common.glance_http] Request: PUT 
http://172.29.236.100:9292/v1/images/5c390277-ec8d-4d82-b8d8-b8978473ecbe
  2016-07-22 18:46:07.400359 | 2016-07-22 18:45:48,517 2312 INFO 
[tempest.common.glance_http] Request Headers: {'Transfer-Encoding': 'chunked', 
'User-Agent': 'tempest', 'Content-Type': 'application/octet-stream', 
'X-Auth-Token': 
'gABXkmnbJaM7C2EMxfEELQEWlU27v4pCt_9tF_XGlYrgEu-eXvDcEclzZc2OyFnVy79Dfz_pH2gGvKveSTihW-hzV6ucHyF1JrdqwOYr6Z7ZoUe_0BQ4gOdxKZoqzSaqQKfdfrZnojq9OE9Dy11frFI59qqkk0303j3fWlFIUeV6NtrzX-s'}
  2016-07-22 18:46:07.400403 | 2016-07-22 18:45:48,517 2312 DEBUG
[tempest.common.glance_http] Actual Path: 
/v1/images/5c390277-ec8d-4d82-b8d8-b8978473ecbe
  2016-07-22 18:46:07.400440 | 2016-07-22 18:45:50,721 2312 INFO 
[tempest.common.glance_http] Response Status: 200
  2016-07-22 18:46:07.400555 | 2016-07-22 18:45:50,722 2312 INFO 
[tempest.common.glance_http] Response Headers: [('date', 'Fri, 22 Jul 2016 
18:45:50 GMT'), ('content-length', '518'), ('etag', 
'ee1eca47dc88f4879d8a229cc70a07c6'), ('content-type', 'application/json'), 
('x-openstack-request-id', 'req-2e385c60-1755-4221-8325-caa98da1f760')]
  2016-07-22 18:46:07.400597 | 2016-07-22 18:45:50,723 2312 DEBUG
[tempest.scenario.manager] image:5c390277-ec8d-4d82-b8d8-b8978473ecbe
  2016-07-22 18:46:07.400669 | 2016-07-22 18:45:52,416 2312 INFO 
[tempest.lib.common.rest_client] Request 
(TestMinimumBasicScenario:test_minimum_basic_scenario): 500 POST 
http://172.29.236.100:8774/v2.1/1fbbcc542db344f394b4f1565a7e48fd/os-keypairs 
1.689s
  2016-07-22 18:46:07.400778 | 2016-07-22 18:45:52,416 2312 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 
'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
  2016-07-22 18:46:07.400813 | Body: {"keypair": {"name": 
"tempest-TestMinimumBasicScenario-1803650811"}}
  2016-07-22 18:46:07.400940 | Response - Headers: {'status': '500', 
'content-length': '193', 'content-location': 
'http://172.29.236.100:8774/v2.1/1f

[Yahoo-eng-team] [Bug 1613299] Re: Unknown column 'r.project_id' in FWaaS migrations

2016-08-15 Thread Jesse Pretorius
** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

** Changed in: openstack-ansible
Milestone: None => newton-3

** Changed in: openstack-ansible
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1613299

Title:
  Unknown column 'r.project_id' in FWaaS migrations

Status in neutron:
  New
Status in openstack-ansible:
  New

Bug description:
  Running the FWaaS router insertion migration fails with:

  http://logs.openstack.org/01/354101/4/gate/gate-openstack-ansible-
  os_nova-ansible-func-ubuntu-
  trusty/4b14021/console.html#_2016-08-15_13_44_02_455515

  Specific issue: "oslo_db.exception.DBError:
  (pymysql.err.InternalError) (1054, u"Unknown column 'r.project_id' in
  'where clause'") [SQL: u'insert into firewall_router_associations
  select f.id as fw_id, r.id as router_id from firewalls f, routers r
  where f.tenant_id=r.project_id']"

  Issue occurs when installing Neutron master using OpenStack-Ansible on
  Ubuntu 14.04 from source.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1613299/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497272] Re: L3 HA: Unstable rescheduling time for keepalived v1.2.7

2016-07-01 Thread Jesse Pretorius
** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497272

Title:
  L3 HA: Unstable rescheduling time for keepalived v1.2.7

Status in neutron:
  Triaged
Status in openstack-ansible:
  New
Status in openstack-manuals:
  New

Bug description:
  I have tested L3 HA on an environment with 3 controllers and 1 compute
node (Kilo) with this simple scenario:
  1) ping the VM by floating IP
  2) disable the master l3-agent (the one whose ha_state is active)
  3) wait for pings to continue and another agent to become active
  4) check the number of packets that were lost

  My results are the following:
  1) When max_l3_agents_per_router=2, 3 to 4 packets were lost.
  2) When max_l3_agents_per_router=3 or 0 (meaning the router will be scheduled
on every agent), 10 to 70 packets were lost.

  I should mention that in both cases there was only one HA router.

  It is expected that fewer packets will be lost when
  max_l3_agents_per_router=3 (or 0).
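
  For reference, the packet-loss measurement above can be reproduced with a
  loop equivalent to this sketch (the floating IP is a placeholder):

    import subprocess
    import time

    FLOATING_IP = "203.0.113.10"  # placeholder floating IP of the test VM

    lost = 0
    with open("/dev/null", "wb") as devnull:
        for _ in range(120):
            # one ping per second; count the ones that get no reply
            rc = subprocess.call(["ping", "-c", "1", "-W", "1", FLOATING_IP],
                                 stdout=devnull)
            if rc != 0:
                lost += 1
            time.sleep(1)
    print("lost %d packets" % lost)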

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497272/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433172] Re: L3 HA routers master state flapping between nodes after router updates or failovers when using 1.2.14 or 1.2.15 (-1.2.15-6)

2016-07-01 Thread Jesse Pretorius
** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433172

Title:
  L3 HA routers master state flapping between nodes after router updates
  or failovers when using 1.2.14 or 1.2.15 (-1.2.15-6)

Status in neutron:
  Triaged
Status in openstack-ansible:
  New

Bug description:
  keepalived 1.2.14 introduced a regression when running it in no-preempt mode. 
More details here in a thread I started on the keepalived-devel list:
  http://sourceforge.net/p/keepalived/mailman/message/33604497/

  A fix was backported to 1.2.15-6, and is present in 1.2.16.

  Current status (Updated on the 30th of April, 2015):
  Fedora 20, 21 and 22 have 1.2.16.
  CentOS and RHEL are on 1.2.13

  Ubuntu is using 1.2.10 or older.
  Debian is using 1.2.13.

  In summary, as long as you're not using 1.2.14 or 1.2.15 (Excluding
  1.2.15-6), you're OK, which should be the case if you're using the
  latest keepalived packaged for your distro.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1433172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1509312] Re: unable to use tenant network after kilo to liberty update due to port security extension

2016-04-22 Thread Jesse Pretorius
** Changed in: openstack-ansible/trunk
   Status: Confirmed => Won't Fix

** Changed in: openstack-ansible/trunk
   Status: Won't Fix => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1509312

Title:
  unable to use tenant network after kilo to liberty update due to port
  security extension

Status in neutron:
  Fix Released
Status in openstack-ansible:
  Confirmed
Status in openstack-ansible liberty series:
  Fix Released
Status in openstack-ansible trunk series:
  Fix Released

Bug description:
  After updating from Kilo to Liberty, all networks created in the Kilo
  release are unusable in Liberty.

  If I try to spawn a new instance with a port on a network created in
  Kilo, I get the following error in nova-compute.log:

  BadRequest: Port does not have port security binding.

  I guess this has to do with the new port_security extension in the ml2
  plugin.

  Using neutron DVR on Ubuntu 14.04.3!

  This is my first bug report, so sorry in advance for any mistakes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1509312/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523031] Re: Neighbor table entry for router missing with Linux bridge + L3HA + L2 population

2016-03-30 Thread Jesse Pretorius
** Changed in: openstack-ansible/trunk
Milestone: 13.0.0 => newton-1

** No longer affects: openstack-ansible/liberty

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523031

Title:
  Neighbor table entry for router missing with Linux bridge + L3HA + L2
  population

Status in neutron:
  New
Status in openstack-ansible:
  Confirmed
Status in openstack-ansible trunk series:
  Confirmed

Bug description:
  Using Linux bridge, L3HA, and L2 population on Liberty, the neighbor
  table (ip neigh show) on the compute node lacks an entry for the
  router IP address. For example, using a router with 172.16.1.1 and
  instance with 172.16.1.4:

  On the node with the L3 agent containing the router:

  # ip neigh show
  169.254.192.1 dev vxlan-476 lladdr fa:16:3e:9b:d5:6f PERMANENT
  10.4.30.11 dev eth1 lladdr bc:76:4e:04:3c:59 REACHABLE
  10.4.11.11 dev eth0 lladdr bc:76:4e:04:d0:75 REACHABLE
  172.16.1.4 dev vxlan-466 lladdr fa:16:3e:ad:44:df PERMANENT
  10.4.30.31 dev eth1 lladdr bc:76:4e:05:1f:5f STALE
  10.4.11.31 dev eth0 lladdr bc:76:4e:04:38:4c STALE
  10.4.30.1 dev eth1 lladdr bc:76:4e:04:41:62 STALE
  10.4.11.1 dev eth0 lladdr bc:76:4e:04:77:72 DELAY
  172.16.1.2 dev vxlan-466 lladdr fa:16:3e:a0:83:a5 PERMANENT

  # ip netns exec qrouter-1521b4b1-7de9-4ed0-be19-69ac02ccf520 ping 172.16.1.4
  PING 172.16.1.4 (172.16.1.4) 56(84) bytes of data.
  ...

  On the node with the instance:

  # ip neigh show
  172.16.1.2 dev vxlan-466 lladdr fa:16:3e:a0:83:a5 PERMANENT
  10.4.11.1 dev eth0 lladdr bc:76:4e:04:77:72 DELAY
  172.16.1.3 dev vxlan-466 lladdr fa:16:3e:41:3b:de PERMANENT
  10.4.30.1 dev eth1 lladdr bc:76:4e:04:41:62 STALE
  10.4.11.12 dev eth0 lladdr bc:76:4e:05:e2:f8 STALE
  10.4.30.12 dev eth1 lladdr bc:76:4e:05:76:d1 STALE
  10.4.11.41 dev eth0 lladdr bc:76:4e:05:e3:6a STALE
  10.4.11.11 dev eth0 lladdr bc:76:4e:04:d0:75 REACHABLE
  10.4.30.11 dev eth1 lladdr bc:76:4e:04:3c:59 STALE

  172.16.1.2 and 172.16.1.3 belong to DHCP agents. I can access the
  instance from within both DHCP agent namespaces.

  On the node with the instance, I manually add a neighbor entry for the
  router:

  # ip neigh replace 172.16.1.1 lladdr fa:16:3e:0a:d4:39 dev vxlan-466
  nud permanent

  On the node with the L3 agent containing the router:

  # ip netns exec qrouter-1521b4b1-7de9-4ed0-be19-69ac02ccf520 ping 172.16.1.4
  64 bytes from 172.16.1.4: icmp_seq=1 ttl=64 time=2.21 ms
  64 bytes from 172.16.1.4: icmp_seq=2 ttl=64 time=45.9 ms
  64 bytes from 172.16.1.4: icmp_seq=3 ttl=64 time=1.23 ms
  64 bytes from 172.16.1.4: icmp_seq=4 ttl=64 time=0.975 ms

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1523031/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443421] Re: After VM migration, tunnels not getting removed with L2Pop ON, when using multiple api_workers in controller

2016-03-30 Thread Jesse Pretorius
As this patch has been included in Mitaka, I'm marking it as resolved
for the OpenStack-Ansible 13.0.0 release.

** Changed in: openstack-ansible/trunk
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443421

Title:
  After VM migration, tunnels not getting removed with L2Pop ON, when
  using multiple api_workers in controller

Status in neutron:
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in openstack-ansible trunk series:
  Fix Released

Bug description:
  When using multiple api_workers, for the "nova live-migration" command:
  a) tunnel flows and tunnel ports are always removed from the old host
  b) other hosts sometimes do not get the notification about the port delete
from the old host, so on those hosts the tunnel ports and flood flows
(everything except the unicast flow for the port) for the old host still remain.
  The root cause and fix are explained in comments 12 and 13.

  According to the bug reporter, this bug can also be reproduced as follows.
  Setup: Neutron server HA (3 nodes).
  Hypervisor: ESX with OVSvApp.
  L2 pop is on the network node and off on OVSvApp.

  Condition:
  Enable L2 pop on the OVS agent, api_workers=10 in the controller.

  On the network node, the VXLAN tunnel is created with ESX2 and the tunnel
  with ESX1 is not removed after migrating the VM from ESX1 to ESX2.

  Attaching the server and agent logs.

  stack@OSC-NS1:/opt/stack/logs/screen$ sudo ovs-vsctl show
  662d03fb-c784-498e-927c-410aa6788455
  Bridge br-ex
  Port phy-br-ex
  Interface phy-br-ex
  type: patch
  options: {peer=int-br-ex}
  Port "eth2"
  Interface "eth2"
  Port br-ex
  Interface br-ex
  type: internal
  Bridge br-tun
  Port patch-int
  Interface patch-int
  type: patch
  options: {peer=patch-tun}
  Port "vxlan-6447007a"
  Interface "vxlan-6447007a"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.122"} 
This should have been deleted after MIGRATION.
  Port "vxlan-64470082"
  Interface "vxlan-64470082"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.130"}
  Port br-tun
  Interface br-tun
  type: internal
  Port "vxlan-6447002a"
  Interface "vxlan-6447002a"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.42"}
  Bridge "br-eth1"
  Port "br-eth1"
  Interface "br-eth1"
  type: internal
  Port "phy-br-eth1"
  Interface "phy-br-eth1"
  type: patch
  options: {peer="int-br-eth1"}
  Bridge br-int
  fail_mode: secure
  Port patch-tun
  Interface patch-tun
  type: patch
  options: {peer=patch-int}
  Port "int-br-eth1"
  Interface "int-br-eth1"
  type: patch
  options: {peer="phy-br-eth1"}
  Port br-int
  Interface br-int
  type: internal
  Port int-br-ex
  Interface int-br-ex
  type: patch
  options: {peer=phy-br-ex}
  Port "tap9515e5b3-ec"
  tag: 11
  Interface "tap9515e5b3-ec"
  type: internal
  ovs_version: "2.0.2"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1443421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1560993] Re: keystone_service returns ignore_other_regions error in liberty

2016-03-29 Thread Jesse Pretorius
This doesn't appear to relate to any code used in OpenStack-Ansible as a
project. It does appear to be Ansible of some sort, and perhaps relates
to the Ansible modules. If that is so then the bug should be raised
against the Ansible project I guess.

** Changed in: openstack-ansible
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1560993

Title:
  keystone_service returns ignore_other_regions error in liberty

Status in OpenStack Identity (keystone):
  Invalid
Status in openstack-ansible:
  Invalid

Bug description:
  I am trying to port swiftacular from Havana to Liberty.

  The following line to create the service endpoint using keystone_service
returns an error:
  - name: create keystone identity point
keystone_service: insecure=yes name=keystone type=identity 
description="Keystone Identity Service" publicurl="https://{{ keystone_server 
}}:5000/v2.0" internalurl="https://{{ keystone_server }}:5000/v2.0" 
adminurl="https://{{ keystone_server }}:35357/v2.0" region={{ keystone_region 
}} token={{ keystone_admin_token }} endpoint="https://127.0.0.1:35357/v2.0";

  returns the following error

  TASK [authentication : create keystone identity point] 
*
  fatal: [swift-keystone-01]: FAILED! => {"changed": false, "failed": true, 
"msg": "value of ignore_other_regions must be one of: 
yes,on,1,true,1,True,no,off,0,false,0,False, got: False"}
to retry, use: --limit @site.retry

  The same task worked without a hitch with havana.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1560993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561947] [NEW] Glance Store fails to authenticate to Swift with Keystone v3 API

2016-03-25 Thread Jesse Pretorius
Public bug reported:

In the current Mitaka RC for Glance, using HEAD of "stable/mitaka" as of
23.03.2016 (SHA ab0562550c8c568dcdc7da68afdcac5f58d20e69), glance_store fails
to authenticate to Swift via the Keystone v3 API.

Configuration and the error are available here:
https://gist.github.com/odyssey4me/79a1e8d7dea35ddf818c

It appears that this may be a regression (this worked just fine in
Liberty) introduced by
https://github.com/openstack/glance_store/commit/1b782cee8552ec02f7303ee6f9ba9d1f2c180d07

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: openstack-ansible
 Importance: Critical
 Status: Confirmed


** Tags: mitaka-rc-potential

** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

** Changed in: openstack-ansible
Milestone: None => 13.0.0

** Changed in: openstack-ansible
   Importance: Undecided => Critical

** Changed in: openstack-ansible
   Status: New => Confirmed

** Tags added: mitaka-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1561947

Title:
  Glance Store fails to authenticate to Swift with Keystone v3 API

Status in Glance:
  New
Status in openstack-ansible:
  Confirmed

Bug description:
  In the current Mitaka RC for Glance, using HEAD of "stable/mitaka" as of
  23.03.2016 (SHA ab0562550c8c568dcdc7da68afdcac5f58d20e69), glance_store fails
  to authenticate to Swift via the Keystone v3 API.

  Configuration and the error are available here:
  https://gist.github.com/odyssey4me/79a1e8d7dea35ddf818c

  It appears that this may be a regression (this worked just fine in
  Liberty) introduced by
  
https://github.com/openstack/glance_store/commit/1b782cee8552ec02f7303ee6f9ba9d1f2c180d07

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1561947/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1509312] Re: unable to use tenant network after kilo to liberty update due to port security extension

2016-03-19 Thread Jesse Pretorius
** Changed in: neutron
   Status: Expired => Confirmed

** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

** Changed in: openstack-ansible
   Status: New => Confirmed

** Changed in: openstack-ansible
   Importance: Undecided => High

** Changed in: openstack-ansible
 Assignee: (unassigned) => Nolan Brubaker (nolan-brubaker)

** Also affects: openstack-ansible/liberty
   Importance: Undecided
   Status: New

** Also affects: openstack-ansible/trunk
   Importance: High
 Assignee: Nolan Brubaker (nolan-brubaker)
   Status: Confirmed

** Changed in: openstack-ansible/liberty
   Importance: Undecided => High

** Changed in: openstack-ansible/liberty
   Status: New => Confirmed

** Changed in: openstack-ansible/liberty
 Assignee: (unassigned) => Nolan Brubaker (nolan-brubaker)

** Changed in: openstack-ansible/liberty
Milestone: None => 12.1.0

** Changed in: openstack-ansible/trunk
Milestone: None => 13.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1509312

Title:
  unable to use tenant network after kilo to liberty update due to port
  security extension

Status in neutron:
  Confirmed
Status in openstack-ansible:
  Confirmed
Status in openstack-ansible liberty series:
  Confirmed
Status in openstack-ansible trunk series:
  Confirmed

Bug description:
  After updating from Kilo to Liberty, all networks created in the Kilo
  release are unusable in Liberty.

  If I try to spawn a new instance with a port on a network created in
  Kilo, I get the following error in nova-compute.log:

  BadRequest: Port does not have port security binding.

  I guess this has to do with the new port_security extension in the ml2
  plugin.

  Using neutron DVR on Ubuntu 14.04.3!

  This is my first bug report, so sorry in advance for any mistakes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1509312/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1509312] Re: unable to use tenant network after kilo to liberty update due to port security extension

2016-03-19 Thread Jesse Pretorius
** Changed in: openstack-ansible/liberty
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1509312

Title:
  unable to use tenant network after kilo to liberty update due to port
  security extension

Status in neutron:
  In Progress
Status in openstack-ansible:
  Confirmed
Status in openstack-ansible liberty series:
  Fix Released
Status in openstack-ansible trunk series:
  Confirmed

Bug description:
  After updating from Kilo to Liberty, all networks created in the Kilo
  release are unusable in Liberty.

  If I try to spawn a new instance with a port on a network created in
  Kilo, I get the following error in nova-compute.log:

  BadRequest: Port does not have port security binding.

  I guess this has to do with the new port_security extension in the ml2
  plugin.

  Using neutron DVR on Ubuntu 14.04.3!

  This is my first bug report, so sorry in advance for any mistakes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1509312/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544801] Re: Constant tracebacks with eventlet 0.18.2

2016-02-12 Thread Jesse Keating
** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544801

Title:
  Constant tracebacks with eventlet 0.18.2

Status in Cinder:
  New
Status in Glance:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Kilo builds with eventlet 0.18.2 have a constant traceback:

  2016-02-12 00:47:01.126 3936 DEBUG nova.api.openstack.wsgi [-] Calling method 
'>' _process_stack 
/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:783
  2016-02-12 00:47:01.129 3936 INFO nova.osapi_compute.wsgi.server [-] 
Traceback (most recent call last):
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/wsgi.py",
 line 501, in handle_one_response
  write(b''.join(towrite))
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/wsgi.py",
 line 442, in write
  _writelines(towrite)
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/support/__init__.py",
 line 62, in safe_writelines
  writeall(fd, item)
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/support/__init__.py",
 line 67, in writeall
  fd.write(buf)
File "/usr/lib/python2.7/socket.py", line 324, in write
  self.flush()
File "/usr/lib/python2.7/socket.py", line 303, in flush
  self._sock.sendall(view[write_offset:write_offset+buffer_size])
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/greenio/base.py",
 line 383, in sendall
  tail = self.send(data, flags)
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/greenio/base.py",
 line 377, in send
  return self._send_loop(self.fd.send, data, flags)
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/greenio/base.py",
 line 364, in _send_loop
  return send_method(data, *args)
  error: [Errno 104] Connection reset by peer

  This is happening across nova, neutron, glance, etc..

  Dropping back to eventlet < 0.18.0 works.
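
  Until a fixed eventlet release is available, the quickest mitigation is to
  pin the package below 0.18.0; a minimal sketch (where the pin lives -
  requirements, constraints or the virtualenv itself - is deployment
  specific):

    # downgrade an affected environment in place
    pip install 'eventlet<0.18.0'

    # or carry the cap in the project's requirements/constraints file
    eventlet<0.18.0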

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1544801/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544801] [NEW] Constant tracebacks with eventlet 0.18.2

2016-02-11 Thread Jesse Keating
Public bug reported:

Kilo builds with eventlet 0.18.2 have a constant traceback:

2016-02-12 00:47:01.126 3936 DEBUG nova.api.openstack.wsgi [-] Calling method 
'>' _process_stack 
/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:783
2016-02-12 00:47:01.129 3936 INFO nova.osapi_compute.wsgi.server [-] Traceback 
(most recent call last):
  File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/wsgi.py",
 line 501, in handle_one_response
write(b''.join(towrite))
  File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/wsgi.py",
 line 442, in write
_writelines(towrite)
  File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/support/__init__.py",
 line 62, in safe_writelines
writeall(fd, item)
  File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/support/__init__.py",
 line 67, in writeall
fd.write(buf)
  File "/usr/lib/python2.7/socket.py", line 324, in write
self.flush()
  File "/usr/lib/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
  File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/greenio/base.py",
 line 383, in sendall
tail = self.send(data, flags)
  File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/greenio/base.py",
 line 377, in send
return self._send_loop(self.fd.send, data, flags)
  File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/greenio/base.py",
 line 364, in _send_loop
return send_method(data, *args)
error: [Errno 104] Connection reset by peer

This is happening across nova, neutron, glance, etc..

Dropping back to eventlet < 0.18.0 works.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1544801

Title:
  Constant tracebacks with eventlet 0.18.2

Status in OpenStack Compute (nova):
  New

Bug description:
  Kilo builds with eventlet 0.18.2 have a constant traceback:

  2016-02-12 00:47:01.126 3936 DEBUG nova.api.openstack.wsgi [-] Calling method 
'>' _process_stack 
/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:783
  2016-02-12 00:47:01.129 3936 INFO nova.osapi_compute.wsgi.server [-] 
Traceback (most recent call last):
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/wsgi.py",
 line 501, in handle_one_response
  write(b''.join(towrite))
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/wsgi.py",
 line 442, in write
  _writelines(towrite)
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/support/__init__.py",
 line 62, in safe_writelines
  writeall(fd, item)
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/support/__init__.py",
 line 67, in writeall
  fd.write(buf)
File "/usr/lib/python2.7/socket.py", line 324, in write
  self.flush()
File "/usr/lib/python2.7/socket.py", line 303, in flush
  self._sock.sendall(view[write_offset:write_offset+buffer_size])
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/greenio/base.py",
 line 383, in sendall
  tail = self.send(data, flags)
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/greenio/base.py",
 line 377, in send
  return self._send_loop(self.fd.send, data, flags)
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/greenio/base.py",
 line 364, in _send_loop
  return send_method(data, *args)
  error: [Errno 104] Connection reset by peer

  This is happening across nova, neutron, glance, etc..

  Dropping back to eventlet < 0.18.0 works.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1544801/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443421] Re: After VM migration, tunnels not getting removed with L2Pop ON, when using multiple api_workers in controller

2016-02-04 Thread Jesse Pretorius
** Changed in: openstack-ansible/trunk
Milestone: mitaka-2 => mitaka-3

** No longer affects: openstack-ansible/liberty

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443421

Title:
  After VM migration, tunnels not getting removed with L2Pop ON, when
  using multiple api_workers in controller

Status in neutron:
  In Progress
Status in openstack-ansible:
  Confirmed
Status in openstack-ansible trunk series:
  Confirmed

Bug description:
  Using multiple api_workers, when the "nova live-migration" command is run:
  a) tunnel flows and tunnel ports are always removed from the old host;
  b) other hosts sometimes do not receive the port-delete notification from
  the old host, so on those hosts the tunnel ports and flood flows (everything
  except the unicast flow for the port) for the old host still remain.
  The root cause and fix are explained in comments 12 and 13.

  According to the bug reporter, this bug can also be reproduced as follows.
  Setup: Neutron server HA (3 nodes).
  Hypervisor – ESX with OVSvApp
  L2 pop is on the network node and off on the OVSvApp.

  Condition:
  Enable L2 pop on the OVS agent and set api_workers=10 on the controller.

  On the network node, the VXLAN tunnel is created with ESX2 and the tunnel
  with ESX1 is not removed after migrating the VM from ESX1 to ESX2.

  Attaching the logs of servers and agent logs.

  stack@OSC-NS1:/opt/stack/logs/screen$ sudo ovs-vsctl show
  662d03fb-c784-498e-927c-410aa6788455
  Bridge br-ex
  Port phy-br-ex
  Interface phy-br-ex
  type: patch
  options: {peer=int-br-ex}
  Port "eth2"
  Interface "eth2"
  Port br-ex
  Interface br-ex
  type: internal
  Bridge br-tun
  Port patch-int
  Interface patch-int
  type: patch
  options: {peer=patch-tun}
  Port "vxlan-6447007a"
  Interface "vxlan-6447007a"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.122"} 
This should have been deleted after MIGRATION.
  Port "vxlan-64470082"
  Interface "vxlan-64470082"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.130"}
  Port br-tun
  Interface br-tun
  type: internal
  Port "vxlan-6447002a"
  Interface "vxlan-6447002a"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.42"}
  Bridge "br-eth1"
  Port "br-eth1"
  Interface "br-eth1"
  type: internal
  Port "phy-br-eth1"
  Interface "phy-br-eth1"
  type: patch
  options: {peer="int-br-eth1"}
  Bridge br-int
  fail_mode: secure
  Port patch-tun
  Interface patch-tun
  type: patch
  options: {peer=patch-int}
  Port "int-br-eth1"
  Interface "int-br-eth1"
  type: patch
  options: {peer="phy-br-eth1"}
  Port br-int
  Interface br-int
  type: internal
  Port int-br-ex
  Interface int-br-ex
  type: patch
  options: {peer=phy-br-ex}
  Port "tap9515e5b3-ec"
  tag: 11
  Interface "tap9515e5b3-ec"
  type: internal
  ovs_version: "2.0.2"
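
  As an interim manual cleanup (not a fix for the missing notification
  itself), the stale tunnel port shown above can be removed by hand; the
  bridge and port names come from the ovs-vsctl output above, and the agent
  will re-create the port if it is still required:

    sudo ovs-vsctl del-port br-tun vxlan-6447007a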

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1443421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521793] Re: l3ha with L2pop disabled breaks neutron

2016-01-06 Thread Jesse Pretorius
** Changed in: openstack-ansible/liberty
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521793

Title:
  l3ha with L2pop disabled breaks neutron

Status in neutron:
  New
Status in openstack-ansible:
  Fix Released
Status in openstack-ansible liberty series:
  Fix Released
Status in openstack-ansible trunk series:
  Fix Released

Bug description:
  When using L3HA, the system will fail to build a VM if L2 population is
  disabled under most circumstances. To resolve this issue, the variable
  `neutron_l2_population` should be set to "true" by default. The original
  train of thought was that we would use L3HA by default; however, due to
  current differences in the neutron Linux bridge agent, that is impossible
  and will require additional upstream work within neutron. In the near term
  we should re-enable L2 pop by default and effectively disable the built-in
  L3HA.

  This issue was reported in the channel by @Ville Vuorinen (IRC: kysse),
  see 
http://eavesdrop.openstack.org/irclogs/%23openstack-ansible/%23openstack-ansible.2015-12-01.log.html
 from 18:47 onwards.
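
  As a stop-gap, deployers can pin the value explicitly instead of relying
  on the role default; a minimal override sketch, assuming the usual
  OpenStack-Ansible /etc/openstack_deploy/user_variables.yml location and a
  re-run of the os-neutron-install.yml playbook afterwards:

    # /etc/openstack_deploy/user_variables.yml
    neutron_l2_population: True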

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521793/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523031] Re: Neighbor table entry for router missing with Linux bridge + L3HA + L2 population

2015-12-08 Thread Jesse Pretorius
** Changed in: openstack-ansible
   Status: New => Confirmed

** Changed in: openstack-ansible
   Importance: Undecided => Medium

** Also affects: openstack-ansible/liberty
   Importance: Undecided
   Status: New

** Also affects: openstack-ansible/trunk
   Importance: Medium
   Status: Confirmed

** Changed in: openstack-ansible/liberty
   Status: New => Confirmed

** Changed in: openstack-ansible/liberty
   Importance: Undecided => Medium

** Changed in: openstack-ansible/liberty
Milestone: None => 12.1.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523031

Title:
  Neighbor table entry for router missing with Linux bridge + L3HA + L2
  population

Status in neutron:
  New
Status in openstack-ansible:
  Confirmed
Status in openstack-ansible liberty series:
  Confirmed
Status in openstack-ansible trunk series:
  Confirmed

Bug description:
  Using Linux bridge, L3HA, and L2 population on Liberty, the neighbor
  table (ip neigh show) on the compute node lacks an entry for the
  router IP address. For example, using a router with 172.16.1.1 and
  instance with 172.16.1.4:

  On the node with the L3 agent containing the router:

  # ip neigh show
  169.254.192.1 dev vxlan-476 lladdr fa:16:3e:9b:d5:6f PERMANENT
  10.4.30.11 dev eth1 lladdr bc:76:4e:04:3c:59 REACHABLE
  10.4.11.11 dev eth0 lladdr bc:76:4e:04:d0:75 REACHABLE
  172.16.1.4 dev vxlan-466 lladdr fa:16:3e:ad:44:df PERMANENT
  10.4.30.31 dev eth1 lladdr bc:76:4e:05:1f:5f STALE
  10.4.11.31 dev eth0 lladdr bc:76:4e:04:38:4c STALE
  10.4.30.1 dev eth1 lladdr bc:76:4e:04:41:62 STALE
  10.4.11.1 dev eth0 lladdr bc:76:4e:04:77:72 DELAY
  172.16.1.2 dev vxlan-466 lladdr fa:16:3e:a0:83:a5 PERMANENT

  # ip netns exec qrouter-1521b4b1-7de9-4ed0-be19-69ac02ccf520 ping 172.16.1.4
  PING 172.16.1.4 (172.16.1.4) 56(84) bytes of data.
  ...

  On the node with the instance:

  # ip neigh show
  172.16.1.2 dev vxlan-466 lladdr fa:16:3e:a0:83:a5 PERMANENT
  10.4.11.1 dev eth0 lladdr bc:76:4e:04:77:72 DELAY
  172.16.1.3 dev vxlan-466 lladdr fa:16:3e:41:3b:de PERMANENT
  10.4.30.1 dev eth1 lladdr bc:76:4e:04:41:62 STALE
  10.4.11.12 dev eth0 lladdr bc:76:4e:05:e2:f8 STALE
  10.4.30.12 dev eth1 lladdr bc:76:4e:05:76:d1 STALE
  10.4.11.41 dev eth0 lladdr bc:76:4e:05:e3:6a STALE
  10.4.11.11 dev eth0 lladdr bc:76:4e:04:d0:75 REACHABLE
  10.4.30.11 dev eth1 lladdr bc:76:4e:04:3c:59 STALE

  172.16.1.2 and 172.16.1.3 belong to DHCP agents. I can access the
  instance from within both DHCP agent namespaces.

  On the node with the instance, I manually add a neighbor entry for the
  router:

  # ip neigh replace 172.16.1.1 lladdr fa:16:3e:0a:d4:39 dev vxlan-466
  nud permanent

  On the node with the L3 agent containing the router:

  # ip netns exec qrouter-1521b4b1-7de9-4ed0-be19-69ac02ccf520 ping 172.16.1.4
  64 bytes from 172.16.1.4: icmp_seq=1 ttl=64 time=2.21 ms
  64 bytes from 172.16.1.4: icmp_seq=2 ttl=64 time=45.9 ms
  64 bytes from 172.16.1.4: icmp_seq=3 ttl=64 time=1.23 ms
  64 bytes from 172.16.1.4: icmp_seq=4 ttl=64 time=0.975 ms
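
  For completeness, a quick way to inspect what l2population has actually
  programmed on the compute node (commands assume iproute2 and the same
  vxlan-466 interface as above):

  # bridge fdb show dev vxlan-466 | grep permanent
  # ip neigh show dev vxlan-466 nud permanent

  The router's MAC/IP pair should appear in the second listing; in this bug
  it does not, which is why the manual "ip neigh replace" above is needed.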

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1523031/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443421] Re: After VM migration, tunnels not getting removed with L2Pop ON, when using multiple api_workers in controller

2015-12-08 Thread Jesse Pretorius
** Changed in: openstack-ansible
   Status: New => Confirmed

** Changed in: openstack-ansible
   Importance: Undecided => High

** Changed in: openstack-ansible
Milestone: None => mitaka-2

** Also affects: openstack-ansible/liberty
   Importance: Undecided
   Status: New

** Also affects: openstack-ansible/trunk
   Importance: High
   Status: Confirmed

** Changed in: openstack-ansible/liberty
   Importance: Undecided => High

** Changed in: openstack-ansible/liberty
   Status: New => Confirmed

** Changed in: openstack-ansible/liberty
Milestone: None => 12.1.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443421

Title:
  After VM migration, tunnels not getting removed with L2Pop ON, when
  using multiple api_workers in controller

Status in neutron:
  In Progress
Status in openstack-ansible:
  Confirmed
Status in openstack-ansible liberty series:
  Confirmed
Status in openstack-ansible trunk series:
  Confirmed

Bug description:

  Setup: Neutron server HA (3 nodes).
  Hypervisor – ESX with OVSvApp
  L2 pop is on the network node and off on the OVSvApp.

  Condition:
  Enable L2 pop on the OVS agent and set api_workers=10 on the controller.

  On the network node, the VXLAN tunnel is created with ESX2 and the tunnel
  with ESX1 is not removed after migrating the VM from ESX1 to ESX2.

  Attaching the logs of servers and agent logs.

  stack@OSC-NS1:/opt/stack/logs/screen$ sudo ovs-vsctl show
  662d03fb-c784-498e-927c-410aa6788455
  Bridge br-ex
  Port phy-br-ex
  Interface phy-br-ex
  type: patch
  options: {peer=int-br-ex}
  Port "eth2"
  Interface "eth2"
  Port br-ex
  Interface br-ex
  type: internal
  Bridge br-tun
  Port patch-int
  Interface patch-int
  type: patch
  options: {peer=patch-tun}
  Port "vxlan-6447007a"
  Interface "vxlan-6447007a"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.122"} 
This should have been deleted after MIGRATION.
  Port "vxlan-64470082"
  Interface "vxlan-64470082"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.130"}
  Port br-tun
  Interface br-tun
  type: internal
  Port "vxlan-6447002a"
  Interface "vxlan-6447002a"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.42"}
  Bridge "br-eth1"
  Port "br-eth1"
  Interface "br-eth1"
  type: internal
  Port "phy-br-eth1"
  Interface "phy-br-eth1"
  type: patch
  options: {peer="int-br-eth1"}
  Bridge br-int
  fail_mode: secure
  Port patch-tun
  Interface patch-tun
  type: patch
  options: {peer=patch-int}
  Port "int-br-eth1"
  Interface "int-br-eth1"
  type: patch
  options: {peer="phy-br-eth1"}
  Port br-int
  Interface br-int
  type: internal
  Port int-br-ex
  Interface int-br-ex
  type: patch
  options: {peer=phy-br-ex}
  Port "tap9515e5b3-ec"
  tag: 11
  Interface "tap9515e5b3-ec"
  type: internal
  ovs_version: "2.0.2"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1443421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523031] Re: Neighbor table entry for router missing with Linux bridge + L3HA + L2 population

2015-12-07 Thread Jesse Pretorius
** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

** Changed in: openstack-ansible
Milestone: None => mitaka-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523031

Title:
  Neighbor table entry for router missing with Linux bridge + L3HA + L2
  population

Status in neutron:
  New
Status in openstack-ansible:
  New

Bug description:
  Using Linux bridge, L3HA, and L2 population on Liberty, the neighbor
  table (ip neigh show) on the compute node lacks an entry for the
  router IP address. For example, using a router with 172.16.1.1 and
  instance with 172.16.1.4:

  On the node with the L3 agent containing the router:

  # ip neigh show
  169.254.192.1 dev vxlan-476 lladdr fa:16:3e:9b:d5:6f PERMANENT
  10.4.30.11 dev eth1 lladdr bc:76:4e:04:3c:59 REACHABLE
  10.4.11.11 dev eth0 lladdr bc:76:4e:04:d0:75 REACHABLE
  172.16.1.4 dev vxlan-466 lladdr fa:16:3e:ad:44:df PERMANENT
  10.4.30.31 dev eth1 lladdr bc:76:4e:05:1f:5f STALE
  10.4.11.31 dev eth0 lladdr bc:76:4e:04:38:4c STALE
  10.4.30.1 dev eth1 lladdr bc:76:4e:04:41:62 STALE
  10.4.11.1 dev eth0 lladdr bc:76:4e:04:77:72 DELAY
  172.16.1.2 dev vxlan-466 lladdr fa:16:3e:a0:83:a5 PERMANENT

  # ip netns exec qrouter-1521b4b1-7de9-4ed0-be19-69ac02ccf520 ping 172.16.1.4
  PING 172.16.1.4 (172.16.1.4) 56(84) bytes of data.
  ...

  On the node with the instance:

  # ip neigh show
  172.16.1.2 dev vxlan-466 lladdr fa:16:3e:a0:83:a5 PERMANENT
  10.4.11.1 dev eth0 lladdr bc:76:4e:04:77:72 DELAY
  172.16.1.3 dev vxlan-466 lladdr fa:16:3e:41:3b:de PERMANENT
  10.4.30.1 dev eth1 lladdr bc:76:4e:04:41:62 STALE
  10.4.11.12 dev eth0 lladdr bc:76:4e:05:e2:f8 STALE
  10.4.30.12 dev eth1 lladdr bc:76:4e:05:76:d1 STALE
  10.4.11.41 dev eth0 lladdr bc:76:4e:05:e3:6a STALE
  10.4.11.11 dev eth0 lladdr bc:76:4e:04:d0:75 REACHABLE
  10.4.30.11 dev eth1 lladdr bc:76:4e:04:3c:59 STALE

  172.16.1.2 and 172.16.1.3 belong to DHCP agents. I can access the
  instance from within both DHCP agent namespaces.

  On the node with the instance, I manually add a neighbor entry for the
  router:

  # ip neigh replace 172.16.1.1 lladdr fa:16:3e:0a:d4:39 dev vxlan-466
  nud permanent

  On the node with the L3 agent containing the router:

  # ip netns exec qrouter-1521b4b1-7de9-4ed0-be19-69ac02ccf520 ping 172.16.1.4
  64 bytes from 172.16.1.4: icmp_seq=1 ttl=64 time=2.21 ms
  64 bytes from 172.16.1.4: icmp_seq=2 ttl=64 time=45.9 ms
  64 bytes from 172.16.1.4: icmp_seq=3 ttl=64 time=1.23 ms
  64 bytes from 172.16.1.4: icmp_seq=4 ttl=64 time=0.975 ms

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1523031/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521793] Re: Master/Liberty w/ L2pop disabled breaks neutron

2015-12-04 Thread Jesse Pretorius
** Also affects: neutron
   Importance: Undecided
   Status: New

** Summary changed:

- Master/Liberty w/ L2pop disabled breaks neutron
+ l3ha with L2pop disabled breaks neutron

** Description changed:

  when using l3ha the system will fail to build a vm if L2 population is
  disabled under most circumstances. To resolve this issue the variable
  `neutron_l2_population` should be set to "true" by default. The current
  train of thought was that we'd use L3HA by default however due to
  current differences in the neutron linux bridge agent it seems that is
  impossible and will require additional upstream work within neutron. In
  the near term we should re-enable l2 pop by default and effectively
  disable the built in L3HA.
  
- This issue was reported in the channel by @Ville Vuorinen (IRC: kysse)
+ This issue was reported in the channel by @Ville Vuorinen (IRC: kysse),
+ see 
http://eavesdrop.openstack.org/irclogs/%23openstack-ansible/%23openstack-ansible.2015-12-01.log.html
 from 18:47 onwards.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521793

Title:
  l3ha with L2pop disabled breaks neutron

Status in neutron:
  New
Status in openstack-ansible:
  In Progress
Status in openstack-ansible liberty series:
  Triaged
Status in openstack-ansible trunk series:
  In Progress

Bug description:
  When using L3HA, the system will fail to build a VM if L2 population is
  disabled under most circumstances. To resolve this issue, the variable
  `neutron_l2_population` should be set to "true" by default. The original
  train of thought was that we would use L3HA by default; however, due to
  current differences in the neutron Linux bridge agent, that is impossible
  and will require additional upstream work within neutron. In the near term
  we should re-enable L2 pop by default and effectively disable the built-in
  L3HA.

  This issue was reported in the channel by @Ville Vuorinen (IRC: kysse),
  see 
http://eavesdrop.openstack.org/irclogs/%23openstack-ansible/%23openstack-ansible.2015-12-01.log.html
 from 18:47 onwards.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521793/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443421] Re: After VM migration, tunnels not getting removed with L2Pop ON, when using multiple api_workers in controller

2015-12-04 Thread Jesse Pretorius
** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443421

Title:
  After VM migration, tunnels not getting removed with L2Pop ON, when
  using multiple api_workers in controller

Status in neutron:
  In Progress
Status in openstack-ansible:
  New

Bug description:

  Setup: Neutron server HA (3 nodes).
  Hypervisor – ESX with OVSvApp
  L2 pop is on the network node and off on the OVSvApp.

  Condition:
  Enable L2 pop on the OVS agent and set api_workers=10 on the controller.

  On the network node, the VXLAN tunnel is created with ESX2 and the tunnel
  with ESX1 is not removed after migrating the VM from ESX1 to ESX2.

  Attaching the logs of servers and agent logs.

  stack@OSC-NS1:/opt/stack/logs/screen$ sudo ovs-vsctl show
  662d03fb-c784-498e-927c-410aa6788455
  Bridge br-ex
  Port phy-br-ex
  Interface phy-br-ex
  type: patch
  options: {peer=int-br-ex}
  Port "eth2"
  Interface "eth2"
  Port br-ex
  Interface br-ex
  type: internal
  Bridge br-tun
  Port patch-int
  Interface patch-int
  type: patch
  options: {peer=patch-tun}
  Port "vxlan-6447007a"
  Interface "vxlan-6447007a"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.122"} 
This should have been deleted after MIGRATION.
  Port "vxlan-64470082"
  Interface "vxlan-64470082"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.130"}
  Port br-tun
  Interface br-tun
  type: internal
  Port "vxlan-6447002a"
  Interface "vxlan-6447002a"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.42"}
  Bridge "br-eth1"
  Port "br-eth1"
  Interface "br-eth1"
  type: internal
  Port "phy-br-eth1"
  Interface "phy-br-eth1"
  type: patch
  options: {peer="int-br-eth1"}
  Bridge br-int
  fail_mode: secure
  Port patch-tun
  Interface patch-tun
  type: patch
  options: {peer=patch-int}
  Port "int-br-eth1"
  Interface "int-br-eth1"
  type: patch
  options: {peer="phy-br-eth1"}
  Port br-int
  Interface br-int
  type: internal
  Port int-br-ex
  Interface int-br-ex
  type: patch
  options: {peer=phy-br-ex}
  Port "tap9515e5b3-ec"
  tag: 11
  Interface "tap9515e5b3-ec"
  type: internal
  ovs_version: "2.0.2"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1443421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505326] Re: Unit tests failing with requests 2.8.0

2015-12-03 Thread Jesse Pretorius
** Changed in: openstack-ansible/liberty
   Status: Fix Committed => Fix Released

** Changed in: openstack-ansible/trunk
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1505326

Title:
  Unit tests failing with requests 2.8.0

Status in OpenStack Identity (keystone):
  Invalid
Status in openstack-ansible:
  Fix Released
Status in openstack-ansible kilo series:
  Fix Released
Status in openstack-ansible liberty series:
  Fix Released
Status in openstack-ansible trunk series:
  Fix Released

Bug description:
  
  When the tests are run, a bunch of them fail:

  pkg_resources.ContextualVersionConflict: (requests 2.8.0
  (/home/jenkins/workspace/gate-keystone-
  python27/.tox/py27/lib/python2.7/site-packages),
  Requirement.parse('requests!=2.8.0,>=2.5.2'), set(['oslo.policy']))

  global-requirements has requests!=2.8.0, but something must be
  pulling in that version of requests!
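
  A quick way to confirm the broken version and to pin around it while the
  transitive dependency is tracked down (the tox virtualenv path below
  matches the traceback and is only an assumption for other environments):

    # confirm the version installed in the tox virtualenv
    .tox/py27/bin/pip show requests

    # force the range from global-requirements back into the venv
    .tox/py27/bin/pip install 'requests!=2.8.0,>=2.5.2'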

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1505326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476770] Re: _translate_from_glance fails with "AttributeError: id" in grenade

2015-12-03 Thread Jesse Pretorius
** Changed in: openstack-ansible/liberty
   Status: Fix Committed => Fix Released

** Changed in: openstack-ansible/trunk
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1476770

Title:
  _translate_from_glance fails with "AttributeError: id" in grenade

Status in Glance:
  Invalid
Status in keystonemiddleware:
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in openstack-ansible kilo series:
  Fix Released
Status in openstack-ansible liberty series:
  Fix Released
Status in openstack-ansible trunk series:
  Fix Released
Status in OpenStack-Gate:
  Fix Committed
Status in oslo.vmware:
  Fix Released
Status in python-glanceclient:
  In Progress

Bug description:
  http://logs.openstack.org/28/204128/2/check/gate-grenade-
  dsvm/80607dc/logs/old/screen-n-api.txt.gz?level=TRACE

  2015-07-21 17:05:37.447 ERROR nova.api.openstack 
[req-9854210d-b9fc-47ff-9f00-1a0270266e2a tempest-ServersTestJSON-34270062 
tempest-ServersTestJSON-745803609] Caught error: id
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack Traceback (most recent 
call last):
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/__init__.py", line 125, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
req.get_response(self.application)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1317, in send
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1281, in 
call_application
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 634, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._call_app(env, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 554, in _call_app
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 136, in 
__call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 756, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack content_type, 
body, accept)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 821, in _process_stack
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 911, in dispatch
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2

[Yahoo-eng-team] [Bug 1505326] Re: Unit tests failing with requests 2.8.0

2015-12-03 Thread Jesse Pretorius
** Changed in: openstack-ansible/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1505326

Title:
  Unit tests failing with requests 2.8.0

Status in OpenStack Identity (keystone):
  Invalid
Status in openstack-ansible:
  Fix Committed
Status in openstack-ansible kilo series:
  Fix Released
Status in openstack-ansible liberty series:
  Fix Committed
Status in openstack-ansible trunk series:
  Fix Committed

Bug description:
  
  When the tests are run, a bunch of them fail:

  pkg_resources.ContextualVersionConflict: (requests 2.8.0
  (/home/jenkins/workspace/gate-keystone-
  python27/.tox/py27/lib/python2.7/site-packages),
  Requirement.parse('requests!=2.8.0,>=2.5.2'), set(['oslo.policy']))

  global-requirements has requests!=2.8.0, but something must be
  pulling in that version of requests!

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1505326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476770] Re: _translate_from_glance fails with "AttributeError: id" in grenade

2015-12-03 Thread Jesse Pretorius
** Changed in: openstack-ansible/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1476770

Title:
  _translate_from_glance fails with "AttributeError: id" in grenade

Status in Glance:
  Invalid
Status in keystonemiddleware:
  Fix Released
Status in openstack-ansible:
  Fix Committed
Status in openstack-ansible kilo series:
  Fix Released
Status in openstack-ansible liberty series:
  Fix Committed
Status in openstack-ansible trunk series:
  Fix Committed
Status in OpenStack-Gate:
  Fix Committed
Status in oslo.vmware:
  Fix Released
Status in python-glanceclient:
  In Progress

Bug description:
  http://logs.openstack.org/28/204128/2/check/gate-grenade-
  dsvm/80607dc/logs/old/screen-n-api.txt.gz?level=TRACE

  2015-07-21 17:05:37.447 ERROR nova.api.openstack 
[req-9854210d-b9fc-47ff-9f00-1a0270266e2a tempest-ServersTestJSON-34270062 
tempest-ServersTestJSON-745803609] Caught error: id
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack Traceback (most recent 
call last):
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/__init__.py", line 125, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
req.get_response(self.application)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1317, in send
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1281, in 
call_application
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 634, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._call_app(env, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 554, in _call_app
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 136, in 
__call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 756, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack content_type, 
body, accept)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 821, in _process_stack
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 911, in dispatch
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/no

[Yahoo-eng-team] [Bug 1515485] Re: Heat CFN signals do not pass authorization

2015-12-03 Thread Jesse Pretorius
** Changed in: openstack-ansible/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1515485

Title:
  Heat CFN signals do not pass authorization

Status in OpenStack Identity (keystone):
  Invalid
Status in OpenStack Identity (keystone) kilo series:
  Fix Committed
Status in openstack-ansible:
  Invalid
Status in openstack-ansible kilo series:
  Fix Released
Status in openstack-ansible liberty series:
  Invalid
Status in openstack-ansible trunk series:
  Invalid

Bug description:
  Note that this bug applies to the Kilo release. Master does not appear
  to have this problem. I did not test liberty yet.

  Heat templates that rely on CFN signals time out because the API calls
  that execute these signals return 403 errors. Heat signals, on the
  other hand, do work.

  The problem was reported to me by Alex Cantu. I have verified it on
  his multinode lab and have also reproduced on my own single-node
  system hosted on a public cloud server.  I suspect liberty/master
  avoided the problem after Jesse and I reworked the Heat configuration
  to use Keystone v3 the last day before the L release.

  Example template, which can be executed in an AIO after running the
  tempest playbook:

  heat_template_version: 2013-05-23

  resources:
    wait_condition:
  type: AWS::CloudFormation::WaitCondition
  properties:
    Handle: { get_resource: wait_handle }
    Count: 1
    Timeout: 600

    wait_handle:
  type: AWS::CloudFormation::WaitConditionHandle

    my_instance:
  type: OS::Nova::Server
  properties:
    image: cirros
    flavor: m1.tiny
    networks:
  - network: "private"
    user_data_format: RAW
    user_data:
  str_replace:
    template: |
  #!/bin/sh
  echo "wc_notify"
  curl -H "Content-Type:" -X PUT wc_notify --data-binary 
'{"status": "SUCCESS"}'
    params:
  wc_notify: { get_resource: wait_handle }

  This template should end very quickly, as it starts a cirros instance
  that just sends a signal back to heat. But instead, it times out. The
  user data script dumps the signal URL to the console log; if you then
  try to send the signal manually you will get a 403. The original 403
  can also be seen in the heat-api-cfn.log file. Here is the log
  snippet:

  2015-11-12 05:13:34.491 1862 INFO heat.api.aws.ec2token [-] Checking AWS 
credentials..
  2015-11-12 05:13:34.492 1862 INFO heat.api.aws.ec2token [-] AWS credentials 
found, checking against keystone.
  2015-11-12 05:13:34.493 1862 INFO heat.api.aws.ec2token [-] Authenticating 
with http://172.29.236.100:5000/v3/ec2tokens
  2015-11-12 05:13:34.533 1862 INFO heat.api.aws.ec2token [-] AWS 
authentication failure.
  2015-11-12 05:13:34.534 1862 INFO eventlet.wsgi.server [-] 
10.0.3.181,172.29.236.100 - - [12/Nov/2015 05:13:34] "PUT 
/v1/waitcondition/arn%3Aopenstack%3Aheat%3A%3A683acadf4d04489f8e991b44014e6fc1%3Astacks%2Fwc1%2Faa4083b6-ce6c-411f-9df9-d059abacf40c%2Fresources%2Fwait_handle?Timestamp=2015-11-12T05%3A12%3A27Z&SignatureMethod=HmacSHA256&AWSAccessKeyId=65657d1021e24e49ba4fb6f217ca4a22&SignatureVersion=2&Signature=aCG%2FO04MNLzSlf5gIBGw1hMcC7bQzB3pZXVKzXLLNSo%3D
 HTTP/1.1" 403 301 0.043961

  For reference, the curl command to trigger the signal is: curl -H
  "Content-Type:" -X PUT "

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1515485/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505326] Re: Unit tests failing with requests 2.8.0

2015-11-27 Thread Jesse Pretorius
** Also affects: openstack-ansible/liberty
   Importance: Undecided
   Status: New

** Also affects: openstack-ansible/kilo
   Importance: Undecided
   Status: New

** Also affects: openstack-ansible/trunk
   Importance: High
 Assignee: Jesse Pretorius (jesse-pretorius)
   Status: In Progress

** Changed in: openstack-ansible/trunk
Milestone: 12.1.0 => mitaka-1

** Changed in: openstack-ansible/liberty
Milestone: None => 12.0.2

** Changed in: openstack-ansible/kilo
Milestone: None => 11.2.6

** Changed in: openstack-ansible/liberty
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: openstack-ansible/kilo
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: openstack-ansible/liberty
   Importance: Undecided => High

** Changed in: openstack-ansible/kilo
   Importance: Undecided => High

** Changed in: openstack-ansible/liberty
   Status: New => In Progress

** Changed in: openstack-ansible/kilo
   Status: New => Fix Committed

** Changed in: openstack-ansible/trunk
   Status: In Progress => Fix Committed

** Changed in: openstack-ansible/liberty
   Status: In Progress => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1505326

Title:
  Unit tests failing with requests 2.8.0

Status in OpenStack Identity (keystone):
  Invalid
Status in openstack-ansible:
  Fix Committed
Status in openstack-ansible kilo series:
  Fix Committed
Status in openstack-ansible liberty series:
  Fix Committed
Status in openstack-ansible trunk series:
  Fix Committed

Bug description:
  
  When the tests are run, a bunch of them fail:

  pkg_resources.ContextualVersionConflict: (requests 2.8.0
  (/home/jenkins/workspace/gate-keystone-
  python27/.tox/py27/lib/python2.7/site-packages),
  Requirement.parse('requests!=2.8.0,>=2.5.2'), set(['oslo.policy']))

  global-requirements has requests!=2.8.0, but something must be
  pulling in that version of requests!

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1505326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476770] Re: _translate_from_glance fails with "AttributeError: id" in grenade

2015-11-27 Thread Jesse Pretorius
With keystonemiddleware 1.5.3 tagged, this will be included
automatically with the next tagged releases of OpenStack-Ansible.

Verified in Kilo with a recent build result:
http://logs.openstack.org/57/248557/2/gate/gate-openstack-ansible-dsvm-commit/de13bfd/console.html#_2015-11-26_15_30_45_573

** Also affects: openstack-ansible/kilo
   Importance: Undecided
   Status: New

** Also affects: openstack-ansible/liberty
   Importance: Undecided
   Status: New

** Also affects: openstack-ansible/trunk
   Importance: High
 Assignee: Jesse Pretorius (jesse-pretorius)
   Status: In Progress

** Changed in: openstack-ansible/trunk
Milestone: 12.1.0 => mitaka-1

** Changed in: openstack-ansible/liberty
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: openstack-ansible/kilo
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: openstack-ansible/kilo
Milestone: None => 11.2.6

** Changed in: openstack-ansible/kilo
   Status: New => Fix Committed

** Changed in: openstack-ansible/kilo
   Importance: Undecided => High

** Changed in: openstack-ansible/liberty
   Importance: Undecided => High

** Changed in: openstack-ansible/liberty
Milestone: None => 12.0.2

** Changed in: openstack-ansible/liberty
   Status: New => Fix Committed

** Changed in: openstack-ansible/trunk
   Status: In Progress => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1476770

Title:
  _translate_from_glance fails with "AttributeError: id" in grenade

Status in Glance:
  Invalid
Status in keystonemiddleware:
  Fix Released
Status in openstack-ansible:
  Fix Committed
Status in openstack-ansible kilo series:
  Fix Committed
Status in openstack-ansible liberty series:
  Fix Committed
Status in openstack-ansible trunk series:
  Fix Committed
Status in OpenStack-Gate:
  Fix Committed
Status in oslo.vmware:
  Fix Released
Status in python-glanceclient:
  In Progress

Bug description:
  http://logs.openstack.org/28/204128/2/check/gate-grenade-
  dsvm/80607dc/logs/old/screen-n-api.txt.gz?level=TRACE

  2015-07-21 17:05:37.447 ERROR nova.api.openstack 
[req-9854210d-b9fc-47ff-9f00-1a0270266e2a tempest-ServersTestJSON-34270062 
tempest-ServersTestJSON-745803609] Caught error: id
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack Traceback (most recent 
call last):
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/__init__.py", line 125, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
req.get_response(self.application)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1317, in send
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1281, in 
call_application
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 634, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._call_app(env, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 554, in _call_app
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 136, in 
__call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib

[Yahoo-eng-team] [Bug 1505295] Re: Tox tests failing with AttributeError

2015-11-23 Thread Jesse Pretorius
** Changed in: openstack-ansible
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505295

Title:
  Tox tests failing with AttributeError

Status in Cinder:
  Fix Committed
Status in Designate:
  Fix Committed
Status in neutron:
  Fix Committed
Status in OpenStack Compute (nova):
  In Progress
Status in openstack-ansible:
  Fix Released

Bug description:
  Currently all tests run in Jenkins python27 and python34 are failing
  with an AttributeError, saying that "'str' has no attribute 'DEALER'",
  as well as an AssertionError on assert TRANSPORT is not None in
  cinder/rpc.py.

  An example of the full traceback of the failure can be found here:

   http://paste.openstack.org/show/476040/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1505295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515485] Re: Heat CFN signals do not pass authorization

2015-11-20 Thread Jesse Pretorius
** Also affects: openstack-ansible/trunk
   Importance: Medium
   Status: Triaged

** Also affects: openstack-ansible/kilo
   Importance: Undecided
   Status: New

** Also affects: openstack-ansible/liberty
   Importance: Undecided
   Status: New

** Changed in: openstack-ansible/trunk
   Status: Triaged => Invalid

** Changed in: openstack-ansible/liberty
   Status: New => Invalid

** Changed in: openstack-ansible/kilo
   Status: New => In Progress

** Changed in: openstack-ansible/kilo
   Status: In Progress => Triaged

** Changed in: openstack-ansible/kilo
Milestone: None => 11.2.5

** Changed in: openstack-ansible/kilo
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: openstack-ansible/trunk
Milestone: 11.2.5 => None

** Changed in: openstack-ansible/kilo
Milestone: 11.2.5 => 11.2.6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1515485

Title:
  Heat CFN signals do not pass authorization

Status in OpenStack Identity (keystone):
  Invalid
Status in OpenStack Identity (keystone) kilo series:
  Incomplete
Status in openstack-ansible:
  Invalid
Status in openstack-ansible kilo series:
  Triaged
Status in openstack-ansible liberty series:
  Invalid
Status in openstack-ansible trunk series:
  Invalid

Bug description:
  Note that this bug applies to the Kilo release. Master does not appear
  to have this problem. I did not test liberty yet.

  Heat templates that rely on CFN signals time out because the API calls
  that execute these signals return 403 errors. Heat signals, on the
  other hand, do work.

  The problem was reported to me by Alex Cantu. I have verified it on
  his multinode lab and have also reproduced on my own single-node
  system hosted on a public cloud server.  I suspect liberty/master
  avoided the problem after Jesse and I reworked the Heat configuration
  to use Keystone v3 the last day before the L release.

  Example template, which can be executed in an AIO after running the
  tempest playbook:

  heat_template_version: 2013-05-23

  resources:
    wait_condition:
  type: AWS::CloudFormation::WaitCondition
  properties:
    Handle: { get_resource: wait_handle }
    Count: 1
    Timeout: 600

    wait_handle:
  type: AWS::CloudFormation::WaitConditionHandle

    my_instance:
  type: OS::Nova::Server
  properties:
    image: cirros
    flavor: m1.tiny
    networks:
  - network: "private"
    user_data_format: RAW
    user_data:
  str_replace:
    template: |
  #!/bin/sh
  echo "wc_notify"
  curl -H "Content-Type:" -X PUT wc_notify --data-binary 
'{"status": "SUCCESS"}'
    params:
  wc_notify: { get_resource: wait_handle }

  This template should end very quickly, as it starts a cirros instance
  that just sends a signal back to heat. But instead, it times out. The
  user data script dumps the signal URL to the console log; if you then
  try to send the signal manually you will get a 403. The original 403
  can also be seen in the heat-api-cfn.log file. Here is the log
  snippet:

  2015-11-12 05:13:34.491 1862 INFO heat.api.aws.ec2token [-] Checking AWS 
credentials..
  2015-11-12 05:13:34.492 1862 INFO heat.api.aws.ec2token [-] AWS credentials 
found, checking against keystone.
  2015-11-12 05:13:34.493 1862 INFO heat.api.aws.ec2token [-] Authenticating 
with http://172.29.236.100:5000/v3/ec2tokens
  2015-11-12 05:13:34.533 1862 INFO heat.api.aws.ec2token [-] AWS 
authentication failure.
  2015-11-12 05:13:34.534 1862 INFO eventlet.wsgi.server [-] 
10.0.3.181,172.29.236.100 - - [12/Nov/2015 05:13:34] "PUT 
/v1/waitcondition/arn%3Aopenstack%3Aheat%3A%3A683acadf4d04489f8e991b44014e6fc1%3Astacks%2Fwc1%2Faa4083b6-ce6c-411f-9df9-d059abacf40c%2Fresources%2Fwait_handle?Timestamp=2015-11-12T05%3A12%3A27Z&SignatureMethod=HmacSHA256&AWSAccessKeyId=65657d1021e24e49ba4fb6f217ca4a22&SignatureVersion=2&Signature=aCG%2FO04MNLzSlf5gIBGw1hMcC7bQzB3pZXVKzXLLNSo%3D
 HTTP/1.1" 403 301 0.043961

  For reference, the curl command to trigger the signal is: curl -H
  "Content-Type:" -X PUT "

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1515485/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505153] Re: gates broken by WebOb 1.5 release

2015-10-23 Thread Jesse Pretorius
** Changed in: openstack-ansible
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1505153

Title:
  gates broken by WebOb 1.5 release

Status in Cinder:
  Fix Released
Status in Manila:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in openstack-ansible:
  Fix Released

Bug description:
  Hi,

  WebOb 1.5 was released yesterday. test_misc of Cinder starts failing
  with this release. I wrote this simple fix which should be enough to
  repair it:

  https://review.openstack.org/233528
  "Fix test_misc for WebOb 1.5"

   class ConvertedException(webob.exc.WSGIHTTPException):
  -    def __init__(self, code=0, title="", explanation=""):
  +    def __init__(self, code=500, title="", explanation=""):

  Victor

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1505153/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505677] Re: oslo.versionedobjects 0.11.0 causing KeyError: 'objects' in nova-conductor log

2015-10-23 Thread Jesse Pretorius
** Changed in: openstack-ansible
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1505677

Title:
  oslo.versionedobjects 0.11.0 causing KeyError: 'objects' in nova-
  conductor log

Status in OpenStack Compute (nova):
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in oslo.versionedobjects:
  Fix Released

Bug description:
  In nova-conductor we're seeing the following error for stable/liberty:

  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
129, in _do_dispatch
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/conductor/manager.py", line 937, 
in object_class_action_versions
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher context, 
objname, objmethod, object_versions, args, kwargs)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/conductor/manager.py", line 477, 
in object_class_action_versions
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher if 
isinstance(result, nova_object.NovaObject) else result)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
535, in obj_to_primitive
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
version_manifest)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
507, in obj_make_compatible_from_manifest
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher return 
self.obj_make_compatible(primitive, target_version)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/objects/instance.py", line 1325, 
in obj_make_compatible
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
target_version)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/objects/base.py", line 262, in 
obj_make_compatible
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
rel_versions = self.obj_relationships['objects']
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher KeyError: 
'objects'
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher

  More details here:
  
http://logs.openstack.org/56/233756/8/check/gate-openstack-ansible-dsvm-commit/879f745/logs/aio1_nova_conductor_container-5ec67682/nova-conductor.log
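
  A stripped-down, illustrative reproduction of the failing lookup (these
  are not the real nova classes): obj_make_compatible assumes every class
  has an 'objects' entry in obj_relationships, and the list class involved
  here does not, hence the KeyError.

  # Illustrative only: the unconditional lookup and a defensive variant.
  class FakeInstanceList(object):
      obj_relationships = {'fault': [('1.0', '1.0')]}   # no 'objects' entry

      def obj_make_compatible(self, primitive, target_version):
          # stable/liberty-style lookup: raises KeyError: 'objects'
          return self.obj_relationships['objects']

      def obj_make_compatible_safe(self, primitive, target_version):
          # defensive variant: tolerate classes without an 'objects' entry
          return self.obj_relationships.get('objects', [])

  obj = FakeInstanceList()
  print(obj.obj_make_compatible_safe({}, '1.0'))   # []
  obj.obj_make_compatible({}, '1.0')               # KeyError: 'objects'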

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1505677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505677] [NEW] oslo.versionedobjects 0.11.0 causing KeyError: 'objects' in nova-conductor log

2015-10-13 Thread Jesse Pretorius
Public bug reported:

In nova-conductor we're seeing the following error for stable/liberty:

2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
executor_callback)
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
129, in _do_dispatch
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/conductor/manager.py", line 937, 
in object_class_action_versions
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher context, 
objname, objmethod, object_versions, args, kwargs)
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/conductor/manager.py", line 477, 
in object_class_action_versions
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher if 
isinstance(result, nova_object.NovaObject) else result)
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
535, in obj_to_primitive
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
version_manifest)
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
507, in obj_make_compatible_from_manifest
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher return 
self.obj_make_compatible(primitive, target_version)
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/objects/instance.py", line 1325, 
in obj_make_compatible
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
target_version)
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/objects/base.py", line 262, in 
obj_make_compatible
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
rel_versions = self.obj_relationships['objects']
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher KeyError: 
'objects'
2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher

More details here:
http://logs.openstack.org/56/233756/8/check/gate-openstack-ansible-dsvm-commit/879f745/logs/aio1_nova_conductor_container-5ec67682/nova-conductor.log

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: openstack-ansible
 Importance: Critical
 Assignee: Jesse Pretorius (jesse-pretorius)
 Status: Confirmed

** Affects: oslo.versionedobjects
 Importance: Undecided
 Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: oslo.versionedobjects
   Importance: Undecided
   Status: New

** Description changed:

  In nova-conductor we're seeing the following error for stable/liberty:
  
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
129, in _do_dispatch
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/conductor/manager.py", line 937, 
in object_class_action_versions
  2015-10-12 23:29:14.413 2243 ERROR oslo_messaging.rpc.dispatcher context, 
objname, objmethod, 

[Yahoo-eng-team] [Bug 1505153] Re: gates broken by WebOb 1.5 release

2015-10-12 Thread Jesse Pretorius
** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

** Changed in: openstack-ansible
   Status: New => In Progress

** Changed in: openstack-ansible
   Importance: Undecided => High

** Changed in: openstack-ansible
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: openstack-ansible
Milestone: None => 12.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1505153

Title:
  gates broken by WebOb 1.5 release

Status in Cinder:
  Fix Committed
Status in Manila:
  In Progress
Status in OpenStack Compute (nova):
  In Progress
Status in openstack-ansible:
  In Progress

Bug description:
  Hi,

  WebOb 1.5 was released yesterday. test_misc of Cinder starts failing
  with this release. I wrote this simple fix which should be enough to
  repair it:

  https://review.openstack.org/233528
  "Fix test_misc for WebOb 1.5"

   class ConvertedException(webob.exc.WSGIHTTPException):
  -    def __init__(self, code=0, title="", explanation=""):
  +    def __init__(self, code=500, title="", explanation=""):

  Victor

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1505153/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505326] Re: Unit tests failing with requests 2.8.0

2015-10-12 Thread Jesse Pretorius
** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

** Changed in: openstack-ansible
   Status: New => In Progress

** Changed in: openstack-ansible
   Importance: Undecided => High

** Changed in: openstack-ansible
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: openstack-ansible
Milestone: None => 12.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1505326

Title:
  Unit tests failing with requests 2.8.0

Status in Keystone:
  Confirmed
Status in openstack-ansible:
  In Progress

Bug description:
  
  When the tests are run, a bunch of them fail:

  pkg_resources.ContextualVersionConflict: (requests 2.8.0
  (/home/jenkins/workspace/gate-keystone-
  python27/.tox/py27/lib/python2.7/site-packages),
  Requirement.parse('requests!=2.8.0,>=2.5.2'), set(['oslo.policy']))

  global-requirements has requests!=2.8.0 , but something must be
  pulling in that version of requests!
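
  For reference, a small illustration of the constraint that trips the
  ContextualVersionConflict: the requirement string carried by oslo.policy
  excludes exactly the requests version the tox venv ended up with.

  import pkg_resources

  # The requirement string quoted in the error above.
  req = pkg_resources.Requirement.parse('requests!=2.8.0,>=2.5.2')
  print('2.8.0' in req)   # False: 2.8.0 is explicitly excluded
  print('2.8.1' in req)   # True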

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1505326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476770] Re: _translate_from_glance fails with "AttributeError: id" in grenade

2015-10-12 Thread Jesse Pretorius
** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

** Changed in: openstack-ansible
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: openstack-ansible
   Importance: Undecided => High

** Changed in: openstack-ansible
   Status: New => In Progress

** Changed in: openstack-ansible
Milestone: None => 12.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1476770

Title:
  _translate_from_glance fails with "AttributeError: id" in grenade

Status in Glance:
  Invalid
Status in openstack-ansible:
  In Progress
Status in OpenStack-Gate:
  Fix Committed
Status in oslo.vmware:
  Fix Released
Status in python-glanceclient:
  New

Bug description:
  http://logs.openstack.org/28/204128/2/check/gate-grenade-
  dsvm/80607dc/logs/old/screen-n-api.txt.gz?level=TRACE

  2015-07-21 17:05:37.447 ERROR nova.api.openstack 
[req-9854210d-b9fc-47ff-9f00-1a0270266e2a tempest-ServersTestJSON-34270062 
tempest-ServersTestJSON-745803609] Caught error: id
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack Traceback (most recent 
call last):
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/__init__.py", line 125, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
req.get_response(self.application)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1317, in send
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1281, in 
call_application
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 634, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._call_app(env, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 554, in _call_app
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 136, in 
__call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 756, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack content_type, 
body, accept)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 821, in _process_stack
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/a

[Yahoo-eng-team] [Bug 1505295] Re: Tox tests failing with AttributeError

2015-10-12 Thread Jesse Pretorius
** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

** Changed in: openstack-ansible
Milestone: None => 12.0.0

** Changed in: openstack-ansible
   Importance: Undecided => High

** Changed in: openstack-ansible
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: openstack-ansible
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505295

Title:
  Tox tests failing with AttributeError

Status in Cinder:
  New
Status in neutron:
  New
Status in openstack-ansible:
  In Progress
Status in oslo.messaging:
  New

Bug description:
  Currently all tests run in Jenkins python27 and python34 are failing
  with an AttributeError, saying that "'str' has no attribute 'DEALER'",
  as well as an AssertionError on assert TRANSPORT is not None in
  cinder/rpc.py.

  An example of the full traceback of the failure can be found here:

   http://paste.openstack.org/show/476040/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1505295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503423] [NEW] Build failures: device_id assigned as int instead of expected string

2015-10-06 Thread Jesse J. Cook
 in xenapi_request
 result = _parse_result(getattr(self, methodname)(*full_params))
   File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/XenAPI.py", 
line 203, in _parse_result
 raise Failure(result['ErrorDescription'])
 Failure: ['FIELD_TYPE_ERROR', 'platform']
 
... Terminating instance
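
The FIELD_TYPE_ERROR above is XenAPI rejecting a non-string value in the
VM record's 'platform' dict. A hedged sketch of the coercion that avoids
it (the helper and field handling here are illustrative, not the actual
nova code):

# Sketch only: stringify the metadata-supplied device_id before VM.create,
# since XenAPI expects every 'platform' value to be a string.
def platform_with_device_id(platform, device_id):
    platform = dict(platform)
    if device_id is not None:
        platform['device_id'] = str(device_id)   # 2 -> "2", "0002" stays "0002"
    return platform

print(platform_with_device_id({'viridian': 'true'}, 2))
# {'viridian': 'true', 'device_id': '2'}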

** Affects: nova
 Importance: Undecided
 Assignee: Jesse J. Cook (jesse-j-cook)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1503423

Title:
  Build failures: device_id assigned as int instead of expected string

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  new metadata variable: xenapi_device_id integer, expected string:

  
  ... Failed to spawn, rolling back
   Traceback (most recent call last):
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/nova/virt/xenapi/vmops.py",
 line 663, in _spawn
   kernel_file, ramdisk_file)
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/nova/virt/xenapi/vmops.py",
 line 214, in inner
   rv = f(*args, **kwargs)
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/nova/virt/xenapi/vmops.py",
 line 585, in create_vm_record_step
   ramdisk_file, image_meta, rescue)
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/nova/virt/xenapi/vmops.py",
 line 756, in _create_vm_record
   use_pv_kernel, device_id)
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/nova/virt/xenapi/vm_utils.py",
 line 333, in create_vm
   vm_ref = session.VM.create(rec)
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/nova/virt/xenapi/client/objects.py",
 line 62, in 
   return lambda *params: self._call_method(method_name, *params)
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/nova/virt/xenapi/client/objects.py",
 line 59, in _call_method
   return self.session.call_xenapi(call, *args)
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/nova/virt/xenapi/client/session.py",
 line 212, in call_xenapi
   return session.xenapi_request(method, args)
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/XenAPI.py", 
line 133, in xenapi_request
   result = _parse_result(getattr(self, methodname)(*full_params))
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/XenAPI.py", 
line 203, in _parse_result
   raise Failure(result['ErrorDescription'])
   Failure: ['FIELD_TYPE_ERROR', 'platform']

  ... Instance failed to spawn
   Traceback (most recent call last):
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/nova/compute/manager.py",
 line 2208, in _build_resources
   yield resources
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/nova/compute/manager.py",
 line 2061, in _build_and_run_instance
   block_device_info=block_device_info)
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/nova/virt/xenapi/driver.py",
 line 201, in spawn
   admin_password, network_info, block_device_info)
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/nova/virt/xenapi/vmops.py",
 line 510, in spawn
   network_info, block_device_info, name_label, rescue)
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/nova/virt/xenapi/vmops.py",
 line 681, in _spawn
   undo_mgr.rollback_and_reraise(msg=msg, instance=instance)
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/nova/utils.py",
 line 936, in rollback_and_reraise
   self._rollback()
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 195, in __exit__
   six.reraise(self.type_, self.value, self.tb)
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/nova/virt/xenapi/vmops.py",
 line 663, in _spawn
   kernel_file, ramdisk_file)
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/nova/virt/xenapi/vmops.py",
 line 214, in inner
   rv = f(*args, **kwargs)
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/nova/virt/xenapi/vmops.py",
 line 585, in create_vm_record_step
   ramdisk_file, image_meta, rescue)
 File 
"/opt/rackstack/rackstack.381.15/nova/lib/python2.7/site-packages/nova/virt/xenapi/vmops.py",
 line 756, in _create_vm_record
   use_pv_kernel, device_id)
 File 
"/opt/racks

[Yahoo-eng-team] [Bug 1495755] [NEW] test_show_policy_failed fails (depending on another test to create db?)

2015-09-14 Thread Jesse J. Cook
Public bug reported:

The test fails sporadically or consistently depending on your setup. I could
make it pass or fail consistently depending on the level at which the test
was executed. I expect the test depends on something done outside the test
that can occur out of order (i.e. db setup / table creation):

(dev)[~/src/rackspace/openstack/nova] ./run_tests.sh -d 
nova.tests.unit.api.openstack.compute.test_quota_classes.QuotaClassesPolicyEnforcementV21.test_show_policy_failed
Tests running...
nova/db/sqlalchemy/api.py:156: OsloDBDeprecationWarning: EngineFacade is 
deprecated; please use oslo.db.sqlalchemy.enginefacade
  retry_interval=conf_group.retry_interval)
==
ERROR: 
nova.tests.unit.api.openstack.compute.test_quota_classes.QuotaClassesPolicyEnforcementV21.test_show_policy_failed
--
Empty attachments:
  pythonlogging:''

Traceback (most recent call last):
  File "nova/tests/unit/api/openstack/compute/test_quota_classes.py", line 170, 
in setUp
extension_info=ext_info)
  File "nova/api/openstack/compute/quota_classes.py", line 45, in __init__
self.supported_quotas = QUOTAS.resources
  File "nova/quota.py", line 1473, in resources
self._register_resources_by_flavor(ctxt)
  File "nova/quota.py", line 1183, in _register_resources_by_flavor
flavors = db.flavor_get_all(ctxt, inactive=True)
  File "nova/db/api.py", line 1455, in flavor_get_all
sort_dir=sort_dir, limit=limit, marker=marker)
  File "nova/db/sqlalchemy/api.py", line 230, in wrapper
return f(*args, **kwargs)
  File "nova/db/sqlalchemy/api.py", line 4835, in flavor_get_all
inst_types = query.all()
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 2399, in all
return list(self)
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 2516, in __iter__
return self._execute_and_instances(context)
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 2531, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 914, in execute
return meth(self, multiparams, params)
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
 line 323, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1010, in _execute_clauseelement
compiled_sql, distilled_params
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1146, in _execute_context
context)
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1337, in _handle_dbapi_exception
util.raise_from_cause(newraise, exc_info)
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py",
 line 199, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1139, in _execute_context
context)
  File 
"/home/jesse/src/rackspace/openstack/nova/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py",
 line 450, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: 
instance_types [SQL: u'SELECT instance_types.created_at AS 
instance_types_created_at, instance_types.updated_at AS 
instance_types_updated_at, instance_types.deleted_at AS 
instance_types_deleted_at, instance_types.deleted AS instance_types_deleted, 
instance_types.id AS instance_types_id, instance_types.name AS 
instance_types_name, instance_types.memory_mb AS instance_types_memory_mb, 
instance_types.vcpus AS instance_types_vcpus, instance_types.root_gb AS 
instance_types_root_gb, instance_types.ephemeral_gb AS 
instance_types_ephemeral_gb, instance_types.flavorid AS 
instance_types_flavorid, instance_types.swap AS instance_types_swap, 
instance_types.rxtx_factor AS instance_types_rxtx_factor, 
instance_types.vcpu_weight AS instance_types_vcpu_weight, 
instance_types.disabled AS instance_types_disabled, instance_types.is_public AS 
instance_types_is_public, instance_type_e
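
A self-contained illustration of the suspected ordering dependency, using
plain sqlite3 and a made-up schema rather than the nova fixtures: the
failing query only works if some earlier test happened to create the table.

import sqlite3

conn = sqlite3.connect(":memory:")

def test_that_creates_tables():
    conn.execute("CREATE TABLE instance_types (id INTEGER PRIMARY KEY, name TEXT)")

def test_show_policy_failed():
    # raises sqlite3.OperationalError: no such table: instance_types
    # unless the other "test" ran first
    return conn.execute("SELECT * FROM instance_types").fetchall()

test_that_creates_tables()        # comment this out to reproduce the failure
print(test_show_policy_failed())  # []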

[Yahoo-eng-team] [Bug 1488912] [NEW] Neutron: security-group-list missing parameters

2015-08-26 Thread Jesse Klint
Public bug reported:

Issue: the option "--tenant-id" is not mentioned in the usage for
`neutron security-group-list`. This can lead to confusion about how to
scope the command to a tenant, even though its description reads "List
security groups that belong to a given tenant."


# neutron help security-group-list
usage: neutron security-group-list [-h] [-f {csv,table}] [-c COLUMN]
   [--max-width ]
   [--quote {all,minimal,none,nonnumeric}]
   [--request-format {json,xml}] [-D]
   [-F FIELD] [-P SIZE] [--sort-key FIELD]
   [--sort-dir {asc,desc}]

List security groups that belong to a given tenant.

optional arguments:
  -h, --helpshow this help message and exit
  --request-format {json,xml}
The XML or JSON request format.
  -D, --show-detailsShow detailed information.
  -F FIELD, --field FIELD
Specify the field(s) to be returned by server. You can
repeat this option.
  -P SIZE, --page-size SIZE
Specify retrieve unit of each request, then split one
request to several requests.
  --sort-key FIELD  Sorts the list by the specified fields in the
specified directions. You can repeat this option, but
you must specify an equal number of sort_dir and
sort_key values. Extra sort_dir options are ignored.
Missing sort_dir options use the default asc value.
  --sort-dir {asc,desc}
Sorts the list in the specified direction. You can
repeat this option.

output formatters:
  output formatter options

  -f {csv,table}, --format {csv,table}
the output format, defaults to table
  -c COLUMN, --column COLUMN
specify the column(s) to include, can be repeated

table formatter:
  --max-width 
Maximum display width, 0 to disable

CSV Formatter:
  --quote {all,minimal,none,nonnumeric}
when to include quotes, defaults to nonnumeric

** Affects: python-neutronclient
 Importance: Undecided
 Status: New

** Project changed: bagpipe-l2 => neutron

** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488912

Title:
  Neutron: security-group-list missing parameters

Status in python-neutronclient:
  New

Bug description:
  Issue: the option "--tenant-id" is not mentioned in the usage for
  `neutron security-group-list`. This can lead to confusion on how to
  provide a tenant per "List security groups that belong to a given
  tenant."

  
  # neutron help security-group-list
  usage: neutron security-group-list [-h] [-f {csv,table}] [-c COLUMN]
 [--max-width ]
 [--quote {all,minimal,none,nonnumeric}]
 [--request-format {json,xml}] [-D]
 [-F FIELD] [-P SIZE] [--sort-key FIELD]
 [--sort-dir {asc,desc}]

  List security groups that belong to a given tenant.

  optional arguments:
-h, --helpshow this help message and exit
--request-format {json,xml}
  The XML or JSON request format.
-D, --show-detailsShow detailed information.
-F FIELD, --field FIELD
  Specify the field(s) to be returned by server. You can
  repeat this option.
-P SIZE, --page-size SIZE
  Specify retrieve unit of each request, then split one
  request to several requests.
--sort-key FIELD  Sorts the list by the specified fields in the
  specified directions. You can repeat this option, but
  you must specify an equal number of sort_dir and
  sort_key values. Extra sort_dir options are ignored.
  Missing sort_dir options use the default asc value.
--sort-dir {asc,desc}
  Sorts the list in the specified direction. You can
  repeat this option.

  output formatters:
output formatter options

-f {csv,table}, --format {csv,table}
  the output format, defaults to table
-c COLUMN, --column COLUMN
  specify the column(s) to include, can be repeated

  table formatter:
--max-width 
  Maximum display width, 0 to disable

  CSV Formatter:
--quote {all,minimal,none,nonnumeric}
  when to include quotes, defaults to nonnumeric

To manage notifi

[Yahoo-eng-team] [Bug 1482403] [NEW] bdm for swap created with no regard for previous

2015-08-06 Thread Jesse J. Cook
Public bug reported:

Observed nova passing the correct device number to xenserver during vbd
creation, but the nova mapping created in the database was for the wrong
device.

A bdm for swap, based on the flavor, is created on
https://github.com/openstack/nova/blob/master/nova/compute/api.py#L725,
but the swap is actually created on
https://github.com/openstack/nova/blob/master/nova/virt/xenapi/vmops.py#L746,
again based on flavor, but with no regard for the previously created bdm
for swap.

Perhaps we should implement default_device_names_for_instance() in xen
driver (as referenced on
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1631).
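
A purely illustrative sketch of the mismatch (none of these names are real
nova APIs): the database mapping and the hypervisor attach each derive the
swap device from the flavor on their own, so they can disagree.

# Illustrative only: two independent derivations of the swap device.
def record_swap_bdm(flavor, device_name):
    # compute.api path: writes a block_device_mapping row for swap
    return {'device_name': device_name, 'guest_format': 'swap',
            'size': flavor['swap']}

def attach_swap_vbd(flavor, userdevice):
    # xenapi vmops path: builds the VBD from the flavor again, with its own
    # device counter, ignoring the row recorded above
    return {'userdevice': userdevice, 'size': flavor['swap']}

flavor = {'swap': 1024}
print(record_swap_bdm(flavor, '/dev/xvdb'))   # DB says swap lives at xvdb
print(attach_swap_vbd(flavor, '2'))           # hypervisor attaches userdevice 2 (xvdc)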

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1482403

Title:
  bdm for swap created with no regard for previous

Status in OpenStack Compute (nova):
  New

Bug description:
  Observed nova pass the correct device number during vbd creation to
  xenserver, but the nova mapping created in the database was for the
  wrong device.

  A bdm for swap, based on the flavor, is created on
  https://github.com/openstack/nova/blob/master/nova/compute/api.py#L725,
  but the swap is actually created on
  https://github.com/openstack/nova/blob/master/nova/virt/xenapi/vmops.py#L746,
  again based on flavor, but with no regard for the previously created
  bdm for swap.

  Perhaps we should implement default_device_names_for_instance() in xen
  driver (as referenced on
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1631).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1482403/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440762] Re: Rebuild an instance with attached volume fails

2015-07-28 Thread Jesse Pretorius
** No longer affects: openstack-ansible

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1440762

Title:
  Rebuild an instance with attached volume fails

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  In Progress
Status in OpenStack Compute (nova) kilo series:
  In Progress

Bug description:
  When trying to rebuild an instance with attached volume, it fails with
  the errors:

  2015-02-04 08:41:27.477 22000 TRACE oslo.messaging.rpc.dispatcher 
libvirtError: Failed to terminate process 22913 with SIGKILL: Device or 
resource busy
  2015-02-04 08:41:27.477 22000 TRACE oslo.messaging.rpc.dispatcher
  <180>Feb 4 08:43:12 node-2 nova-compute Periodic task is updating the host 
stats, it is trying to get disk info for instance-0003, but the backing 
volume block device was removed by concurrent operations such as resize. Error: 
No volume Block Device Mapping at path: 
/dev/disk/by-path/ip-192.168.0.4:3260-iscsi-iqn.2010-10.org.openstack:volume-82ba5653-3e07-4f0f-b44d-a946f4dedde9-lun-1
  <182>Feb 4 08:43:13 node-2 nova-compute VM Stopped (Lifecycle Event)

  The full log of rebuild process is here:
  http://paste.openstack.org/show/166892/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1440762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470635] Re: endpoints added with v3 are not visible with v2

2015-07-09 Thread Jesse Pretorius
I can confirm that this is a problem, and I agree that endpoints created
using the v3 api really should be available via the v2 api.

** Changed in: keystone
   Status: New => Confirmed

** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1470635

Title:
  endpoints added with v3 are not visible with v2

Status in OpenStack Identity (Keystone):
  Confirmed
Status in Ansible playbooks for deploying OpenStack:
  New
Status in Puppet module for Keystone:
  Confirmed

Bug description:
  Create an endpoint with v3::

  # openstack --os-identity-api-version 3 [--admin credentials]
  endpoint create 

  try to list endpoints with v2::

  # openstack --os-identity-api-version 2 [--admin credentials]
  endpoint list

  nothing.

  We are in the process of trying to convert puppet-keystone to v3 with
  the goal of maintaining backwards compatibility.  That means we want
  admins/operators not to have to change any existing workflow.  This
  bug causes openstack endpoint list to return nothing, which breaks
  existing workflows and backwards compatibility.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1470635/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471289] [NEW] Fernet tokens and Federated Identities result in token scope failures

2015-07-03 Thread Jesse Pretorius
Public bug reported:

When keystone is configured to use fernet tokens and is also configured to
be an SP for an external IDP, the token data received by nova and
other services appears not to contain the right information, resulting in
errors from nova-api-os-compute such as:

Returning 400 to user: Malformed request URL: URL's project_id
'69f5cff441e04554b285d7772630dec1' doesn't match Context's project_id
'None'

When keystone is switched to use uuid tokens, then everything works as
expected.

Further debugging of the request to the nova api shows:

'HTTP_X_USER_DOMAIN_NAME': None,
'HTTP_X_DOMAIN_ID': None,
'HTTP_X_PROJECT_DOMAIN_ID': None,
'HTTP_X_ROLES': '',
'HTTP_X_TENANT_ID': None,
'HTTP_X_PROJECT_DOMAIN_NAME': None,
'HTTP_X_TENANT': None,
'HTTP_X_USER': u'S-1-5-21-2917001131-1385516553-613696311-1108',
'HTTP_X_USER_DOMAIN_ID': None,
'HTTP_X_AUTH_PROJECT_ID': '69f5cff441e04554b285d7772630dec1',
'HTTP_X_DOMAIN_NAME': None,
'HTTP_X_PROJECT_NAME': None,
'HTTP_X_PROJECT_ID': None,
'HTTP_X_USER_NAME': u'S-1-5-21-2917001131-1385516553-613696311-1108'
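
Given the empty project headers above, the request context ends up with
project_id None. A hedged, illustrative reconstruction of the comparison
that then produces the 400 (not nova's actual code):

# Illustrative only: the URL-vs-context project check that fails.
def check_project_scope(url_project_id, context_project_id):
    if url_project_id != context_project_id:
        raise ValueError(
            "Malformed request URL: URL's project_id %r doesn't match "
            "Context's project_id %r" % (url_project_id, context_project_id))

check_project_scope('69f5cff441e04554b285d7772630dec1', None)   # raises, as in the log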

Comparing the interaction of nova-api-os-compute with keystone for the
token validation between an internal user and a federated user, the
following is seen:

### federated user ###
2015-07-03 14:43:05.229 8103 DEBUG keystoneclient.session [-] REQ: curl -g -i 
--insecure -X GET https://sp.testenvironment.local:5000/v3/auth/tokens -H 
"X-Subject-Token: {SHA1}acff9b5962270fec270e693eacb4c987c335f5c5" -H 
"User-Agent: python-keystoneclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}a6a8a70ae39c533379eccd51b6d253f264d59f14" 
_http_log_request 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:193
2015-07-03 14:43:05.265 8103 DEBUG keystoneclient.session [-] RESP: [200] 
content-length: 402 x-subject-token: 
{SHA1}acff9b5962270fec270e693eacb4c987c335f5c5 vary: X-Auth-Token keep-alive: 
timeout=5, max=100 server: Apache/2.4.7 (Ubuntu) connection: Keep-Alive date: 
Fri, 03 Jul 2015 14:43:05 GMT content-type: application/json 
x-openstack-request-id: req-df3dce71-3174-4753-b883-11eb31a67d7c
RESP BODY: {"token": {"methods": ["token"], "expires_at": 
"2015-07-04T02:43:04.00Z", "extras": {}, "user": {"OS-FEDERATION": 
{"identity_provider": {"id": "adfs-idp"}, "protocol": {"id": "saml2"}, 
"groups": []}, "id": "S-1-5-21-2917001131-1385516553-613696311-1108", "name": 
"S-1-5-21-2917001131-1385516553-613696311-1108"}, "audit_ids": 
["_a6BbQ6mSoGAY2u9NN0tFA"], "issued_at": "2015-07-03T14:43:04.00Z"}}
 
### internal user ###
2015-07-03 14:28:31.875 8103 DEBUG keystoneclient.session [-] REQ: curl -g -i 
--insecure -X GET https://sp.testenvironment.local:5000/v3/auth/tokens -H 
"X-Subject-Token: {SHA1}b9c6748d65a0492faa9862fabf0a56fd5fdd255d" -H 
"User-Agent: python-keystoneclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}a6a8a70ae39c533379eccd51b6d253f264d59f14" 
_http_log_request 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:193
2015-07-03 14:28:31.949 8103 DEBUG keystoneclient.session [-] RESP: [200] 
content-length: 6691 x-subject-token: 
{SHA1}b9c6748d65a0492faa9862fabf0a56fd5fdd255d vary: X-Auth-Token keep-alive: 
timeout=5, max=100 server: Apache/2.4.7 (Ubuntu) connection: Keep-Alive date: 
Fri, 03 Jul 2015 14:28:31 GMT content-type: application/json 
x-openstack-request-id: req-6e0ed9f4-46c3-4c79-b444-f72963fc9503
RESP BODY: {"token": {"methods": ["password"], "roles": [{"id": 
"9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}], "expires_at": 
"2015-07-04T02:28:31.00Z", "project": {"domain": {"id": "default", "name": 
"Default"}, "id": "0f491c8551c04cdc804a479af0bf13ec", "name": "demo"}, 
"catalog": "", "extras": {}, "user": {"domain": {"id": "default", 
"name": "Default"}, "id": "76c8c3017c954d88a6ad69ee4cb656d6", "name": "test"}, 
"audit_ids": ["aAN_V0c6SLSI0Rm1hoScCg"], "issued_at": 
"2015-07-03T14:28:31.00Z"}}

The data structures that come back from keystone are clearly quite
different.

### configuration environment ###

Ubuntu 14.04 OS
nova==12.0.0.0a1.dev51 # commit a4f4be370be06cfc9aa3ed30d2445277e832376f from 
master branch
keystone==8.0.0.0a1.dev12 # commit a7ca13b687dd284f0980d768b11a3d1b52b4106e 
from master branch
python-keystoneclient==1.6.1.dev19 # commit 
d238cc9af4927d1092de207db978536d712af129 from master branch
python-openstackclient==1.5.1.dev11# commit 
2d6bc8f4c38dbf997e3e71119f13f0328b4a8669 from master branch
python-novaclient==2.26.1.dev25 # commit 
3c2ff0faad8c84777ffe7d9946a1bc4486116084 from master branch
keystonemiddleware==2.0.0
oslo.concurrency==2.1.0
oslo.config==1.12.1
oslo.context==0.4.0
oslo.db==1.12.0
oslo.i18n==2.0.0
oslo.log==1.5.0
oslo.messaging==1.15.0
oslo.middleware==2.3.0
oslo.policy==0.6.0
oslo.serialization==1.6.0
oslo.utils==1.6.0

Keystone is configured as a Shibboleth SP with a trust relationship
between it and an ADFS IdP.

The mapping rules are setup as follows - note that the user's
default_project_id was added in an attempt to see whe

[Yahoo-eng-team] [Bug 1450874] [NEW] Delay in network access after instance resize/migration using linuxbridge and vlan

2015-05-01 Thread Jesse Keating
Public bug reported:

Performing an instance resize migrates the instance to another host. When
the new instance gets built up, the new VIF gets plugged; however,
connectivity to the IP is delayed. arping from the neutron router gets no
response for about a minute, and the same goes for attempts to access it
via a floating IP.

If a resize is reverted and the instance goes back to the original host,
connectivity is restored almost instantly.

I've included some neutron config, let me know if more is desired.

This is on Juno.

Neutron.conf (secrets munged):
[DEFAULT]
debug = False
verbose = True

# Logging #
log_dir = /var/log/neutron

agent_down_time = 20

api_workers = 3


auth_strategy = keystone
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
allow_overlapping_ips = False

rabbit_host = 10.233.19.1
rabbit_port = 5672
rabbit_userid = openstack
rabbit_password = MUNGE
rpc_backend = neutron.openstack.common.rpc.impl_kombu

bind_host = 0.0.0.0
bind_port = 9696

api_paste_config = api-paste.ini

control_exchange = neutron

notification_driver = neutron.openstack.common.notifier.no_op_notifier

notification_topics = notifications

lock_path = $state_path/lock

#  neutron nova interactions ==
notify_nova_on_port_data_changes = True
notify_nova_on_port_status_changes = True
nova_url = https://bbg-staging-01.openstack.blueboxgrid.com:8777/v2
nova_region_name = RegionOne
nova_admin_username = neutron
nova_admin_tenant_id = MUNGE
nova_admin_password = MUNGE
nova_admin_auth_url = https://bbg-staging-01.openstack.blueboxgrid.com:5001/v2.0
nova_ca_certificates_file = /etc/ssl/certs/ca-certificates.crt

[QUOTAS]

[DEFAULT_SERVICETYPE]

[SECURITYGROUP]

[AGENT]
report_interval = 4

[keystone_authtoken]
identity_uri = https://bbg-staging-01.openstack.blueboxgrid.com:35358
auth_uri = https://bbg-staging-01.openstack.blueboxgrid.com:5001/v2.0
admin_tenant_name = service
admin_user = neutron
admin_password = MUNGE
signing_dir = /var/cache/neutron/api
cafile = /etc/ssl/certs/ca-certificates.crt

[DATABASE]
sqlalchemy_pool_size = 60

l3_agent.ini:
[DEFAULT]
debug = False

state_path = /var/lib/neutron

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

auth_url = https://bbg-staging-01.openstack.blueboxgrid.com:35358/v2.0
admin_tenant_name = service
admin_user = neutron
admin_password = MUNGE
metadata_ip = bbg-staging-01.openstack.blueboxgrid.com
use_namespaces = True
external_network_bridge =

[AGENT]
root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450874

Title:
  Delay in network access after instance resize/migration using
  linuxbridge and vlan

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Performing an instance resize which migrates the instance to another
  host. When the new instance gets built up, the new VIF gets plugged,
  however connectivity to the IP is delayed. arping from the neutron
  router gets no response for about a minute. Same with attempts to
  access via a floating IP.

  If a resize is reverted and the instance goes back to the original
  host, connectivity is restored almost instantly.

  I've included some neutron config, let me know if more is desired.

  This is on Juno.

  Neutron.conf (secrets munged):
  [DEFAULT]
  debug = False
  verbose = True

  # Logging #
  log_dir = /var/log/neutron

  agent_down_time = 20

  api_workers = 3

  
  auth_strategy = keystone
  core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
  service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
  allow_overlapping_ips = False

  rabbit_host = 10.233.19.1
  rabbit_port = 5672
  rabbit_userid = openstack
  rabbit_password = MUNGE
  rpc_backend = neutron.openstack.common.rpc.impl_kombu

  bind_host = 0.0.0.0
  bind_port = 9696

  api_paste_config = api-paste.ini

  control_exchange = neutron

  notification_driver = neutron.openstack.common.notifier.no_op_notifier

  notification_topics = notifications

  lock_path = $state_path/lock

  #  neutron nova interactions ==
  notify_nova_on_port_data_changes = True
  notify_nova_on_port_status_changes = True
  nova_url = https://bbg-staging-01.openstack.blueboxgrid.com:8777/v2
  nova_region_name = RegionOne
  nova_admin_username = neutron
  nova_admin_tenant_id = MUNGE
  nova_admin_password = MUNGE
  nova_admin_auth_url = 
https://bbg-staging-01.openstack.blueboxgrid.com:5001/v2.0
  nova_ca_certificates_file = /etc/ssl/certs/ca-certificates.crt

  [QUOTAS]

  [DEFAULT_SERVICETYPE]

  [SECURITYGROUP]

  [AGENT]
  report_interval = 4

  [keystone_authtoken]
  identity_uri = https://bbg-staging-01.openstack.blueboxgrid.com:35358
  auth_uri = https:/

[Yahoo-eng-team] [Bug 1444767] [NEW] scrubber edge cases orphan objects and records

2015-04-15 Thread Jesse J. Cook
Public bug reported:

The scrubber can leave orphaned objects and db records in error / edge
cases. This is because of the order in which it updates the DB and object
store. Recommended solution:

For each image that has status pending_delete:
    For each image location that has status pending_delete:
        Delete the object in the object store
        If error other than object not found, continue
        Mark image location status as deleted
    If all image locations are deleted, mark image as deleted
    Else if no image locations are marked as pending_delete, change status to
    something else??? # I suppose it's possible an image_location would still be
    active or some other non-deleted status. I don't think we want to orphan the
    image_location by marking the image deleted in this case.
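
A hedged Python sketch of that ordering (db, store and NotFound here are
stand-ins, not the real glance interfaces):

# Sketch only: delete from the object store first, update the DB second.
def scrub_pending_deletes(db, store):
    for image in db.images_with_status('pending_delete'):
        for loc in image.locations_with_status('pending_delete'):
            try:
                store.delete(loc.uri)
            except store.NotFound:
                pass                # object already gone; safe to mark deleted
            except Exception:
                continue            # any other error: leave pending, retry later
            db.set_location_status(loc, 'deleted')
        if all(l.status == 'deleted' for l in image.locations):
            db.set_image_status(image, 'deleted')
        # if some locations are neither deleted nor pending_delete, the image
        # is left alone rather than marked deleted (the open question above)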

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1444767

Title:
  scrubber edge cases orphan objects and records

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The scrubber can leave orphaned objects and db records in error / edge
  cases. This is because the order in which it updates the DB and object
  store. Recommended solution:

  For each image that has status pending_delete:
  For each image location that has status pending_delete:
  Delete the object in the object store
  If error other than object not found, continue
  Mark image location status as deleted
  If all image locations are deleted, mark image as deleted
  Else if no image locations are marked as pending_delete, change status to 
something else??? # I suppose it's possible an image_location would still be 
active or some other non-deleted status. I don't think we want to orphan the 
image_location by marking the image deleted in this case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1444767/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438543] Re: wrong package name 'XStatic-Angular-Irdragndrop' in horizon/requirements.txt

2015-03-31 Thread Jesse Pretorius
** Also affects: xstatic-angular-irdragndrop
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1438543

Title:
  wrong package name 'XStatic-Angular-Irdragndrop' in
  horizon/requirements.txt

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in Xstatic Angular IrDragNDrop:
  New

Bug description:
  There's a wrong package name 'XStatic-Angular-Irdragndrop' in 
horizon/requirements.txt.
  It should be 'XStatic-Angular-lrdragndrop'.
  lowercase 'l' in 'lrdragndrop' instead of uppercase 'I' in 'Irdragndrop'.

  This causes devstack fail because there's no such package in pypi.
  
--
  2015-03-31 05:40:50.388 |   Could not find any downloads that satisfy the 
requirement XStatic-Angular-Irdragndrop>=1.0.2.1 (from horizon==2015.1.dev110)
  2015-03-31 05:40:50.388 |   Some externally hosted files were ignored as 
access to them may be unreliable (use --allow-external 
XStatic-Angular-Irdragndrop to allow).
  2015-03-31 05:40:50.704 |   No distributions at all found for 
XStatic-Angular-Irdragndrop>=1.0.2.1 (from horizon==2015.1.dev110)
  
--

  and this bug is also logged at redhat
  https://bugzilla.redhat.com/show_bug.cgi?id=1196957

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1438543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407685] Re: New eventlet library breaks nova-manage

2015-02-06 Thread Jesse Pretorius
** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

** Also affects: openstack-ansible/juno
   Importance: Undecided
   Status: New

** Also affects: openstack-ansible/icehouse
   Importance: Undecided
   Status: New

** Also affects: openstack-ansible/trunk
   Importance: Undecided
   Status: New

** Changed in: openstack-ansible/icehouse
   Importance: Undecided => High

** Changed in: openstack-ansible/juno
   Importance: Undecided => High

** Changed in: openstack-ansible/trunk
   Importance: Undecided => High

** Changed in: openstack-ansible/icehouse
Milestone: None => next

** Changed in: openstack-ansible/juno
Milestone: None => 10.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407685

Title:
  New eventlet library breaks nova-manage

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in Ansible playbooks for deploying OpenStack:
  New
Status in openstack-ansible icehouse series:
  New
Status in openstack-ansible juno series:
  New
Status in openstack-ansible trunk series:
  New

Bug description:
  This only affects stable/juno and stable/icehouse, which still use the
  deprecated eventlet.util module:

  ~# nova-manage service list
  2015-01-05 13:13:11.202 29016 ERROR stevedore.extension [-] Could not load 
'file': cannot import name util
  2015-01-05 13:13:11.202 29016 ERROR stevedore.extension [-] cannot import 
name util
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension Traceback (most 
recent call last):
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/stevedore/extension.py",
 line 162, in _load_plugins
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension 
verify_requirements,
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/stevedore/extension.py",
 line 178, in _load_one_plugin
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension plugin = 
ep.load(require=verify_requirements)
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/pkg_resources/__init__.py",
 line 2306, in load
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension return 
self._load()
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/pkg_resources/__init__.py",
 line 2309, in _load
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension module = 
__import__(self.module_name, fromlist=['__name__'], level=0)
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/nova/image/download/file.py",
 line 23, in 
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension import 
nova.virt.libvirt.utils as lv_utils
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/nova/virt/libvirt/__init__.py",
 line 15, in 
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension from 
nova.virt.libvirt import driver
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 line 59, in 
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension from eventlet 
import util as eventlet_util
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension ImportError: cannot 
import name util
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension
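
  A hedged sketch of the sort of import guard that keeps the module loading
  on both old and new eventlet (the actual stable-branch fix may differ):

  # Sketch only: tolerate eventlet releases that dropped eventlet.util.
  try:
      from eventlet import util as eventlet_util   # present in older eventlet
  except ImportError:
      eventlet_util = None                          # removed in newer releases

  if eventlet_util is None:
      # fall back to code paths that do not need the deprecated module
      pass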

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407685/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380776] [NEW] Uploading and downloading VHDs via Glance XenAPI plugin doesn't always retry when it should

2014-10-13 Thread Jesse J. Cook
Public bug reported:

Encountered a situation where one glance node could not talk to the
registry, which resulted in a high number of upload_vhd errors. The Glance
XenAPI plugin doesn't properly differentiate between errors that are
permanent for one server and errors that are permanent globally. That
behavior is only reasonable when there is a single glance node; in the
case of many glance nodes, retrying against a different server is
preferable.

Ideally:

Retry until:
1. A non-retryable error is encountered (e.g. 403)
2. Max retries is reached
3. No servers left to retry (i.e. every server was dropped from the retry list 
due to a permanent error)

If the glance nodes sit behind a load balancer (proxy), this approach
could result in the LB being treated as a single glance endpoint (no
retries for server errors). Retrying on server errors without dropping
servers with server errors from the list could result in unnecessary
retries, especially in the case where there is only a single glance
node.


Additionally, if multiple errors are encountered, only the last error is logged 
as an instance error. Every error should be recorded.


Examples:

Current:

* The plugin tries to upload using 1 of n glance nodes (n > 1)
* An ephemeral (retryable) error is encountered
* The plugin retries using a different glance node
* An error related to a server fault (e.g. 500) is encountered
* The plugin does not retry
* Instance fault

Expected:

* The plugin tries to upload using 1 of n glance nodes (n > 1)
* An ephemeral (retryable) error is encountered
* Instance fault
* The plugin retries using a different glance node
* An error related to a server fault (e.g. 500) is encountered
* The plugin retries using a different glance node
* Success
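
A rough sketch of the retry policy described under "Ideally" above. The
function and exception names are hypothetical stand-ins rather than the
plugin's actual API; the point is dropping servers on server faults,
aborting on non-retryable errors, and recording every error:

class RetryableError(Exception): pass        # ephemeral; keep the server
class ServerPermanentError(Exception): pass  # e.g. 500; drop this server
class NonRetryableError(Exception): pass     # e.g. 403; give up entirely
class UploadFailed(Exception): pass

def upload_with_retries(servers, upload_to, max_retries=5):
    """Try glance nodes in turn, dropping nodes that return server faults."""
    errors = []                    # record every error, not just the last
    candidates = list(servers)
    attempts = 0
    while candidates and attempts < max_retries:
        server = candidates[attempts % len(candidates)]
        attempts += 1
        try:
            return upload_to(server)
        except NonRetryableError as exc:      # globally permanent
            errors.append((server, exc))
            raise
        except ServerPermanentError as exc:   # permanent for this server
            errors.append((server, exc))
            candidates.remove(server)
        except RetryableError as exc:         # ephemeral; retry elsewhere
            errors.append((server, exc))
    raise UploadFailed(errors)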

** Affects: nova
 Importance: Undecided
 Assignee: Jesse J. Cook (jesse-j-cook)
 Status: In Progress

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Jesse J. Cook (jesse-j-cook)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1380776

Title:
  Uploading and downloading VHDs via Glance XenAPI plugin doesn't always
  retry when it should

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Encountered a situation where one glance node could not talk to the
  registry, which resulted in a high number of upload_vhd errors. The
  Glance XenAPI plugin doesn't properly differentiate between errors that
  are permanent for a single server and errors that are permanent
  globally. This behavior is only reasonable where there is a single
  glance node; in the case of many glance nodes, retrying a different
  server is preferable.

  Ideally:

  Retry until:
  1. A non-retryable error is encountered (e.g. 403)
  2. Max retries is reached
  3. No servers left to retry (i.e. every server was dropped from the retry 
list due to a permanent error)

  If the glance nodes sit behind a load balancer (proxy), this approach
  could result in the LB being treated as a single glance endpoint (no
  retries for server errors). Retrying on server errors without dropping
  servers with server errors from the list could result in unnecessary
  retries, especially in the case where there is only a single glance
  node.

  
  Additionally, if multiple errors are encountered, only the last error is 
logged as an instance error. Every error should be recorded.

  
  Examples:

  Current:

  * The plugin tries to upload using 1 of n glance nodes (n > 1)
  * An ephemeral (retryable) error is encountered
  * The plugin retries using a different glance node
  * An error related to a server fault (e.g. 500) is encountered
  * The plugin does not retry
  * Instance fault

  Expected:

  * The plugin tries to upload using 1 of n glance nodes (n > 1)
  * An ephemeral (retryable) error is encountered
  * Instance fault
  * The plugin retries using a different glance node
  * An error related to a server fault (e.g. 500) is encountered
  * The plugin retries using a different glance node
  * Success

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1380776/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379425] [NEW] REBOOTING_HARD not in expected_task_state

2014-10-09 Thread Jesse J. Cook
Public bug reported:

On hard reboot, the task state is set to REBOOTING_HARD
(https://github.com/openstack/nova/blob/master/nova/compute/api.py#L2213).
This can be verified by rebooting twice in a row: {"conflictingRequest":
{"message": "Cannot 'reboot' while instance is in task_state
rebooting_hard", "code": 409}}. However, REBOOTING_HARD is not in
expected_task_state
(https://github.com/openstack/nova/blob/master/nova/compute/api.py#L2215).
Shouldn't it be?
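
A minimal sketch of the change the report suggests, assuming the check
keeps its current tuple form; the exact contents of the tuple in nova's
compute API may differ:

from nova.compute import task_states

# Task states from which a (hard) reboot request should be accepted; the
# report suggests REBOOTING_HARD belongs here as well.
expected_task_state = (
    None,
    task_states.REBOOTING,
    task_states.REBOOTING_HARD,  # proposed addition
)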

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1379425

Title:
  REBOOTING_HARD not in expected_task_state

Status in OpenStack Compute (Nova):
  New

Bug description:
  On hard reboot, the task state is set to REBOOTING_HARD
  (https://github.com/openstack/nova/blob/master/nova/compute/api.py#L2213).
  This can be verified by rebooting twice in a row:
  {"conflictingRequest": {"message": "Cannot 'reboot' while instance is
  in task_state rebooting_hard", "code": 409}}. However, REBOOTING_HARD
  is not in expected_task_state
  (https://github.com/openstack/nova/blob/master/nova/compute/api.py#L2215).
  Shouldn't it be?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1379425/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365637] [NEW] chunk sender does not send terminator on subprocess exception

2014-09-04 Thread Jesse J. Cook
Public bug reported:

If a subprocess exception occurs when uploading chunks via glance,
eventlet.wsgi.py will result in the following exception:  ValueError:
invalid literal for int() with base 16: ''. This happens because the
chunk sender does not send the terminator and the server reads an EOF on
client connection close instead of a properly formatted chunk.
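
A minimal sketch of the behaviour being asked for, assuming a plain
socket-style sender; the names are illustrative and this is not the actual
nova or glance code:

def send_chunked(sock, chunk_iter):
    # Always emit the zero-length terminating chunk, even if the iterator
    # (e.g. a subprocess producing the image data) raises part-way
    # through, so the server never reads a bare EOF where a chunk header
    # is expected.
    try:
        for chunk in chunk_iter:
            if chunk:
                sock.sendall(b"%x\r\n" % len(chunk))
                sock.sendall(chunk + b"\r\n")
    finally:
        sock.sendall(b"0\r\n\r\n")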

** Affects: nova
 Importance: Undecided
 Assignee: Jesse J. Cook (jesse-j-cook)
 Status: In Progress

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Jesse J. Cook (jesse-j-cook)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1365637

Title:
  chunk sender does not send terminator on subprocess exception

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  If a subprocess exception occurs when uploading chunks via glance,
  eventlet.wsgi.py will result in the following exception:  ValueError:
  invalid literal for int() with base 16: ''. This happens because the
  chunk sender does not send the terminator and the server reads an EOF
  on client connection close instead of a properly formatted chunk.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1365637/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316233] Re: image data not deleted while deleting image in v2 api

2014-09-03 Thread Jesse J. Cook
Code worked correctly in attempts to manually reproduce. CNR.

** Changed in: glance
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1316233

Title:
  image data not deleted while deleting image in v2 api

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  This is reopen for the bug 1039897 (
  https://bugs.launchpad.net/glance/+bug/1039897 ). Description as per
  the old bugs description.

  Seems the code has been changed since that got merged in and needs a
  fix again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1316233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352806] [NEW] virt: Mounting error in is_image_partitionless function when nova boot

2014-08-05 Thread Jesse
Public bug reported:

If we set
inject_partition = -1
inject_password = True
in nova.conf, nova boot will successfully inject the password but there are
errors when nova tries to resize2fs the image:

4-08-05 11:58:01.230 DEBUG nova.virt.disk.api [req-ab691c7b-
4d50-4281-a1a4-a0de3d5d000a ] [admin admin] Unable to mount image
/opt/stack/data/nova/instances/7523c41f-9d70-41eb-95e9-7b0b1daa926b/disk
with error Error mounting /opt/stack/data/nova/instances/7523c41f-9d70
-41eb-95e9-7b0b1daa926b/disk with libguestfs (mount_options: /dev/sda on
/ (options: ''): mount: you must specify the filesystem type). Cannot
resize.from (pid=27490) is_image_partitionless
/opt/stack/nova/nova/virt/disk/api.py:218

The root cause is that there is a config option, inject_partition, which
tells guestfs which partition of the image disk to use.

cfg.IntOpt('inject_partition',
default=-2,
help='The partition to inject to : '
 '-2 => disable, -1 => inspect (libguestfs only), '
 '0 => not partitioned, >0 => partition number'),

When booting, there are two locations that invoke guestfs to load the disk,
but the first does not pass the inject_partition parameter.

1. 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2316 
-> 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2704 
-> 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagebackend.py#L180
 -> 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagebackend.py#L391
 -> https://github.com/openstack/nova/blob/master/nova/virt/disk/api.py#L162 -> 
https://github.com/openstack/nova/blob/master/nova/virt/disk/api.py#L211
2. 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2798 
-> 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2610 
-> https://github.com/openstack/nova/blob/master/nova/virt/disk/api.py#L357

The first location, at
https://github.com/openstack/nova/blob/master/nova/virt/disk/api.py#L211,
just passes None for the partition:
fs = vfs.VFS.instance_for_image(image, 'qcow2', None)
which makes guestfs try to mount /dev/sda on /; this causes the mounting
error above when a cirros image is used.

To fix this issue we need to pass the partition parameter, which comes from
CONF.libvirt.inject_partition, as is already done in
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2577
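
A rough sketch of that change, assuming nova's libvirt options are already
registered and keeping the helper's current signature; the wrapper
function and variable names are illustrative:

from oslo.config import cfg
from nova.virt.disk.vfs import api as vfs

CONF = cfg.CONF

def get_image_fs(image_path):
    # Pass the operator-configured partition through instead of a
    # hard-coded None, so libguestfs can inspect the image when
    # inject_partition = -1 rather than blindly mounting /dev/sda.
    partition = CONF.libvirt.inject_partition
    if partition == -2:  # injection disabled; keep the old behaviour
        partition = None
    return vfs.VFS.instance_for_image(image_path, 'qcow2', partition)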


This bug is related to https://bugs.launchpad.net/nova/+bug/1246852
and https://bugs.launchpad.net/nova/+bug/1279858
Bug 1279858 points out that we should avoid having guestfs load the disk
twice, which makes booting slow.
A better approach is to use cloud-init instead of guestfs.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352806

Title:
  virt: Mounting error in is_image_partitionless function when nova boot

Status in OpenStack Compute (Nova):
  New

Bug description:
  If we set
  inject_partition = -1
  inject_password = True
  in nova.conf, nova boot will successfully inject the password but there
  are errors when nova tries to resize2fs the image:

  4-08-05 11:58:01.230 DEBUG nova.virt.disk.api [req-ab691c7b-
  4d50-4281-a1a4-a0de3d5d000a ] [admin admin] Unable to mount image
  /opt/stack/data/nova/instances/7523c41f-9d70-41eb-
  95e9-7b0b1daa926b/disk with error Error mounting
  /opt/stack/data/nova/instances/7523c41f-9d70-41eb-
  95e9-7b0b1daa926b/disk with libguestfs (mount_options: /dev/sda on /
  (options: ''): mount: you must specify the filesystem type). Cannot
  resize.from (pid=27490) is_image_partitionless
  /opt/stack/nova/nova/virt/disk/api.py:218

  The root cause is that there is a config option, inject_partition, which
  tells guestfs which partition of the image disk to use.

  cfg.IntOpt('inject_partition',
  default=-2,
  help='The partition to inject to : '
   '-2 => disable, -1 => inspect (libguestfs only), '
   '0 => not partitioned, >0 => partition number'),

  When booting, there are two locations that invoke guestfs to load the
  disk, but the first does not pass the inject_partition parameter.

  1. 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2316 
-> 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2704 
-> 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagebackend.py#L180
 -> 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagebackend.py#L391
 -> https://github.com/openstack/nova/blob/master/nova/virt/disk/api.py#L162 -> 
https://github.com/openstack/nova/blob/master/nova/virt/disk/api.py#L211
  2. 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2798 
-> 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.

[Yahoo-eng-team] [Bug 1352768] [NEW] virt: error in log when log exception in guestfs.py

2014-08-05 Thread Jesse
Public bug reported:

The code review https://review.openstack.org/#/c/104262/ introduces an
error in the log output because of the line at 137:
+ LOG.info(_LI("Unable to force TCG mode, libguestfs too old?"),
+ ex)

Error is:

Traceback (most recent call last):
  File "/usr/lib/python2.7/logging/__init__.py", line 851, in emit
msg = self.format(record)
  File "/opt/stack/nova/nova/openstack/common/log.py", line 685, in format
return logging.StreamHandler.format(self, record)
  File "/usr/lib/python2.7/logging/__init__.py", line 724, in format
return fmt.format(record)
  File "/opt/stack/nova/nova/openstack/common/log.py", line 649, in format
return logging.Formatter.format(self, record)
  File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
record.message = record.getMessage()
  File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Logged from file guestfs.py, line 139

To fix this issue, we just need to add a %s placeholder to the log message.
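
A minimal self-contained sketch of that fix, using the plain logging
module in place of nova's LOG and _LI wrappers:

import logging

LOG = logging.getLogger(__name__)

try:
    raise RuntimeError("libguestfs too old")  # stand-in for the real failure
except RuntimeError as ex:
    # Broken form: ex is passed with no matching placeholder, so record
    # formatting later raises TypeError.
    #   LOG.info("Unable to force TCG mode, libguestfs too old?", ex)
    # Fixed form: %s makes the arguments match the format string.
    LOG.info("Unable to force TCG mode, libguestfs too old? %s", ex)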

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352768

Title:
  virt: error in log when log exception in guestfs.py

Status in OpenStack Compute (Nova):
  New

Bug description:
  The code review https://review.openstack.org/#/c/104262/ introduces an
  error in the log output because of the line at 137:
  + LOG.info(_LI("Unable to force TCG mode, libguestfs too old?"),
  + ex)

  Error is:

  Traceback (most recent call last):
File "/usr/lib/python2.7/logging/__init__.py", line 851, in emit
  msg = self.format(record)
File "/opt/stack/nova/nova/openstack/common/log.py", line 685, in format
  return logging.StreamHandler.format(self, record)
File "/usr/lib/python2.7/logging/__init__.py", line 724, in format
  return fmt.format(record)
File "/opt/stack/nova/nova/openstack/common/log.py", line 649, in format
  return logging.Formatter.format(self, record)
File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
  record.message = record.getMessage()
File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
  msg = msg % self.args
  TypeError: not all arguments converted during string formatting
  Logged from file guestfs.py, line 139

  To fix this issue, we just need to add a %s placeholder to the log message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1352768/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279942] [NEW] vm_vdi_cleaner.py fails to launch with NOTIFIER is not None AssertionError

2014-02-13 Thread Jesse Keating
Public bug reported:

I believe this has something to do with oslo messaging, in that it's
attempting to set up a NOTIFIER and TRANSPORT for nova messaging, and
failing.

vm_vdi_cleaner.py -v --command=list-vdis --config-file /etc/nova/nova.conf
Traceback (most recent call last):
  File "/tmp/vm_vdi_cleaner.py", line 328, in 
main()
  File "/tmp/vm_vdi_cleaner.py", line 302, in main
xenapi = xenapi_driver.XenAPIDriver(virtapi.VirtAPI())
  File 
"/opt/rackstack/current/nova/lib/python2.6/site-packages/nova/virt/xenapi/driver.py",
 line 159, in __init__
self._host = host.Host(self._session, self.virtapi)
  File 
"/opt/rackstack/current/nova/lib/python2.6/site-packages/nova/virt/xenapi/host.py",
 line 43, in __init__
self._conductor_api = conductor.API()
  File 
"/opt/rackstack/current/nova/lib/python2.6/site-packages/nova/conductor/__init__.py",
 line 26, in API
return api(*args, **kwargs)
  File 
"/opt/rackstack/current/nova/lib/python2.6/site-packages/nova/conductor/api.py",
 line 58, in __init__
self._manager = utils.ExceptionHelper(manager.ConductorManager())
  File 
"/opt/rackstack/current/nova/lib/python2.6/site-packages/nova/conductor/manager.py",
 line 83, in __init__
*args, **kwargs)
  File 
"/opt/rackstack/current/nova/lib/python2.6/site-packages/nova/manager.py", line 
75, in __init__
self.notifier = rpc.get_notifier(self.service_name, self.host)
  File "/opt/rackstack/current/nova/lib/python2.6/site-packages/nova/rpc.py", 
line 138, in get_notifier
assert NOTIFIER is not None
AssertionError

This is with master as of bf0c24c070219a38af690ae1412a91703da78a86
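
A possible workaround sketch (an assumption, not a verified fix):
initialise the messaging globals that nova.rpc.get_notifier() asserts on
before the Xen driver is constructed, the way nova services do at startup:

import sys

from oslo.config import cfg

from nova import config
from nova import rpc

CONF = cfg.CONF

# Assumed workaround: parse nova.conf and set up rpc.TRANSPORT/NOTIFIER
# before xenapi_driver.XenAPIDriver() is instantiated.
config.parse_args(sys.argv)
rpc.init(CONF)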

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279942

Title:
  vm_vdi_cleaner.py fails to launch with NOTIFIER is not None
  AssertionError

Status in OpenStack Compute (Nova):
  New

Bug description:
  I believe this has something to do with oslo messaging, in that it's
  attempting to set up a NOTIFIER and TRANSPORT for nova messaging, and
  failing.

  vm_vdi_cleaner.py -v --command=list-vdis --config-file /etc/nova/nova.conf
  Traceback (most recent call last):
File "/tmp/vm_vdi_cleaner.py", line 328, in 
  main()
File "/tmp/vm_vdi_cleaner.py", line 302, in main
  xenapi = xenapi_driver.XenAPIDriver(virtapi.VirtAPI())
File 
"/opt/rackstack/current/nova/lib/python2.6/site-packages/nova/virt/xenapi/driver.py",
 line 159, in __init__
  self._host = host.Host(self._session, self.virtapi)
File 
"/opt/rackstack/current/nova/lib/python2.6/site-packages/nova/virt/xenapi/host.py",
 line 43, in __init__
  self._conductor_api = conductor.API()
File 
"/opt/rackstack/current/nova/lib/python2.6/site-packages/nova/conductor/__init__.py",
 line 26, in API
  return api(*args, **kwargs)
File 
"/opt/rackstack/current/nova/lib/python2.6/site-packages/nova/conductor/api.py",
 line 58, in __init__
  self._manager = utils.ExceptionHelper(manager.ConductorManager())
File 
"/opt/rackstack/current/nova/lib/python2.6/site-packages/nova/conductor/manager.py",
 line 83, in __init__
  *args, **kwargs)
File 
"/opt/rackstack/current/nova/lib/python2.6/site-packages/nova/manager.py", line 
75, in __init__
  self.notifier = rpc.get_notifier(self.service_name, self.host)
File "/opt/rackstack/current/nova/lib/python2.6/site-packages/nova/rpc.py", 
line 138, in get_notifier
  assert NOTIFIER is not None
  AssertionError

  This is with master as of bf0c24c070219a38af690ae1412a91703da78a86

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279942/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276639] [NEW] block live migration does not work when a volume is attached

2014-02-05 Thread Jesse Pretorius
Public bug reported:

Environment:
 - Two compute nodes, running Ubuntu 12.04 LTS
 - KVM Hypervisor
 - Ceph (dumpling) back-end for Cinder
 - Grizzly-level Openstack

Steps to reproduce:
 1) Create instance and volume
 2) Attach volume to instance
 3) Attempt a block migration between compute nodes - eg: nova live-migration 
--block-migrate 9b85b983-dced-4574-b14c-c72e4d92982a

Packages:
ii  ceph 0.67.5-1precise
ii  ceph-common  0.67.5-1precise
ii  ceph-fs-common   0.67.5-1precise
ii  ceph-fuse0.67.5-1precise
ii  ceph-mds 0.67.5-1precise
ii  curl 7.29.0-1precise.ceph
ii  kvm  1:84+dfsg-0ubuntu16+1.0+noroms+0ubuntu14.13
ii  kvm-ipxe 1.0.0+git-3.55f6c88-0ubuntu1
ii  libcephfs1   0.67.5-1precise
ii  libcurl3 7.29.0-1precise.ceph
ii  libcurl3-gnutls  7.29.0-1precise.ceph
ii  libleveldb1  1.12.0-1precise.ceph
ii  nova-common  1:2013.1.4-0ubuntu1~cloud0
ii  nova-compute 1:2013.1.4-0ubuntu1~cloud0
ii  nova-compute-kvm 1:2013.1.4-0ubuntu1~cloud0
ii  python-ceph  0.67.5-1precise
ii  python-cinderclient  1:1.0.3-0ubuntu1~cloud0
ii  python-nova  1:2013.1.4-0ubuntu1~cloud0
ii  python-novaclient1:2.13.0-0ubuntu1~cloud0
ii  qemu-common  1.0+noroms-0ubuntu14.13
ii  qemu-kvm 1.0+noroms-0ubuntu14.13
ii  qemu-utils   1.0+noroms-0ubuntu14.13
ii  libvirt-bin  1.0.2-0ubuntu11.13.04.5~cloud1
ii  libvirt0 1.0.2-0ubuntu11.13.04.5~cloud1
ii  python-libvirt   1.0.2-0ubuntu11.13.04.5~cloud1

/var/log/nova/nova-compute on source:

2014-02-05 16:36:46.014 998 INFO nova.compute.manager [-] Lifecycle event 2 on 
VM 9b85b983-dced-4574-b14c-c72e4d92982a
2014-02-05 16:36:46.233 998 INFO nova.compute.manager [-] [instance: 
9b85b983-dced-4574-b14c-c72e4d92982a] During sync_power_state the instance has 
a pending task. Skip.
2014-02-05 16:36:46.234 998 INFO nova.compute.manager [-] Lifecycle event 2 on 
VM 9b85b983-dced-4574-b14c-c72e4d92982a
2014-02-05 16:36:46.468 998 INFO nova.compute.manager [-] [instance: 
9b85b983-dced-4574-b14c-c72e4d92982a] During sync_power_state the instance has 
a pending task. Skip.
2014-02-05 16:41:09.029 998 INFO nova.compute.manager [-] Lifecycle event 1 on 
VM 9b85b983-dced-4574-b14c-c72e4d92982a
2014-02-05 16:41:09.265 998 INFO nova.compute.manager [-] [instance: 
9b85b983-dced-4574-b14c-c72e4d92982a] During sync_power_state the instance has 
a pending task. Skip.
2014-02-05 16:41:09.640 998 ERROR nova.virt.libvirt.driver [-] [instance: 
9b85b983-dced-4574-b14c-c72e4d92982a] Live Migration failure: Unable to read 
from monitor: Connection reset by peer
2014-02-05 16:41:12.165 998 WARNING nova.compute.manager [-] [instance: 
9b85b983-dced-4574-b14c-c72e4d92982a] Instance shutdown by itself. Calling the 
stop API.
2014-02-05 16:41:12.398 998 INFO nova.virt.libvirt.driver [-] [instance: 
9b85b983-dced-4574-b14c-c72e4d92982a] Instance destroyed successfully.

/var/log/libvirt/libvirtd.log on source:

2014-02-05 14:41:07.607+: 3437: error : qemuMonitorIORead:502 : Unable to 
read from monitor: Connection reset by peer
2014-02-05 14:41:09.633+: 3441: error : 
virNetClientProgramDispatchError:175 : An error occurred, but the cause is 
unknown
2014-02-05 14:41:09.634+: 3441: error : 
qemuDomainObjEnterMonitorInternal:997 : operation failed: domain is no longer 
running
2014-02-05 14:41:09.634+: 3441: warning : doPeer2PeerMigrate3:2872 : Guest 
instance-0315 probably left in 'paused' state on source

/var/log/nova/nova-compute.log on target:

2014-02-05 16:36:38.841 INFO nova.virt.libvirt.driver 
[req-0f0eaabf-9e29-4d45-88c9-20194be51d49 aaf3e92b69e04958b43348677ab7b38b 
1859d80f51ff4180b591f7fe2668fd68] Instance launched has CPU info:
{"vendor": "Intel", "model": "SandyBridge", "arch": "x86_64", "features": 
["pdpe1gb", "osxsave", "dca", "pcid", "pdcm", "xtpr", "tm2", "est", "smx", 
"vmx", "ds_cpl", "monitor", "dtes64", "pbe", "tm", "ht", "ss", "acpi", "ds", 
"vme"], "topology": {"cores": 6, "threads": 2, "sockets": 1}}
2014-02-05 16:36:46.008 28458 INFO nova.compute.manager [-] Lifecycle event 0 
on VM 9b85b983-dced-4574-b14c-c72e4d92982a
2014-02-05 16:36:46.244 28458 INFO nova.compute.manager [-] [instance: 
9b85b983-dced-4574-b14c-c72e4d92982a] During the sync_power process the 
instance has moved from host ctpcmp003 to host ctpcmp005
2014-02-05 16:41:09.634 28458 INFO nova.compute.manager [-] Lifecycle event 1 
on VM 9b85b983-dced-4574-b14c-c72e4d92982a
2014-02-05 16:41:09.899 28458 INFO nova.compute.manager [-] [instance: 
9b85b983-dced-4574-b14c-c72e4d92982a] During the sync_power

[Yahoo-eng-team] [Bug 1269795] [NEW] Port tags not reliably implementing

2014-01-16 Thread Jesse Pretorius
Public bug reported:

Environment:
 - Ubuntu 12.04.3 LTS
 - Grizzly 2013.1.3-0ubuntu1~cloud0
 - Quantum with GRE Tunneling
 - OpenVSwitch 1.4.0-1ubuntu1.5

I'm seeing port tags applied inconsistently to the router internal
interfaces and the DHCP interface when they are created in OVS.

For example, what I should be seeing is something like this:

Port "tap7ef1ee95-52"
tag: 30
Interface "tap7ef1ee95-52"
type: internal
Port "qr-8bfc6675-3a"
tag: 13
Interface "qr-8bfc6675-3a"
type: internal

However, I end up seeing something like this:

Port "tap2b520e87-5e"
Interface "tap2b520e87-5e"
type: internal
Port "qr-ba0036f3-7e"
Interface "qr-ba0036f3-7e"
type: internal

It's not consistently happening - sometimes it actually is done
correctly.

The workaround to repair this is either to manually tag the interfaces,
which can be done if at least one of them was tagged, or to restart
'quantum-plugin-openvswitch-agent', which unfortunately causes a drop in
connectivity for those which were correctly tagged.

Does anyone know under which conditions this issue may occur and whether
there are better workarounds?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1269795

Title:
  Port tags not reliably implementing

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Environment:
   - Ubuntu 12.04.3 LTS
   - Grizzly 2013.1.3-0ubuntu1~cloud0
   - Quantum with GRE Tunneling
   - OpenVSwitch 1.4.0-1ubuntu1.5

  I'm seeing port tags applied inconsistently to the router internal
  interfaces and the DHCP interface when they are created in OVS.

  For example, what I should be seeing is something like this:

  Port "tap7ef1ee95-52"
  tag: 30
  Interface "tap7ef1ee95-52"
  type: internal
  Port "qr-8bfc6675-3a"
  tag: 13
  Interface "qr-8bfc6675-3a"
  type: internal

  However, I end up seeing something like this:

  Port "tap2b520e87-5e"
  Interface "tap2b520e87-5e"
  type: internal
  Port "qr-ba0036f3-7e"
  Interface "qr-ba0036f3-7e"
  type: internal

  It's not consistently happening - sometimes it actually is done
  correctly.

  The workaround to repair this is either to manually tag the
  interfaces, which can be done if at least one of them was tagged, or
  to restart 'quantum-plugin-openvswitch-agent', which unfortunately
  causes a drop in connectivity for those which were correctly tagged.

  Does anyone know under which conditions this issue may occur and
  whether there are better workarounds?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1269795/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260454] [NEW] Add cinder 'extend' volume functionality

2013-12-12 Thread Jesse Pretorius
Public bug reported:

Cinder now has the ability to 'extend' (ie grow/expand/resize up) a
volume. This functionality should be exposed through Horizon.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260454

Title:
  Add cinder 'extend' volume functionality

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Cinder now has the ability to 'extend' (ie grow/expand/resize up) a
  volume. This functionality should be exposed through Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260454/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260281] [NEW] Rendering of dashboard is broken in Internet Explorer

2013-12-12 Thread Jesse Pretorius
Public bug reported:

In Internet Explorer (tested with IE9 and IE10) the rendering of various
dashboard components is broken.

 - Content section is shown below the left hand navigation menu most often, 
unless you have a super-wide screen
 - Network Topology network names do not display inside the vertical network bar
 - The rounded edges do not render
 - The buttons look funny

While I realise that some of these are due to differences in the way
that IE renders CSS we do feel that it's important to ensure that using
IE for Openstack End-Users and Administrators gives a reasonable
experience.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260281

Title:
  Rendering of dashboard is broken in Internet Explorer

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In Internet Explorer (tested with IE9 and IE10) the rendering of
  various dashboard components is broken.

   - Content section is shown below the left hand navigation menu most often, 
unless you have a super-wide screen
   - Network Topology network names do not display inside the vertical network 
bar
   - The rounded edges do not render
   - The buttons look funny

  While I realise that some of these are due to differences in the way
  that IE renders CSS we do feel that it's important to ensure that
  using IE for Openstack End-Users and Administrators gives a reasonable
  experience.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260281/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260274] [NEW] NoVNC Console not showing in Internet Explorer

2013-12-12 Thread Jesse Pretorius
Public bug reported:

When accessing the NoVNC console through Internet Explorer (tested with
IE9 and IE10) the HTML5 Canvas never renders, instead showing 'Canvas
not supported'.

Environment:
 - OS: Ubuntu 12.04 LTS
 - Platform: Openstack Grizzly
 - Packages:
  nova-novncproxy 1:2013.1.3-0ubuntu1~cloud0
  novnc 2012.2~20120906+dfsg-0ubuntu4~cloud0
  python-novnc 2012.2~20120906+dfsg-0ubuntu4~cloud0

According to https://github.com/kanaka/noVNC/wiki/Browser-support NoVNC
should work with IE9 and above.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260274

Title:
  NoVNC Console not showing in Internet Explorer

Status in OpenStack Compute (Nova):
  New

Bug description:
  When accessing the NoVNC console through Internet Explorer (tested
  with IE9 and IE10) the HTML5 Canvas never renders, instead showing
  'Canvas not supported'.

  Environment:
   - OS: Ubuntu 12.04 LTS
   - Platform: Openstack Grizzly
   - Packages:
nova-novncproxy 1:2013.1.3-0ubuntu1~cloud0
novnc 2012.2~20120906+dfsg-0ubuntu4~cloud0
python-novnc 2012.2~20120906+dfsg-0ubuntu4~cloud0

  According to https://github.com/kanaka/noVNC/wiki/Browser-support
  NoVNC should work with IE9 and above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260274/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp