[Yahoo-eng-team] [Bug 1478656] [NEW] Non-numeric filenames in key_repository will make Keystone explode

2015-07-27 Thread Clint Byrum
Public bug reported:

If one creates any files in the key_repository directory, such as an editor
backup, Keystone will explode on startup or at the next key rotation,
because it assumes every filename will pass int(filename).
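
A minimal sketch of a more tolerant key loader, assuming (as described
above) that the repository is a flat directory whose key files are named
with integers; the function name and logger are illustrative, not
Keystone's actual code:

    import logging
    import os

    LOG = logging.getLogger(__name__)

    def list_key_files(key_repository):
        """Map key index -> path, skipping files whose names are not integers."""
        keys = {}
        for name in os.listdir(key_repository):
            try:
                keys[int(name)] = os.path.join(key_repository, name)
            except ValueError:
                # e.g. '0.bak~' left by an editor: warn instead of exploding
                LOG.warning("Ignoring non-integer file in key repository: %s", name)
        return keys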

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1478656

Title:
  Non-numeric filenames in key_repository will make Keystone explode

Status in Keystone:
  New

Bug description:
  If one creates any files in the key_repository directory, such as an
  editor backup, Keystone will explode on startup or at the next key
  rotation, because it assumes every filename will pass int(filename).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1478656/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470950] Re: Version of ironicclient in vivid/UCA not sufficient to interact properly with Nova ironic virt driver

2015-07-02 Thread Clint Byrum
Note that I've submitted a fix to kilo/stable requirements:

https://review.openstack.org/#/c/198072/

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470950

Title:
  Version of ironicclient in vivid/UCA not sufficient to interact
  properly with Nova ironic virt driver

Status in OpenStack Compute (Nova):
  New
Status in python-ironicclient package in Ubuntu:
  Fix Released
Status in python-ironicclient source package in Vivid:
  Triaged

Bug description:
  The nova ironic driver in Kilo was updated to use a new parameter in
  ironicclient, configdrive, which was added in v0.4.0 of
  python-ironicclient.

  Test case

  #. Install kilo nova and kilo ironic
  #. Configure nova to use ironic virt
  #. nova boot instance
  #. Observe failure in nova-compute log showing incorrect configdrive
     parameter being used.
  #. Update ironicclient to >=0.4.1
  #. Restart nova-compute
  #. nova boot again
  #. Instance should boot
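
  A hedged sketch of a guard nova could use (or an operator could run) to
  confirm the installed client is new enough for the configdrive parameter;
  this is illustrative only, not the actual nova code:

      import pkg_resources

      def ironicclient_supports_configdrive(minimum="0.4.0"):
          """Return True if the installed python-ironicclient is >= `minimum`."""
          installed = pkg_resources.get_distribution("python-ironicclient").version
          return (pkg_resources.parse_version(installed)
                  >= pkg_resources.parse_version(minimum))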

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470950/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404354] [NEW] SAWarning using empty IN on assignment.actor_id

2014-12-19 Thread Clint Byrum
Public bug reported:

Dec 18 02:51:28 ci-overcloud-controller0-nvacfowhjcy2 keystone:
/opt/stack/venvs/openstack/local/lib/python2.7/site-
packages/sqlalchemy/sql/default_comparator.py:35: SAWarning: The IN-
predicate on assignment.actor_id was invoked with an empty sequence.
This results in a contradiction, which nonetheless can be expensive to
evaluate.  Consider alternative strategies for improved performance.


This should be fairly straightforward to address.
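
One straightforward way to address it, sketched with SQLAlchemy; the helper
name and the surrounding query-building code are assumptions, only the
empty-sequence guard is the point:

    from sqlalchemy import false

    def filter_by_actor_ids(query, model, actor_ids):
        """Apply an IN filter only when there is something to match on."""
        if not actor_ids:
            # An empty IN () is a contradiction; return an always-false filter
            # (or short-circuit with an empty result) instead of emitting it.
            return query.filter(false())
        return query.filter(model.actor_id.in_(actor_ids))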

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1404354

Title:
  SAWarning using empty IN on assignment.actor_id

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Dec 18 02:51:28 ci-overcloud-controller0-nvacfowhjcy2 keystone:
  /opt/stack/venvs/openstack/local/lib/python2.7/site-
  packages/sqlalchemy/sql/default_comparator.py:35: SAWarning: The IN-
  predicate on assignment.actor_id was invoked with an empty sequence.
  This results in a contradiction, which nonetheless can be expensive to
  evaluate.  Consider alternative strategies for improved performance.

  
  This should be fairly straightforward to address.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1404354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371801] Re: Clouds with high usage get slower because of token table size getting bigger

2014-09-22 Thread Clint Byrum
*** This bug is a duplicate of bug 1257723 ***
https://bugs.launchpad.net/bugs/1257723

David, this is definitely a known issue in Keystone.

Note that one way to work around the extremely poor performance of
token_flush is to use Percona Toolkit's 'pt-archiver' command, as in this
TripleO script:

https://git.openstack.org/cgit/openstack/tripleo-image-elements/tree/elements/keystone/cleanup-keystone-tokens.sh

This deletes 500 rows at a time.
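
The same batching idea, sketched in Python/SQLAlchemy against Keystone's
token table; `TokenModel` and the session handling are placeholders, and
the 500-row batch size mirrors the pt-archiver invocation above:

    import datetime

    def flush_expired_tokens(session, TokenModel, batch_size=500):
        """Delete expired token rows in small batches so locks stay short-lived."""
        now = datetime.datetime.utcnow()
        while True:
            ids = [row.id for row in (session.query(TokenModel.id)
                                             .filter(TokenModel.expires < now)
                                             .limit(batch_size))]
            if not ids:
                break
            (session.query(TokenModel)
                    .filter(TokenModel.id.in_(ids))
                    .delete(synchronize_session=False))
            session.commit()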

The long term vision is to use ephemeral tokens that don't have to live
in a database inside keystone:

https://blueprints.launchpad.net/keystone/+spec/non-persistent-tokens

I'm going to mark this bug as a duplicate of bug #1257723 because it is
just a symptom of the same problem.

** This bug has been marked a duplicate of bug 1257723
   Timed out trying to delete user resolved by heat-engine restart

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1371801

Title:
  Clouds with high usage get slower because of token table size getting
  bigger

Status in Orchestration API (Heat):
  Invalid
Status in OpenStack Identity (Keystone):
  New

Bug description:
  Hi guys,

  Clouds with high usage get slower because of the token table size
  getting bigger and bigger without any automated cleanup of any kind.
  The problem affects keystone but also nova-network (where the security
  group listing will get increasingly slower because there is no
  automated cleanup either).

  This request has been running for 394 seconds and everything is locked
  in the cloud when this happens:

  | 82659 | keystone | 192.168.2.121:56197 | keystone | Query | 394 | Sending data |
  SELECT token.id AS token_id, token.expires AS token_expires,
  token.extra AS token_extra, token.valid AS token_valid,
  token.user_id AS token_user_id, token.trust_id AS token_trust_id
  FROM token
  WHERE token.valid = 1 AND token.expires > '2014-09-19 21:15:25' AND
  token.trust_id = '6297a1b419b04813a77c9d56c93f7eb4' |

  Dave

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1371801/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1158684] Re: Pre-created ports get deleted on VM delete

2014-06-23 Thread Clint Byrum
This affects Heat, but it isn't a bug in Heat. Marking Invalid.

** Changed in: heat
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1158684

Title:
  Pre-created ports get deleted on VM delete

Status in Orchestration API (Heat):
  Invalid
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  1) Pre-create a port using port-create
  2) Boot a VM with nova boot --nic port_id=<pre-created port>
  3) Delete the VM.

  Expected: The VM should boot using the port_id provided at boot time.
  When the VM is deleted, the port corresponding to the pre-created port_id
  should not get deleted, as in a large network many applications and
  security settings may depend on the port's properties.

  Observed behavior:
  There is no way I could prevent the port_id associated with the VM from
  being deleted by nova delete.
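
  For reference, the same workflow with the Python clients, assuming
  `neutron` and `nova` are already-authenticated neutronclient/novaclient
  objects and the IDs are placeholders:

      # 1) pre-create the port
      port = neutron.create_port({'port': {'network_id': network_id}})['port']

      # 2) boot attaching the existing port
      server = nova.servers.create(name='vm1', image=image_id, flavor=flavor_id,
                                   nics=[{'port-id': port['id']}])

      # 3) deleting the server should leave the pre-created port in place
      nova.servers.delete(server)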

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1158684/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251448] Re: BadRequest: Multiple possible networks found, use a Network ID to be more specific.

2014-06-13 Thread Clint Byrum
I don't see this as a heat issue at all. Heat is deleting things and
accepting Neutron's answer that they are in fact deleted. Reassigning to
Neutron.

** Project changed: heat => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251448

Title:
  BadRequest: Multiple possible networks found, use a Network ID to be
  more specific.

Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  Won't Fix

Bug description:
  The gate (neutron-based jobs only) is periodically failing with the
  following error:

  BadRequest: Multiple possible networks found, use a Network ID to be
  more specific. 

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiIHBvc3NpYmxlIG5ldHdvcmtzIGZvdW5kLCB1c2UgYSBOZXR3b3JrIElEIHRvIGJlIG1vcmUgc3BlY2lmaWMuIChIVFRQIDQwMClcIiBBTkQgZmlsZW5hbWU6XCJjb25zb2xlLmh0bWxcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNDMyMDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzg0NDY2ODA0Mjg2LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

  query:

  message: possible networks found, use a Network ID to be more
  specific. (HTTP 400) AND filename:console.html

  Example: http://logs.openstack.org/75/54275/3/check/check-tempest-
  devstack-vm-neutron-pg/61a2974/console.html

  Failure breakdown by job:
   check-tempest-devstack-vm-neutron-pg  34%
   check-tempest-devstack-vm-neutron     24%
   gate-tempest-devstack-vm-neutron      10%
   gate-tempest-devstack-vm-neutron-pg    5%
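
  The usual client-side workaround, assuming an authenticated novaclient
  object `nova` and a known network UUID, is to name the network explicitly
  so nova does not have to guess among multiple tenant networks:

      server = nova.servers.create(name='test-vm', image=image_id, flavor=flavor_id,
                                   nics=[{'net-id': network_id}])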

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1251448/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1314905] [NEW] nova-baremetal loses track of instance-node relationship on REBUILD error

2014-05-01 Thread Clint Byrum
Public bug reported:

If you have a nova-baremetal managed server, and you issue a nova
rebuild, it will reboot and write a new image onto the instance.

However, if that fails, nova-baremetal wipes away the reference to the
instance_uuid, which means you cannot fix the machine out of band.

This is particularly problematic if using the 'preserve_ephemeral'
capability of rebuild, as a failed rebuild may result in a loss of the
ephemeral disk.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1314905

Title:
  nova-baremetal loses track of instance-node relationship on REBUILD
  error

Status in OpenStack Compute (Nova):
  New

Bug description:
  If you have a nova-baremetal managed server, and you issue a nova
  rebuild, it will reboot and write a new image onto the instance.

  However, if that fails, nova-baremetal wipes away the reference to the
  instance_uuid, which means you cannot fix the machine out of band.

  This is particularly problematic if using the 'preserve_ephemeral'
  capability of rebuild, as a failed rebuild may result in a loss of the
  ephemeral disk.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1314905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296202] [NEW] openvswitch is sometimes restarting or times out, and this causes neutron-openvswitch-agent to crash

2014-03-22 Thread Clint Byrum
Public bug reported:

This was observed recently during an Ironic CI run:

2014-03-22 02:38:53.436 7916 ERROR neutron.agent.linux.ovsdb_monitor [-] Error 
received from ovsdb monitor: ovsdb-client: unix:/var/run/openvswitch/db.sock: 
receive failed (End of file)
2014-03-22 02:38:53.690 7916 ERROR neutron.agent.linux.ovs_lib [-] Unable to 
execute ['ovs-vsctl', '--timeout=10', 'list-ports', 'br-int']. Exception: 
Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ovs-vsctl', '--timeout=10', 'list-ports', 'br-int']
Exit code: 1
Stdout: ''
Stderr: 
'2014-03-22T02:38:53Z|1|reconnect|WARN|unix:/var/run/openvswitch/db.sock: 
connection attempt failed (No such file or directory)\novs-vsctl: 
unix:/var/run/openvswitch/db.sock: database connection failed (No such file or 
directory)\n'
2014-03-22 02:38:53.693 7916 ERROR 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Error while processing 
VIF ports
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 1212, in rpc_loop
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent port_info = 
self.scan_ports(ports, updated_ports_copy)
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 815, in scan_ports
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent cur_ports = 
self.int_br.get_vif_port_set()
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/ovs_lib.py,
 line 311, in get_vif_port_set
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent port_names = 
self.get_port_name_list()
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/ovs_lib.py,
 line 267, in get_port_name_list
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent res = 
self.run_vsctl([list-ports, self.br_name], check_error=True)
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/ovs_lib.py,
 line 74, in run_vsctl
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ctxt.reraise = False
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/openstack/common/excutils.py,
 line 82, in __exit__
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent six.reraise(self.type_, 
self.value, self.tb)
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/ovs_lib.py,
 line 67, in run_vsctl
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent return 
utils.execute(full_args, root_helper=self.root_helper)
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py,
 line 76, in execute
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent raise RuntimeError(m)
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent RuntimeError: 
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Command: ['sudo', 
'/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', 
'--timeout=10', 'list-ports', 'br-int']
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Exit code: 1
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Stdout: ''
2014-03-22 02:38:53.693 7916 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Stderr: 
'2014-03-22T02:38:53Z|1|reconnect|WARN|unix:/var/run/openvswitch/db.sock: 
connection attempt failed (No such file or directory)\novs-vsctl: 
unix:/var/run/openvswitch/db.sock: database connection failed (No such file or 
directory)\n'

This should result in a retry/sleep loop, not the death of
neutron-openvswitch-agent.
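
A minimal sketch of that retry/sleep behaviour, wrapping the same
run_vsctl call that appears in the traceback above; the wrapper name and
retry policy are illustrative, not neutron's actual implementation:

    import time

    def run_vsctl_with_retry(bridge, args, retries=5, delay=2):
        """Retry ovs-vsctl while ovsdb is restarting instead of killing the agent."""
        for attempt in range(retries):
            try:
                return bridge.run_vsctl(args, check_error=True)
            except RuntimeError:
                if attempt == retries - 1:
                    raise
                time.sleep(delay)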

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of 

[Yahoo-eng-team] [Bug 1290969] [NEW] Change to require listing all known stores in glance-api.conf breaks swift-backed glance on upgrade

2014-03-11 Thread Clint Byrum
Public bug reported:

This change:

https://review.openstack.org/#q,I82073352641d3eb2ab3d6e9a6b64afc99a30dcc7,n,z

Causes an upgrade failure for deployments whose images are backed by
swift, as the swift store is no longer in the list of defaults.

This change should be reverted, and relying on the default list should
instead log a deprecation warning for one cycle.

** Affects: glance
 Importance: Undecided
 Assignee: Clint Byrum (clint-fewbar)
 Status: In Progress

** Affects: tripleo
 Importance: Critical
 Assignee: Derek Higgins (derekh)
 Status: In Progress

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: glance
 Assignee: (unassigned) => Clint Byrum (clint-fewbar)

** Changed in: tripleo
 Assignee: (unassigned) => Derek Higgins (derekh)

** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
   Importance: Undecided => Critical

** Changed in: glance
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1290969

Title:
  Change to require listing all known stores in glance-api.conf breaks
  swift-backed glance on upgrade

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in tripleo - openstack on openstack:
  In Progress

Bug description:
  This change:

  https://review.openstack.org/#q,I82073352641d3eb2ab3d6e9a6b64afc99a30dcc7,n,z

  Causes an upgrade failure for deployments whose images are backed by
  swift, as the swift store is no longer in the list of defaults.

  This change should be reverted, and relying on the default list should
  instead log a deprecation warning for one cycle.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1290969/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290625] [NEW] keystone.contrib.revoke.backends.sql contains several glaring performance problems

2014-03-10 Thread Clint Byrum
Public bug reported:

* The id column is internal only, and yet it is varchar(64). This should
just be an auto-incremented int.

* There are no indexes on anything.

* The comments claim that only DB2 has trouble with large deletes. This
is false. MySQL will hold gaps open on any indexes, including the
primary key, while deleting. This will effectively serialize access to
giant chunks of the table. If the ID were an auto-inc int, then this is
a non-issue, because all writes will either be deletes or increments of
the integer and thus not fall into the gaps. But if we then add indexes
where they should be, such as on revoked_at, that index will also have
gap locks. Note that this is already acknowledged as a bug in tokens
[ https://bugs.launchpad.net/keystone/+bug/1188378 ], which I believe
was just cargo-culted into this module.
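
A sketch of the kind of migration the index fix implies, in
sqlalchemy-migrate style; the table name revocation_event and the index
name are assumptions based on the backend described above:

    import sqlalchemy as sa

    def upgrade(migrate_engine):
        meta = sa.MetaData(bind=migrate_engine)
        revocation_event = sa.Table('revocation_event', meta, autoload=True)
        # index the column the purge/query path filters on
        sa.Index('ix_revocation_event_revoked_at',
                 revocation_event.c.revoked_at).create(migrate_engine)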

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1290625

Title:
  keystone.contrib.revoke.backends.sql contains several glaring
  performance problems

Status in OpenStack Identity (Keystone):
  New

Bug description:
  * The id column is internal only, and yet it is varchar(64). This should
  just be an auto-incremented int.

  * There are no indexes on anything.

  * The comments claim that only DB2 has trouble with large deletes. This
  is false. MySQL will hold gaps open on any indexes, including the
  primary key, while deleting. This will effectively serialize access to
  giant chunks of the table. If the ID were an auto-inc int, then this is
  a non-issue, because all writes will either be deletes or increments of
  the integer and thus not fall into the gaps. But if we then add indexes
  where they should be, such as on revoked_at, that index will also have
  gap locks. Note that this is already acknowledged as a bug in tokens
  [ https://bugs.launchpad.net/keystone/+bug/1188378 ], which I believe
  was just cargo-culted into this module.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1290625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289036] [NEW] instance will not delete when failing to bind vif's

2014-03-06 Thread Clint Byrum
Public bug reported:

This instance will not go away despite multiple 'nova delete' attempts:

+--------------------------------------+----------------------------------------------------+
| Property                             | Value                                              |
+--------------------------------------+----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                             |
| OS-EXT-AZ:availability_zone          | nova                                               |
| OS-EXT-SRV-ATTR:host                 | ci-overcloud-novacompute7-mosbehy6ikhz             |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | ci-overcloud-novacompute7-mosbehy6ikhz.novalocal   |
| OS-EXT-SRV-ATTR:instance_name        | instance-3ff9                                      |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-STS:task_state                | deleting                                           |
| OS-EXT-STS:vm_state                  | error                                              |
| OS-SRV-USG:launched_at               | -                                                  |
| OS-SRV-USG:terminated_at             | -                                                  |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| config_drive                         |                                                    |
| created                              | 2014-03-06T20:57:10Z                               |
| fault                                | {message: Unexpected vif_type=binding_failed, code: 500, details: File \/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py\, line 253, in decorated_function |
|                                      | return function(self, context, *args, **kwargs)    |
|                                      | File \/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py\, line 2038, in terminate_instance |

[Yahoo-eng-team] [Bug 1288392] [NEW] instances get stuck in ERROR/deleting when Neutron is unavailable

2014-03-05 Thread Clint Byrum
Public bug reported:

We had a temporary outage of Neutron, and many instances got stuck in
this state. 'nova delete' on them does not work until nova-compute is
forcibly restarted.

+--------------------------------------+----------------------------------------------------+
| Property                             | Value                                              |
+--------------------------------------+----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                             |
| OS-EXT-AZ:availability_zone          | nova                                               |
| OS-EXT-SRV-ATTR:host                 | ci-overcloud-novacompute1-4q2dbhdklrkq             |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | ci-overcloud-novacompute1-4q2dbhdklrkq.novalocal   |
| OS-EXT-SRV-ATTR:instance_name        | instance-3f80                                      |
| OS-EXT-STS:power_state               | 1                                                  |
| OS-EXT-STS:task_state                | deleting                                           |
| OS-EXT-STS:vm_state                  | error                                              |
| OS-SRV-USG:launched_at               | 2014-03-05T03:54:49.00                             |
| OS-SRV-USG:terminated_at             | -                                                  |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| config_drive                         |                                                    |
| created                              | 2014-03-05T03:46:25Z                               |
| default-net network                  | 10.0.58.225                                        |
| fault                                | {message:                                          |

[Yahoo-eng-team] [Bug 1277298] [NEW] Deleting users takes a long time if there are many tokens

2014-02-06 Thread Clint Byrum
Public bug reported:

This is because we still have to filter results of an indexed query that
may return _millions_ of rows:

mysql> EXPLAIN SELECT token.id AS token_id FROM token WHERE token.valid = 1 AND
token.expires > '2014-02-05 01:28:07.725059' AND token.user_id =
'356d68464dc2478992427864dca4ce6a'\G
*** 1. row ***
           id: 1
  select_type: SIMPLE
        table: token
         type: ref
possible_keys: ix_token_expires,ix_token_valid,ix_token_expires_valid
          key: ix_token_valid
      key_len: 1
          ref: const
         rows: 697205
        Extra: Using where
1 row in set (0.01 sec)


Adding an index on user_id makes this quite a bit faster:

mysql> EXPLAIN SELECT token.id AS token_id FROM token WHERE token.valid = 1 AND
token.expires > '2014-02-05 01:28:07.725059' AND token.user_id =
'356d68464dc2478992427864dca4ce6a'\G
*** 1. row ***
           id: 1
  select_type: SIMPLE
        table: token
         type: ref
possible_keys: ix_token_expires,ix_token_valid,ix_token_expires_valid,ix_user_id
          key: ix_user_id
      key_len: 195
          ref: const
         rows: 89
        Extra: Using where
1 row in set (0.00 sec)

Note that memory usage will still go very high if the user being deleted
has a lot of tokens, because the ORM selects all of the rows when all it
needs is the id. So there are really two bugs here. But the SELECT is
slow even if you only select the id.
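
A sketch of the index addition suggested by the second EXPLAIN above, in
sqlalchemy-migrate style; the index name is illustrative:

    import sqlalchemy as sa

    def upgrade(migrate_engine):
        meta = sa.MetaData(bind=migrate_engine)
        token = sa.Table('token', meta, autoload=True)
        sa.Index('ix_token_user_id', token.c.user_id).create(migrate_engine)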

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1277298

Title:
  Deleting users takes a long time if there are many tokens

Status in OpenStack Identity (Keystone):
  New

Bug description:
  This is because we still have to filter results of an indexed query
  that may return _millions_ of rows:

  mysql> EXPLAIN SELECT token.id AS token_id FROM token WHERE token.valid = 1 AND
  token.expires > '2014-02-05 01:28:07.725059' AND token.user_id =
  '356d68464dc2478992427864dca4ce6a'\G
  *** 1. row ***
             id: 1
    select_type: SIMPLE
          table: token
           type: ref
  possible_keys: ix_token_expires,ix_token_valid,ix_token_expires_valid
            key: ix_token_valid
        key_len: 1
            ref: const
           rows: 697205
          Extra: Using where
  1 row in set (0.01 sec)


  Adding an index on user_id makes this quite a bit faster:

  mysql> EXPLAIN SELECT token.id AS token_id FROM token WHERE token.valid = 1 AND
  token.expires > '2014-02-05 01:28:07.725059' AND token.user_id =
  '356d68464dc2478992427864dca4ce6a'\G
  *** 1. row ***
             id: 1
    select_type: SIMPLE
          table: token
           type: ref
  possible_keys: ix_token_expires,ix_token_valid,ix_token_expires_valid,ix_user_id
            key: ix_user_id
        key_len: 195
            ref: const
           rows: 89
          Extra: Using where
  1 row in set (0.00 sec)

  Note that memory usage will still go very high if the user being deleted
  has a lot of tokens, because the ORM selects all of the rows when all it
  needs is the id. So there are really two bugs here. But the SELECT is
  slow even if you only select the id.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1277298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1272600] [NEW] Deleting an instance that is in the REBUILD/rebuild_spawning state results in an undeletable instance

2014-01-24 Thread Clint Byrum
Public bug reported:

To reproduce:

1. boot an instance
2. rebuild instance with a new image id
3. immediately restart nova-compute

-- instance is now stuck in REBUILD/rebuild_spawning state.

4. nova delete instance

-- instance now says it is ACTIVE, but it cannot be deleted.

This traceback appears in logs:

2014-01-25 01:35:22,096.096 27134 ERROR nova.openstack.common.rpc.amqp 
[req-9f5bbb1e-4cab-4f60-9f28-57366b519533 2de96ce5da994575a08447c93bcafd53 
4956c533154c476799c688eda7ed65ab] Exception during message handling
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
Traceback (most recent call last):
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/rpc/amqp.py,
 line 461, in _process_data
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
**args)
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/rpc/dispatcher.py,
 line 172, in dispatch
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
result = getattr(proxyobj, method)(ctxt, **kwargs)
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/exception.py, 
line 90, in wrapped
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
payload)
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py,
 line 68, in __exit__
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/exception.py, 
line 73, in wrapped
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
return f(self, context, *args, **kw)
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 244, in decorated_function
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp pass
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py,
 line 68, in __exit__
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 230, in decorated_function
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
return function(self, context, *args, **kwargs)
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 295, in decorated_function
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 272, in decorated_function
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py,
 line 68, in __exit__
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 259, in decorated_function
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
return function(self, context, *args, **kwargs)
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 2024, in terminate_instance
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
do_terminate_instance(instance, bdms)
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/lockutils.py,
 line 249, in inner
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp 
return f(*args, **kwargs)
2014-01-25 01:35:22,096.096 27134 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py,

[Yahoo-eng-team] [Bug 1266513] Re: Some Python requirements are not hosted on PyPI

2014-01-06 Thread Clint Byrum
** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => Confirmed

** Changed in: tripleo
 Assignee: (unassigned) => Clint Byrum (clint-fewbar)

** Changed in: tripleo
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266513

Title:
  Some Python requirements are not hosted on PyPI

Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Core Infrastructure:
  In Progress
Status in Python client library for Keystone:
  In Progress
Status in OpenStack Object Storage (Swift):
  In Progress
Status in tripleo - openstack on openstack:
  In Progress

Bug description:
  Pip 1.5 (released January 2nd, 2014) will by default refuse to
  download packages which are linked from PyPI but not hosted on
  pypi.python.org. The workaround is to whitelist these package names
  individually with both the --allow-external and --allow-insecure
  options.

  These options are new in pip 1.4, so encoding them will break for
  people trying to use pip 1.3.x or earlier. Those earlier versions of
  pip are not secure anyway since they don't connect via HTTPS with host
  certificate validation, so we should be encouraging people to use 1.4
  and later anyway.

  The --allow-insecure option is transitioning to a clearer --allow-
  unverified option name starting with 1.5, but the new form does not
  work with pip before 1.5 so we should use the old version for now to
  allow people to transition gracefully. The --allow-insecure form won't
  be removed until at least pip 1.7 according to comments in the source
  code.

  Virtualenv 1.11 (released the same day) bundles pip 1.5 by default,
  and so requires these workarounds when using requirements external to
  PyPI. Be aware that 1.11 is broken for projects using
  sitepackages=True in their tox.ini. The fix is
  https://github.com/pypa/virtualenv/commit/a6ca6f4 which is slated to
  appear in 1.11.1 (no ETA available). We've worked around it on our
  test infrastructure with https://git.openstack.org/cgit/openstack-
  infra/config/commit/?id=20cd18a for now, but that is hiding the
  external-packages issue since we're currently running all tests with
  pip 1.4.1 as a result.

  This bug will also be invisible in our test infrastructure for
  projects listed as having the PyPI mirror enforced in
  openstack/requirements (except for jobs which bypass the mirror, such
  as those for requirements changes), since our update jobs will pull in
  and mirror external packages and pip sees the mirror as being PyPI
  itself in that situation.

  We'll use this bug to track necessary whitelist updates to tox.ini and
  test scripts.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1266513/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1266262] Re: creating a large ext3 ephemeral disk filesystem is much slower than ext4

2014-01-05 Thread Clint Byrum
** Also affects: nova
   Importance: Undecided
   Status: New

** Description changed:

  ext3 is more generally available than ext4, but creating an ext3
  filesystem involves a lot more writing than ext4. The only reason to
  retain ext3 is to retain backward compatibility with very old guest OS's
  or OS's which have chosen not to support ext4 such as SLES.
  
  A plan should be made to deprecate ext3 support in Nova at some point.
  For TripleO we are not supporting deploying OpenStack itself on older
  OS's like this, thus we should change this setting in our baremetal
  cloud(s) to be ext4.
+ 
+ In nova, we should officially deprecate ext3 support in Icehouse's
+ release notes, and change the default to ext4 in Juno.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266262

Title:
  creating a large ext3 ephemeral disk filesystem is much slower than
  ext4

Status in OpenStack Compute (Nova):
  New
Status in tripleo - openstack on openstack:
  In Progress

Bug description:
  ext3 is more generally available than ext4, but creating an ext3
  filesystem involves a lot more writing than ext4. The only reason to
  retain ext3 is to retain backward compatibility with very old guest
  OS's or OS's which have chosen not to support ext4 such as SLES.

  A plan should be made to deprecate ext3 support in Nova at some point.
  For TripleO we are not supporting deploying OpenStack itself on older
  OS's like this, thus we should change this setting in our baremetal
  cloud(s) to be ext4.

  In nova, we should officially deprecate ext3 support in Icehouse's
  release notes, and change the default to ext4 in Juno.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1266262/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257723] Re: Timed out trying to delete user resolved by heat-engine restart

2013-12-31 Thread Clint Byrum
This seems _entirely_ keystone's issue. And I agree that not doing many
thousands of CRUD events is the right way to go. Closing the Heat task,
and I suggest keystone mark it as a dupe or attach it to the
revocation-events blueprint.

** Changed in: heat
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1257723

Title:
  Timed out trying to delete user resolved by heat-engine restart

Status in Orchestration API (Heat):
  Invalid
Status in OpenStack Identity (Keystone):
  New

Bug description:
  I left a TripleO overcloud stack running for a couple of days and when
  I went to delete it, it went into DELETE_FAILED

   | notcomputeConfig    | 29 | state changed                          | DELETE_COMPLETE    | 2013-12-04T12:11:25Z |
   | CompletionCondition | 30 | state changed                          | DELETE_COMPLETE    | 2013-12-04T12:11:25Z |
   | notcompute          | 31 | state changed                          | DELETE_IN_PROGRESS | 2013-12-04T12:11:25Z |
   | CompletionHandle    | 32 | state changed                          | DELETE_IN_PROGRESS | 2013-12-04T12:11:32Z |
   | CompletionHandle    | 33 | Error: Timed out trying to delete user | DELETE_FAILED      | 2013-12-04T12:11:42Z |
   | notcompute          | 34 | Deletion aborted                       | DELETE_FAILED      | 2013-12-04T12:11:42Z |

  heat-engine log shows:

  Traceback (most recent call last):
File 
/opt/stack/venvs/heat/lib/python2.7/site-packages/heat/engine/resource.py, 
line 575, in delete
  handle_data = self.handle_delete()
File 
/opt/stack/venvs/heat/lib/python2.7/site-packages/heat/engine/signal_responder.py,
 line 62, in handle_delete
  self.keystone().delete_stack_user(self.resource_id)
File 
/opt/stack/venvs/heat/lib/python2.7/site-packages/heat/common/heat_keystoneclient.py,
 line 302, in delete_stack_user
  raise exception.Error(reason)
  Error: Timed out trying to delete user

  Retrying stack-delete didn't help, until I restarted heat-engine and
  it quickly completed

  I see we cache the keystone client instance with the stack - any
  obvious reason for the client getting foobared in a way that re-
  creating it would fix it?

  versions:
heat - daddc05
keystone - f72f369
keystoneclient - 7abf8d2

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1257723/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263294] Re: ephemeral0 of /dev/sda1 triggers 'did not find entry for sda1 in /sys/block'

2013-12-21 Thread Clint Byrum
This is blocking ephemeral disk usage on hardware, which is critical to
our image-based updates via the rebuild code path.

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1263294

Title:
  ephemeral0 of /dev/sda1 triggers 'did not find entry for sda1 in
  /sys/block'

Status in Init scripts for use on cloud images:
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  This is due to line 227 of ./cloudinit/config/cc_mounts.py::

  short_name = os.path.basename(device)
  sys_path = "/sys/block/%s" % short_name

  if not os.path.exists(sys_path):
      LOG.debug("did not find entry for %s in /sys/block", short_name)
      return None

  The sys path for /dev/sda1 is /sys/block/sda/sda1.
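
  A minimal sketch of a partition-aware lookup, assuming the sysfs layout
  described above (whole disks under /sys/block/<disk>, partitions under
  /sys/block/<disk>/<part> and also visible as /sys/class/block/<name>);
  this is not cloud-init's actual fix:

      import os

      def sysfs_path_for(device):
          """Return a sysfs path for a disk or partition device, or None."""
          short_name = os.path.basename(device)          # e.g. 'sda1'
          for candidate in ("/sys/block/%s" % short_name,
                            "/sys/class/block/%s" % short_name):
              if os.path.exists(candidate):
                  return candidate
          return None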

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1263294/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261253] Re: oslo.messaging 1.2.0a11 is outdated and problematic to install

2013-12-16 Thread Clint Byrum
Adding affects tripleo. The fix there is to manually add d2to1 to the
local pypi-mirror element in tripleo-image-elements. If this is fixed in
glance then the tripleo task can be closed. The CD undercloud has had
d2to1 manually added to the local pypi-mirror for now to unblock image
building.

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => Fix Released

** Changed in: tripleo
   Importance: Undecided => Critical

** Changed in: tripleo
   Status: Fix Released => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1261253

Title:
  oslo.messaging 1.2.0a11 is outdated and problematic to install

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  oslo.messaging needs to be updated to the latest alpha of 1.3.0 (a3 as
  of bug filing) or if this bug is still open an oslo.messaging is
  released, to the released version.

  1.2.0a11 directly depends on d2to1, which is no longer in global
  requirements and thus presents problems for deployers using that as a
  gate for pypi mirrors (notably: TripleO).

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1261253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261253] [NEW] oslo.messaging 1.2.0a11 is outdated and problematic to install

2013-12-15 Thread Clint Byrum
Public bug reported:

oslo.messaging needs to be updated to the latest alpha of 1.3.0 (a3 as
of this bug filing) or, if oslo.messaging is released while this bug is
still open, to the released version.

1.2.0a11 directly depends on d2to1, which is no longer in global
requirements and thus presents problems for deployers using that as a
gate for pypi mirrors (notably: TripleO).

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1261253

Title:
  oslo.messaging 1.2.0a11 is outdated and problematic to install

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  oslo.messaging needs to be updated to the latest alpha of 1.3.0 (a3 as
  of this bug filing) or, if oslo.messaging is released while this bug is
  still open, to the released version.

  1.2.0a11 directly depends on d2to1, which is no longer in global
  requirements and thus presents problems for deployers using that as a
  gate for pypi mirrors (notably: TripleO).

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1261253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260440] [NEW] nova-compute host is added to scheduling pool before Neutron can bind network ports on said host

2013-12-12 Thread Clint Byrum
Public bug reported:

This is a race condition.

Given a cloud with 0 compute nodes available, on a compute node:
* Start up neutron-openvswitch-agent
* Start up nova-compute
* nova boot an instance

Scenario 1:
* neutron-openvswitch-agent registers with Neutron before nova tries to boot 
instance
* port is bound to agent
* instance boots with correct networking

Scenario 2:
* nova schedules instance to host before neutron-openvswitch-agent is 
registered with Neutron
* nova instance fails with vif_type=binding_failed
* instance is in ERROR state

I would expect that Nova would not try to schedule instances on compute
hosts that are not ready.

Please also see this mailing list thread for more info:

http://lists.openstack.org/pipermail/openstack-
dev/2013-December/022084.html
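
One possible shape of the fix, sketched with python-neutronclient and
assuming `neutron` is an authenticated client object; this is not nova's
implementation, just an illustration of "do not join the scheduling pool
until neutron reports a live L2 agent on this host":

    import time

    def wait_for_l2_agent(neutron, hostname, timeout=300, interval=5):
        """Poll neutron until an alive agent is registered for `hostname`."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            agents = neutron.list_agents(host=hostname).get('agents', [])
            if any(agent.get('alive') for agent in agents):
                return True
            time.sleep(interval)
        return False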

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260440

Title:
  nova-compute host is added to scheduling pool before Neutron can bind
  network ports on said host

Status in OpenStack Compute (Nova):
  New

Bug description:
  This is a race condition.

  Given a cloud with 0 compute nodes available, on a compute node:
  * Start up neutron-openvswitch-agent
  * Start up nova-compute
  * nova boot an instance

  Scenario 1:
  * neutron-openvswitch-agent registers with Neutron before nova tries to boot 
instance
  * port is bound to agent
  * instance boots with correct networking

  Scenario 2:
  * nova schedules instance to host before neutron-openvswitch-agent is 
registered with Neutron
  * nova instance fails with vif_type=binding_failed
  * instance is in ERROR state

  I would expect that Nova would not try to schedule instances on
  compute hosts that are not ready.

  Please also see this mailing list thread for more info:

  http://lists.openstack.org/pipermail/openstack-
  dev/2013-December/022084.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260440/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260440] Re: nova-compute host is added to scheduling pool before Neutron can bind network ports on said host

2013-12-12 Thread Clint Byrum
This breaks deployment of new clouds in TripleO sometimes, and will
likely break scaling too. Hence the Critical status.

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1260440

Title:
  nova-compute host is added to scheduling pool before Neutron can bind
  network ports on said host

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Confirmed
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  This is a race condition.

  Given a cloud with 0 compute nodes available, on a compute node:
  * Start up neutron-openvswitch-agent
  * Start up nova-compute
  * nova boot an instance

  Scenario 1:
  * neutron-openvswitch-agent registers with Neutron before nova tries to boot 
instance
  * port is bound to agent
  * instance boots with correct networking

  Scenario 2:
  * nova schedules instance to host before neutron-openvswitch-agent is 
registered with Neutron
  * nova instance fails with vif_type=binding_failed
  * instance is in ERROR state

  I would expect that Nova would not try to schedule instances on
  compute hosts that are not ready.

  Please also see this mailing list thread for more info:

  http://lists.openstack.org/pipermail/openstack-
  dev/2013-December/022084.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1260440/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260134] [NEW] in tests, many tests pass arguments to assertEqual backwards as (observed, expected) instead of (expected, observed)

2013-12-11 Thread Clint Byrum
Public bug reported:

This is a minor bug but it will produce a very confusing error message
if the test ever fails. Examples include but are not limited to
nova.tests.api.openstack.compute.test_servers.
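
A minimal self-contained illustration using a hypothetical assertion; per
the convention this bug describes, the expected value goes first, so
swapping the arguments makes the eventual failure message read backwards:

    import unittest

    class ExampleTest(unittest.TestCase):
        def test_status(self):
            status_int = 202                  # stands in for res.status_int
            # backwards: a failure would report 202 as if it were the observed value
            self.assertEqual(status_int, 202)
            # correct: expected value first, observed value second
            self.assertEqual(202, status_int)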

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260134

Title:
  in tests, many tests pass arguments to assertEqual backwards as
  (observed, expected) instead of (expected, observed)

Status in OpenStack Compute (Nova):
  New

Bug description:
  This is a minor bug but it will produce a very confusing error message
  if the test ever fails. Examples include but are not limited to
  nova.tests.api.openstack.compute.test_servers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260134/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244255] Re: binding_failed because of l2 agent assumed down

2013-12-10 Thread Clint Byrum
I've added nova as I'm having a hard time reading through nova and
figuring out where it signals to the scheduler that it is available to
have things scheduled to it. That should probably be _after_ it has an
l2 agent. _OR_ the scheduler needs to become neutron-aware so that the
scheduler can retry.

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1244255

Title:
  binding_failed because of l2 agent assumed down

Status in OpenStack Neutron (virtual network service):
  Incomplete
Status in neutron havana series:
  Fix Committed
Status in OpenStack Compute (Nova):
  New

Bug description:
  Tempest test ServerAddressesTestXML failed on a change that does not
  involve any code modification.

  https://review.openstack.org/53633

  2013-10-24 14:04:29.188 | 
==
  2013-10-24 14:04:29.189 | FAIL: setUpClass 
(tempest.api.compute.servers.test_server_addresses.ServerAddressesTestXML)
  2013-10-24 14:04:29.189 | setUpClass 
(tempest.api.compute.servers.test_server_addresses.ServerAddressesTestXML)
  2013-10-24 14:04:29.189 | 
--
  2013-10-24 14:04:29.189 | _StringException: Traceback (most recent call last):
  2013-10-24 14:04:29.189 |   File 
tempest/api/compute/servers/test_server_addresses.py, line 31, in setUpClass
  2013-10-24 14:04:29.189 | resp, cls.server = 
cls.create_server(wait_until='ACTIVE')
  2013-10-24 14:04:29.189 |   File tempest/api/compute/base.py, line 143, in 
create_server
  2013-10-24 14:04:29.190 | server['id'], kwargs['wait_until'])
  2013-10-24 14:04:29.190 |   File 
tempest/services/compute/xml/servers_client.py, line 356, in 
wait_for_server_status
  2013-10-24 14:04:29.190 | return waiters.wait_for_server_status(self, 
server_id, status)
  2013-10-24 14:04:29.190 |   File tempest/common/waiters.py, line 71, in 
wait_for_server_status
  2013-10-24 14:04:29.190 | raise 
exceptions.BuildErrorException(server_id=server_id)
  2013-10-24 14:04:29.190 | BuildErrorException: Server 
e21d695e-4f15-4215-bc62-8ea645645a26 failed to build and is in ERROR status


  From n-cpu.log (http://logs.openstack.org/33/53633/1/check/check-
  tempest-devstack-vm-
  neutron/4dd98e5/logs/screen-n-cpu.txt.gz#_2013-10-24_13_58_07_532):

   Error: Unexpected vif_type=binding_failed
   Traceback (most recent call last):
   set_access_ip=set_access_ip)
 File /opt/stack/new/nova/nova/compute/manager.py, line 1413, in _spawn
   LOG.exception(_('Instance failed to spawn'), instance=instance)
 File /opt/stack/new/nova/nova/compute/manager.py, line 1410, in _spawn
   block_device_info)
 File /opt/stack/new/nova/nova/virt/libvirt/driver.py, line 2084, in spawn
   write_to_disk=True)
 File /opt/stack/new/nova/nova/virt/libvirt/driver.py, line 3064, in 
to_xml
   disk_info, rescue, block_device_info)
 File /opt/stack/new/nova/nova/virt/libvirt/driver.py, line 2951, in 
get_guest_config
   inst_type)
 File /opt/stack/new/nova/nova/virt/libvirt/vif.py, line 380, in 
get_config
   _(Unexpected vif_type=%s) % vif_type)
   NovaException: Unexpected vif_type=binding_failed
   TRACE nova.compute.manager [instance: e21d695e-4f15-4215-bc62-8ea645645a26]

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1244255/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257524] [NEW] If neutron spawned dnsmasq dies, neutron-dhcp-agent will be totally unaware

2013-12-03 Thread Clint Byrum
Public bug reported:

I recently had some trouble with dnsmasq causing it to segfault in
certain situations. No doubt, this was a bug in dnsmasq. However, it was
quite troubling that Neutron never noted that dnsmasq had stopped
working. This is because dnsmasq is spawned as a daemon, even though it
is most definitely owned by neutron-dhcp-agent. Also if neutron-dhcp-
agent should die, since dnsmasq is a daemon it will continue to run and
be stale, requiring manual intervention to clean up. However if it is
in the foreground then it will stay in neutron-dhcp-agent's process
group and should also die and if need-be cleaned up by init.

I did some analysis and will not be able to dig into the actual
implementation. However my analysis shows that this would work:

* use utils.create_process instead of execute and remember returned Popen 
object.
* spawn a greenthread to wait() on the process
* if it dies, restart it and log the error code
* pass the -k option so dnsmasq stays in foreground
* kill the process using child signals

Not sure how or if SIGCHLD plays a factor.
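
A rough supervision sketch along the lines of the list above (dnsmasq kept
in the foreground with -k, restarted and logged if it exits); plain
threading is used here for brevity rather than neutron's eventlet/greenthread
machinery, and this is not the agent's actual code:

    import logging
    import subprocess
    import threading

    LOG = logging.getLogger(__name__)

    def supervise_dnsmasq(cmd):
        """`cmd` is the full dnsmasq argv, including '-k' to stay in the foreground."""
        def _watch():
            while True:
                proc = subprocess.Popen(cmd)
                rc = proc.wait()
                LOG.error("dnsmasq exited with code %s; restarting", rc)

        watcher = threading.Thread(target=_watch)
        watcher.daemon = True
        watcher.start()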

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257524

Title:
  If neutron spawned dnsmasq dies, neutron-dhcp-agent will be totally
  unaware

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I recently had some trouble with dnsmasq causing it to segfault in
  certain situations. No doubt, this was a bug in dnsmasq. However, it
  was quite troubling that Neutron never noted that dnsmasq had stopped
  working. This is because dnsmasq is spawned as a daemon, even though
  it is most definitely owned by neutron-dhcp-agent. In addition, if
  neutron-dhcp-agent itself dies, dnsmasq, being a daemon, will keep
  running and go stale, requiring manual intervention to clean up. If
  dnsmasq were run in the foreground instead, it would stay in
  neutron-dhcp-agent's process group and die along with it, or could be
  cleaned up by init if need be.

  I did some analysis, though I will not be able to dig into the actual
  implementation myself. My analysis shows that this would work:

  * use utils.create_process instead of execute and remember returned Popen 
object.
  * spawn a greenthread to wait() on the process
  * if it dies, restart it and log the error code
  * pass the -k option so dnsmasq stays in foreground
  * kill the process using child signals

  Not sure how or if SIGCHLD plays a factor.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257524/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1221620] Re: KeyError: 'service' during schedule of baremetal instance

2013-09-19 Thread Clint Byrum
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1221620

Title:
  KeyError: 'service' during schedule of baremetal instance

Status in OpenStack Compute (Nova):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  Traceback while scheduling both overcloud nodes on tripleo ci

  Last successful run was 05-Sep-2013 01:54:10 (UTC)
  So something changed after this run https://review.openstack.org/#/c/43968/

  although scheduling of a baremetal node seems to work on the seed

  INFO nova.scheduler.filter_scheduler 
[req-c754d309-92fc-461a-81fb-d5bfe97a0676 99fa1214e35a4cc6b99c9332b8ca66fb 
d86556c4d57c4dfc87b30f6c66c40a98] Attempting to build 1 instance(s) uuids: 
[u'f71e3e47-f2a2-4a13-9
  WARNING nova.scheduler.utils [req-c754d309-92fc-461a-81fb-d5bfe97a0676 
99fa1214e35a4cc6b99c9332b8ca66fb d86556c4d57c4dfc87b30f6c66c40a98] Failed to 
scheduler_run_instance: 'service'
  WARNING nova.scheduler.utils [req-c754d309-92fc-461a-81fb-d5bfe97a0676 
99fa1214e35a4cc6b99c9332b8ca66fb d86556c4d57c4dfc87b30f6c66c40a98] [instance: 
f71e3e47-f2a2-4a13-92c0-c3397acaf409] Setting instance to ERR
  ERROR nova.openstack.common.rpc.amqp 
[req-c754d309-92fc-461a-81fb-d5bfe97a0676 99fa1214e35a4cc6b99c9332b8ca66fb 
d86556c4d57c4dfc87b30f6c66c40a98] Exception during message handling
  TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
  TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/openstack/common/rpc/amqp.py,
 line 461, in _process_data
  TRACE nova.openstack.common.rpc.amqp **args)
  TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/openstack/common/rpc/dispatcher.py,
 line 172, in dispatch
  TRACE nova.openstack.common.rpc.amqp result = getattr(proxyobj, 
method)(ctxt, **kwargs)
  TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/scheduler/manager.py, 
line 160, in run_instance
  TRACE nova.openstack.common.rpc.amqp context, ex, request_spec)
  TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/scheduler/manager.py, 
line 147, in run_instance
  TRACE nova.openstack.common.rpc.amqp legacy_bdm_in_spec)
  TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py,
 line 87, in schedule_run_instance
  TRACE nova.openstack.common.rpc.amqp filter_properties, instance_uuids)
  TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py,
 line 326, in _schedule
  TRACE nova.openstack.common.rpc.amqp hosts = 
self.host_manager.get_all_host_states(elevated)
  TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/scheduler/host_manager.py,
 line 432, in get_all_host_states
  TRACE nova.openstack.common.rpc.amqp service = compute['service']
  TRACE nova.openstack.common.rpc.amqp KeyError: 'service'
  TRACE nova.openstack.common.rpc.amqp
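
  The failing statement is the compute['service'] lookup near the bottom of
  the trace. A hedged sketch of a more defensive access pattern follows; it
  is an illustration only, not the actual Nova fix, and the shape of the
  compute-node dict is an assumption:

  import logging

  LOG = logging.getLogger(__name__)


  def iter_host_states(compute_nodes):
      for compute in compute_nodes:
          # .get() avoids the KeyError raised by compute['service'] when no
          # service row was joined onto the compute node record.
          service = compute.get('service')
          if service is None:
              LOG.warning("No service record for compute node %s; skipping",
                          compute.get('hypervisor_hostname'))
              continue
          yield compute, service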

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1221620/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1184445] Re: deploy / delete fragility

2013-09-19 Thread Clint Byrum
** Changed in: nova
   Status: Triaged => Invalid

** Changed in: tripleo
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1184445

Title:
  deploy / delete fragility

Status in OpenStack Compute (Nova):
  Invalid
Status in tripleo - openstack on openstack:
  Invalid

Bug description:
  We have 30 machines we're testing on. Sometimes they fail to deploy, or fail 
to be deleted. Other times they report as failed, but actually did power off / 
did deploy correctly.
  So we have actual fails and false reports of failures.

  Data gathered so far:
  machine  fault  instance uuid  notes
  freecloud33  'active deleting'  f7862b82-268d-4971-b961-a8fe51488b21
  freecloud35  'active deleting'  d3d7d58f-408c-47ff-993a-4b8327f27541
  freecloud32  'build spawning none'  d01059f8-97ab-4f0a-968b-7411b2ab717c
  freecloud12  'active deleting'/hung iLO  28ba32b4-04d1-4aa7-9f7e-283401c5d2a5  had wrong instance (compute2) - perhaps a prior scheduler retry or something?
  freecloud22  'build spawning nostate', stuck in deploy ramdisk  ed634e44-5fa3-43a6-baf5-7d5d15ec7cff
  freecloud18  'build spawning nostate'  64a96d33-bf89-44bd-817f-c62053c1eb91  power reset - went active
  freecloud31  'active deleting running'  bb35ffdf-9fae-4e23-8e46-ec76b89c1ce4  was still running with its IP
  freecloud25  'build spawning nostate'/stuck on 'Boot failed: press a key to retry, or wait for reset...'  6d9a3e09-be55-4f7e-8da1-843e85c687df  power reset - went active
  freecloud38  'active deleting running'  091264f9-830b-4279-92e3-20ff56375973  was active on the IP the instance had
  freecloud36  'active deleting running'  30405362-c307-428a-94c5-dbe6284b8f28  is powered off
  freecloud34  'active deleting running'  3f0cdb8f-70ae-43f7-bb98-83c48f5da317  is powered off
  freecloud37  'active deleting running'  54fb06f0-325c-4d98-9a54-2ab4d3ab9794  is stuck in graphics mode - prob deployramdisk
  freecloud26  'build spawning nostate'  8b80ff43-ba81-44c6-a22a-fffd6034579a  stuck on 'Attempting Boot From Hard Drive (C:)' [after boot-from-nic]. power reset brought it up, but nova still thinks it's build spawning nostate
  freecloud30  'active deleting running'  cd715548-afd7-4342-8c74-b4d5e5984dd6  stuck in deploy ramdisk
  ---nova delete---
  cleared all but | ed633e44-5fa3-43a6-baf5-7d5d15ec7cff | compute-test2.NovaCompute0.NovaCompute | BUILD | deleting | NOSTATE | |
  powered ed633 on, and the delete was processed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1184445/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1205546] Re: babel 1.0 dependency pytz isn't found

2013-07-27 Thread Clint Byrum
** Changed in: diskimage-builder
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1205546

Title:
  babel 1.0 dependency pytz isn't found

Status in Ceilometer:
  Invalid
Status in Cinder:
  In Progress
Status in Openstack disk image builder:
  Fix Released
Status in Orchestration API (Heat):
  In Progress
Status in Ironic (Bare Metal Provisioning):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Core Infrastructure:
  Fix Released
Status in Python client library for Keystone:
  In Progress

Bug description:
  Any project that uses babel is unable to run, because it cannot find
  pytz.

  when trying this locally I had no problems building my environment,
  but this isn't working on openstack-infra

  $ tox -r -epep8 
  $ ./.tox/pep8/bin/pip freeze | grep -i  babel
  Babel==1.0
  $ ./.tox/pep8/bin/pip freeze | grep -i  pytz
  pytz==2013b



  http://logs.openstack.org/92/38892/1/check/gate-nova-
  python27/30215/console.html

  
  2013-07-26 23:59:41.605 | Downloading/unpacking pytz (from Babel>=0.9.6->-r 
/home/jenkins/workspace/gate-nova-python27/requirements.txt (line 22))
  2013-07-26 23:59:41.605 |   Could not find a version that satisfies the 
requirement pytz (from Babel>=0.9.6->-r 
/home/jenkins/workspace/gate-nova-python27/requirements.txt (line 22)) (from 
versions: 2012c, 2012c, 2012c, 2012d, 2012d, 2012d, 2012f, 2012f, 2012f, 2012g, 
2012g, 2012g, 2012h, 2012h, 2012j, 2012j, 2012j, 2013b, 2013b, 2013b)
  2013-07-26 23:59:41.605 | Cleaning up...
  2013-07-26 23:59:41.605 | No distributions matching the version for pytz 
(from Babel>=0.9.6->-r 
/home/jenkins/workspace/gate-nova-python27/requirements.txt (line 22))
  2013-07-26 23:59:41.605 | Storing complete log in /home/jenkins/.pip/pip.log
  2013-07-26 23:59:41.605 |
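
  A hedged illustration of one way to make the dependency explicit (an
  assumption, not necessarily the fix each project above adopted): name pytz
  directly, with a version specifier, in requirements.txt so pip resolves it
  as a first-class requirement rather than only via Babel:

  # requirements.txt (illustrative)
  Babel>=0.9.6
  pytz>=2010h  # named directly instead of only as Babel's dependency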

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1205546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1100920] Re: In Ubuntu 12.10, the legacy 'user' cloud-config option is not handled properly

2013-01-18 Thread Clint Byrum
I'm closing the heat task. Heat is doing the right thing here, I think
cloud-init was just doing the wrong one.

** Changed in: cloud-init
   Status: New => In Progress

** Changed in: heat
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1100920

Title:
  In Ubuntu 12.10, the legacy 'user' cloud-config option is not handled
  properly

Status in Init scripts for use on cloud images:
  In Progress
Status in Orchestration API (Heat):
  Invalid

Bug description:
  When trying to use HEAT with Ubuntu 12.10, the following cloud-config
  is used:

  _BEGIN_
  runcmd:
   - setenforce 0 > /dev/null 2>&1 || true

  user: ec2-user

  cloud_config_modules:
   - locale
   - set_hostname
   - ssh
   - timezone
   - update_etc_hosts
   - update_hostname
   - runcmd

  # Capture all subprocess output into a logfile
  # Useful for troubleshooting cloud-init issues
  output: {all: '| tee -a /var/log/cloud-init-output.log'}
  _END_---
  This results in ec2-user being created, but no SSH keys for it.
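
  For comparison, a sketch of the list-based users syntax that avoids the
  legacy 'user:' key entirely. This is an illustrative assumption, not a
  verified fix for this bug, and the SSH key below is a placeholder:

  #cloud-config
  users:
    - default
    - name: ec2-user
      ssh-authorized-keys:
        - ssh-rsa AAAA... replace-with-a-real-key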

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1100920/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp