[Yahoo-eng-team] [Bug 1762387] [NEW] v1 Image Service APIs in Image Service API Reference

2018-04-09 Thread Sam Betts
Public bug reported:

The v1.x image service APIs were removed during the Rocky development
cycle, and that removal included the API reference docs. The API-ref
docs do not provide stable versions the way the main docs do (e.g.
docs.openstack.org/newton/glance), which means there is no longer a
published version of the v1 API reference, even though Rocky is not yet
released. This is very frustrating when supporting older versions, or
when the reference is needed while resolving the v1 deprecation.

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1762387

Title:
  v1 Image Service APIs in Image Service API Reference

Status in Glance:
  New

Bug description:
  The v1.x image service APIs were removed during the Rocky development
  cycle, and that removal included the API reference docs. The API-ref
  docs do not provide stable versions the way the main docs do (e.g.
  docs.openstack.org/newton/glance), which means there is no longer a
  published version of the v1 API reference, even though Rocky is not
  yet released. This is very frustrating when supporting older versions,
  or when the reference is needed while resolving the v1 deprecation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1762387/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484160] Re: Final decomposition of ML2 Cisco NCS driver

2017-10-05 Thread Sam Betts
** Changed in: networking-cisco
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1484160

Title:
  Final decomposition of ML2 Cisco NCS driver

Status in networking-cisco:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Fully decompose the ncs driver from neutron.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1484160/+subscriptions



[Yahoo-eng-team] [Bug 1506948] Re: Release request of networking-cisco on stable/kilo: 1.0.0 and 1.1.0

2017-09-27 Thread Sam Betts
** Changed in: networking-cisco
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506948

Title:
  Release request of networking-cisco on stable/kilo: 1.0.0 and 1.1.0

Status in networking-cisco:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  
  In preparation for Liberty's semver changes, we want to re-release
  2015.1.0 as 1.0.0 (adding another tag at the same point:
  24edb9fd14584020a8b242a8b351befc5ddafb7e).

  
  New tag info:

  Branch:   stable/kilo
  From Commit:  d9b9a6421d7ff92e920ed21b01ebc7bf49e38bd6
  New Tag:  1.1.0

  This release contains the following changes:

   d9b9a6421d7ff92e920ed21b01ebc7bf49e38bd6  Set default branch for stable/kilo
   f08fb31f20c2d8cc1e6b71784cdfd9604895e16d  ML2 cisco_nexus MD: VLAN not created on switch
   d400749e43e9d5a1fc92683b40159afce81edc95  Create knob to prevent caching ssh connection
   0050ea7f1fb3c22214d7ca49cfe641da86123e2c  Bubble up exceptions when Nexus replay enabled
   54fca8a047810304c69990dce03052e45f21cc23  Quick retry connect to resolve stale ncclient handle
   0c496e1d7425984bf9686b11b5c0c9c8ece23bf3  Update requirements.txt for ML2 Nexus
   393254fcfbe3165e4253801bc3be03e15201c36d  Update requirements.txt
   75fd522b36f7b67dc4152e461f4e5dfa26b4ff31  Remove duplicate entrypoints in setup.cfg
   178f40f2a43192687188661d5fcedf394321e191  Cisco UCSM driver updates to handle duplicate creations
   11f5f29af3e5c4a2ed4b42471e32db49180693df  Clean up of UCS Manager connection handle management.
   ad010718f978763e399f0bf9a0976ba51d3334eb  Fix Cisco CSR1KV script issues
   a8c4bd753ba254b062612c1bcd85000656ebfa44  Replace retry count with replay failure stats
   db1bd250b95abfc267c8a75891ba56105cbeed8c  Add scripts to enable CSR FWaaS service
   f39c6a55613a274d6d0e67409533edefbca6f9a7  Fix N1kv trunk driver: same mac assigned to ports created
   a118483327f7a217dfedfe69da3ef91f9ec6a169  Update netorking-cisco files for due to neutrons port dictionary subnet being replaced with
   b60296644660303fb2341ca6495611621fc486e7  ML2 cisco_nexus MD: Config hangs when replay enabled
   76f7be8758145c61e960ed37e5c93262252f56ff  Move UCSM from extension drivers to mech drivers
   ffabc773febb9a8df7853588ae27a4fe3bc4069b  ML2 cisco_nexus MD: Multiprocess replay issue
   77d4a60fbce7f81275c3cdd9fec3b28a1ca0c57c  ML2 cisco_nexus MD: If configured, close ssh sessions
   825cf6d1239600917f8fa545cc3745517d363838  Part II-Detect switch failure earlier-Port Create
   9b7b57097b2bd34f42ca5adce1e3342a91b4d3f8  Retry count not reset on successful replay
   6afe5d8a6d11db4bc2db29e6a84dc709672b1d69  ML2 Nexus decomposition not complete for Nexus
   ac84fcb861bd594a5a3773c32e06b3e58a729308  Delete fails after switch reset (replay off)
   97720feb4ef4d75fa190a23ac10038d29582b001  Call to get nexus type for Nexus 9372PX fails
   87fb3d6f75f9b0ae574df17b494421126a636199  Detect switch failure earlier during port create
   b38e47a37977634df14846ba38aa38d7239a1adc  Enable the CSR1kv devstack plugin for Kilo
   365cd0f94e579a4c885e6ea9c94f5df241fb2288  Sanitize policy profile table on neutron restart
   4a6a4040a71096b31ca5c283fd0df15fb87aeb38  Cisco Nexus1000V: Retry mechanism for VSM REST calls
   7bcec734cbc658f4cd0792c625aff1a3edc73208  Moved N1kv section from neutron tree to stackforge
   4970a3e279995faf9aff402c96d4b16796a00ef5  N1Kv: Force syncing BDs on neutron restart
   f078a701931986a2755d340d5f4a7cc2ab095bb3  s/stackforge/openstack/g
   151f6f6836491b77e0e788089e0cf9edbe9b7e00  Update .gitreview file for project rename
   876c25fbf7e3aa7f8a44dd88560a030e609648d5  Bump minor version number to enable development
   a5e7f6a3f0f824ec313449273cf9b283cf1fd3b9  Sync notification to VSM & major sync refactoring

  NOTE: this is a kilo release, so I'm not sure if we should follow the
  post-versioning step from:
  http://docs.openstack.org/developer/neutron/devref/sub_project_guidelines.html#sub-project-release-process

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1506948/+subscriptions



[Yahoo-eng-team] [Bug 1525464] Re: Release request of networking-cisco and creation of stable/liberty 2.0.0

2017-09-27 Thread Sam Betts
** Changed in: networking-cisco
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525464

Title:
  Release request of networking-cisco and creation of stable/liberty
  2.0.0

Status in networking-cisco:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  networking-cisco has NOT yet been branched for stable/liberty; this
  needs to be done along with tagging the first release at 2.0.0.

  Branch and tag at hash: cccaa1d12b81e17756dbef216048d76d008df54d

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1525464/+subscriptions



[Yahoo-eng-team] [Bug 1708424] [NEW] When a flavor has resource extra_specs disabling all standard fields, nova tries to make a request to the placements API with no resources

2017-08-03 Thread Sam Betts
Public bug reported:

When a flavor is defined like:

+----------------------------+---------------------------------------------------------------------+
| Field                      | Value                                                               |
+----------------------------+---------------------------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                                               |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                                   |
| access_project_ids         | None                                                                |
| disk                       | 0                                                                   |
| id                         | 2f41a9c0-f194-4925-81d2-b62a3c07435a                                |
| name                       | test2                                                               |
| os-flavor-access:is_public | True                                                                |
| properties                 | resources:DISK_GB='0', resources:MEMORY_MB='0', resources:VCPU='0'  |
| ram                        | 256                                                                 |
| rxtx_factor                | 1.0                                                                 |
| swap                       |                                                                     |
| vcpus                      | 1                                                                   |
+----------------------------+---------------------------------------------------------------------+

Then it will disable all the standard resources, and pop them from the
resources dictionary here:

https://github.com/openstack/nova/blob/57cd38d1fdc4d6b3439a8374e4713baf0b207184/nova/scheduler/utils.py#L141

the resources dictionary is then empty, but nova still tries to use it
to make a request to the placement API. This results in an error
message like:

ERROR nova.scheduler.client.report [None req-030ec9a7-4299-47e0-bd38-b9d40721c210 admin admin] Failed to retrieve allocation candidates from placement API for filters {}. Got 400:
Aug 03 12:40:11 qa-cld-m1-neo4 nova-scheduler[4057]: 400 Bad Request
Aug 03 12:40:11 qa-cld-m1-neo4 nova-scheduler[4057]: The server could not comply with the request since it is either malformed or otherwise incorrect.
Aug 03 12:40:11 qa-cld-m1-neo4 nova-scheduler[4057]: Badly formed resources parameter. Expected resources query string parameter in form: ?resources=VCPU:2,MEMORY_MB:1024. Got: empty string.

Nova should catch the case where the flavor has disabled all resources
and provide a more useful error message indicating what the operator
has done wrong.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1708424

Title:
  When a flavor has resource extra_specs disabling all standard fields,
  nova tries to make a request to the placements API with no resources

Status in OpenStack Compute (nova):
  New

Bug description:
  When a flavor is defined like:

  
  +----------------------------+---------------------------------------------------------------------+
  | Field                      | Value                                                               |
  +----------------------------+---------------------------------------------------------------------+
  | OS-FLV-DISABLED:disabled   | False                                                               |
  | OS-FLV-EXT-DATA:ephemeral  | 0                                                                   |
  | access_project_ids         | None                                                                |
  | disk                       | 0                                                                   |
  | id                         | 2f41a9c0-f194-4925-81d2-b62a3c07435a                                |
  | name                       | test2                                                               |
  | os-flavor-access:is_public | True                                                                |
  | properties                 | resources:DISK_GB='0', resources:MEMORY_MB='0', resources:VCPU='0'  |
  | ram                        | 256                                                                 |
  | rxtx_factor                | 1.0                                                                 |
  | swap                       |

[Yahoo-eng-team] [Bug 1671815] [NEW] Can not use custom network interfaces with stable/newton Ironic

2017-03-10 Thread Sam Betts
Public bug reported:

When using network interfaces in Ironic, nova shouldn't bind the port
that it creates so that the ironic network interface can do it later in
the process. Nova should only bind the port for the flat network
interface for backwards compatibility with stable/mitaka. The logic for
this in the ironic virt driver is incorrect, and will bind the port for
any ironic network interface except the neutron one.
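The corrected condition can be sketched in a few lines. This is a hedged illustration with made-up names, not the exact nova virt driver code; it only captures the rule the report states: nova binds the port solely for the 'flat' network interface (the stable/mitaka compatibility path), and every other ironic network interface binds the port itself later in the process.

```python
# Sketch of the intended rule (names illustrative, not nova's actual code).
def should_nova_bind_port(network_interface):
    # The buggy logic described in this report effectively did:
    #     return network_interface != 'neutron'
    # which wrongly bound ports for custom network interfaces too.
    # The intended behaviour binds only for 'flat':
    return network_interface == 'flat'
```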

** Affects: ironic
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: ironic
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1671815

Title:
  Can not use custom network interfaces with stable/newton Ironic

Status in Ironic:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  When using network interfaces in Ironic, nova shouldn't bind the port
  that it creates so that the ironic network interface can do it later
  in the process. Nova should only bind the port for the flat network
  interface for backwards compatibility with stable/mitaka. The logic
  for this in the ironic virt driver is incorrect, and will bind the
  port for any ironic network interface except the neutron one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1671815/+subscriptions



[Yahoo-eng-team] [Bug 1606229] Re: vif_port_id of ironic port is not updating after neutron port-delete

2017-01-23 Thread Sam Betts
** Also affects: ironic
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606229

Title:
  vif_port_id of ironic port is not updating after neutron port-delete

Status in Ironic:
  New
Status in neutron:
  In Progress

Bug description:
  Steps to reproduce:
  1. Get list of attached ports of instance:
  nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
  
  +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
  | Port State | Port ID                              | Net ID                               | IP addresses                                  | MAC Addr          |
  +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
  | ACTIVE     | 512e6c8e-3829-4bbd-8731-c03e5d7f7639 | ccd0fd43-9cc3-4544-b17c-dfacd8fa4d14 | 10.1.0.6,fdea:fd32:11ff:0:f816:3eff:fed1:8a7c | 52:54:00:85:19:89 |
  +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
  2. Show ironic port. it has vif_port_id in extra with id of neutron port:
  ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
  
  +-----------------------+-----------------------------------------------------------+
  | Property              | Value                                                     |
  +-----------------------+-----------------------------------------------------------+
  | address               | 52:54:00:85:19:89                                         |
  | created_at            | 2016-07-20T13:15:23+00:00                                 |
  | extra                 | {u'vif_port_id': u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'} |
  | local_link_connection |                                                           |
  | node_uuid             | 679fa8a9-066e-4166-ac1e-6e77af83e741                      |
  | pxe_enabled           |                                                           |
  | updated_at            | 2016-07-22T13:31:29+00:00                                 |
  | uuid                  | 735fcaf5-145d-4125-8701-365c58c6b796                      |
  +-----------------------+-----------------------------------------------------------+
  3. Delete neutron port:
  neutron port-delete 512e6c8e-3829-4bbd-8731-c03e5d7f7639
  Deleted port: 512e6c8e-3829-4bbd-8731-c03e5d7f7639
  4. It is gone from the interface list:
  nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
  +------------+---------+--------+--------------+----------+
  | Port State | Port ID | Net ID | IP addresses | MAC Addr |
  +------------+---------+--------+--------------+----------+
  +------------+---------+--------+--------------+----------+
  5. ironic port still has vif_port_id with neutron's port id:
  ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
  
  +-----------------------+-----------------------------------------------------------+
  | Property              | Value                                                     |
  +-----------------------+-----------------------------------------------------------+
  | address               | 52:54:00:85:19:89                                         |
  | created_at            | 2016-07-20T13:15:23+00:00                                 |
  | extra                 | {u'vif_port_id': u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'} |
  | local_link_connection |                                                           |
  | node_uuid             | 679fa8a9-066e-4166-ac1e-6e77af83e741                      |
  | pxe_enabled           |                                                           |
  | updated_at            | 2016-07-22T13:31:29+00:00                                 |
  | uuid                  | 735fcaf5-145d-4125-8701-365c58c6b796                      |
  +-----------------------+-----------------------------------------------------------+

  This can be confusing when a user wants to get the list of unused ports of an ironic node.
  vif_port_id should be removed after the neutron port-delete.
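The cleanup this report asks for amounts to dropping the stale reference when the neutron port disappears. The sketch below is a hedged illustration only: the function name and the plain-dict shapes stand in for ironic's real data model and API.

```python
# Sketch: when a neutron port is deleted, clear the stale vif_port_id
# reference from any ironic port that still carries it in 'extra'.
def clear_stale_vif(ironic_ports, deleted_neutron_port_id):
    """Return the UUIDs of ironic ports whose extra.vif_port_id was cleared."""
    cleared = []
    for port in ironic_ports:
        extra = port.get("extra", {})
        if extra.get("vif_port_id") == deleted_neutron_port_id:
            del extra["vif_port_id"]          # drop the dangling reference
            cleared.append(port["uuid"])
    return cleared
```

After such a cleanup, step 5 above would show an ironic port with an empty extra field, matching what a user listing unused ports expects.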

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1606229/+subscriptions



[Yahoo-eng-team] [Bug 1629726] Re: recompiled pycparser 2.14 breaks Cinder db sync and Nova UTs

2016-10-03 Thread Sam Betts
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** Changed in: networking-cisco
   Importance: Undecided => Critical

** Changed in: networking-cisco
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1629726

Title:
  recompiled pycparser 2.14 breaks Cinder db sync and Nova UTs

Status in Cinder:
  Confirmed
Status in Designate:
  Confirmed
Status in ec2-api:
  Confirmed
Status in gce-api:
  Confirmed
Status in heat:
  Confirmed
Status in Ironic:
  Confirmed
Status in Magnum:
  Confirmed
Status in Manila:
  Confirmed
Status in networking-cisco:
  Confirmed
Status in OpenStack Compute (nova):
  Confirmed
Status in Rally:
  Confirmed
Status in OpenStack DBaaS (Trove):
  Confirmed

Bug description:
  http://logs.openstack.org/76/380876/1/check/gate-grenade-dsvm-ubuntu-xenial/3d5e102/logs/grenade.sh.txt.gz#_2016-10-02_23_32_34_069

  2016-10-02 23:32:34.069 | + lib/cinder:init_cinder:421 : /usr/local/bin/cinder-manage --config-file /etc/cinder/cinder.conf db sync
  2016-10-02 23:32:34.691 | Traceback (most recent call last):
  2016-10-02 23:32:34.691 |   File "/usr/local/bin/cinder-manage", line 6, in <module>
  2016-10-02 23:32:34.691 |     from cinder.cmd.manage import main
  2016-10-02 23:32:34.691 |   File "/opt/stack/old/cinder/cinder/cmd/manage.py", line 77, in <module>
  2016-10-02 23:32:34.691 |     from cinder import db
  2016-10-02 23:32:34.691 |   File "/opt/stack/old/cinder/cinder/db/__init__.py", line 20, in <module>
  2016-10-02 23:32:34.691 |     from cinder.db.api import *  # noqa
  2016-10-02 23:32:34.691 |   File "/opt/stack/old/cinder/cinder/db/api.py", line 43, in <module>
  2016-10-02 23:32:34.691 |     from cinder.api import common
  2016-10-02 23:32:34.691 |   File "/opt/stack/old/cinder/cinder/api/common.py", line 30, in <module>
  2016-10-02 23:32:34.691 |     from cinder import utils
  2016-10-02 23:32:34.691 |   File "/opt/stack/old/cinder/cinder/utils.py", line 40, in <module>
  2016-10-02 23:32:34.691 |     from os_brick import encryptors
  2016-10-02 23:32:34.691 |   File "/usr/local/lib/python2.7/dist-packages/os_brick/encryptors/__init__.py", line 16, in <module>
  2016-10-02 23:32:34.691 |     from os_brick.encryptors import nop
  2016-10-02 23:32:34.691 |   File "/usr/local/lib/python2.7/dist-packages/os_brick/encryptors/nop.py", line 16, in <module>
  2016-10-02 23:32:34.691 |     from os_brick.encryptors import base
  2016-10-02 23:32:34.691 |   File "/usr/local/lib/python2.7/dist-packages/os_brick/encryptors/base.py", line 19, in <module>
  2016-10-02 23:32:34.691 |     from os_brick import executor
  2016-10-02 23:32:34.691 |   File "/usr/local/lib/python2.7/dist-packages/os_brick/executor.py", line 21, in <module>
  2016-10-02 23:32:34.691 |     from os_brick.privileged import rootwrap as priv_rootwrap
  2016-10-02 23:32:34.691 |   File "/usr/local/lib/python2.7/dist-packages/os_brick/privileged/__init__.py", line 13, in <module>
  2016-10-02 23:32:34.691 |     from oslo_privsep import capabilities as c
  2016-10-02 23:32:34.691 |   File "/usr/local/lib/python2.7/dist-packages/oslo_privsep/capabilities.py", line 73, in <module>
  2016-10-02 23:32:34.691 |     ffi.cdef(CDEF)
  2016-10-02 23:32:34.692 |   File "/usr/local/lib/python2.7/dist-packages/cffi/api.py", line 105, in cdef
  2016-10-02 23:32:34.692 |     self._cdef(csource, override=override, packed=packed)
  2016-10-02 23:32:34.692 |   File "/usr/local/lib/python2.7/dist-packages/cffi/api.py", line 119, in _cdef
  2016-10-02 23:32:34.692 |     self._parser.parse(csource, override=override, **options)
  2016-10-02 23:32:34.692 |   File "/usr/local/lib/python2.7/dist-packages/cffi/cparser.py", line 299, in parse
  2016-10-02 23:32:34.692 |     self._internal_parse(csource)
  2016-10-02 23:32:34.692 |   File "/usr/local/lib/python2.7/dist-packages/cffi/cparser.py", line 304, in _internal_parse
  2016-10-02 23:32:34.692 |     ast, macros, csource = self._parse(csource)
  2016-10-02 23:32:34.692 |   File "/usr/local/lib/python2.7/dist-packages/cffi/cparser.py", line 260, in _parse
  2016-10-02 23:32:34.692 |     ast = _get_parser().parse(csource)
  2016-10-02 23:32:34.692 |   File "/usr/local/lib/python2.7/dist-packages/cffi/cparser.py", line 40, in _get_parser
  2016-10-02 23:32:34.692 |     _parser_cache = pycparser.CParser()
  2016-10-02 23:32:34.692 |   File "/usr/local/lib/python2.7/dist-packages/pycparser/c_parser.py", line 87, in __init__
  2016-10-02 23:32:34.692 |     outputdir=taboutputdir)
  2016-10-02 23:32:34.692 |   File "/usr/local/lib/python2.7/dist-packages/pycparser/c_lexer.py", line 66, in build
  2016-10-02 23:32:34.692 |     self.lexer = lex.lex(object=self, **kwargs)
  2016-10-02 23:32:34.692 |   File "/usr/local/lib/python2.7/dist-packages/pycparser/ply/lex.py", line 911, in lex
  2016-10-02 23:32:34.692 |     lexobj.readtab(lextab, ldict)

[Yahoo-eng-team] [Bug 1606231] Re: [RFE] Support nova virt interface attach/detach

2016-09-14 Thread Sam Betts
** Summary changed:

- vif_port_id of ironic port is not updating after neutron port-delete
+ [RFE] Support nova virt interface attach/detach

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1606231

Title:
  [RFE] Support nova virt interface attach/detach

Status in Ironic:
  Confirmed
Status in OpenStack Compute (nova):
  New

Bug description:
  Steps to reproduce:
  1. Get list of attached ports of instance:
  nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
  
  +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
  | Port State | Port ID                              | Net ID                               | IP addresses                                  | MAC Addr          |
  +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
  | ACTIVE     | 512e6c8e-3829-4bbd-8731-c03e5d7f7639 | ccd0fd43-9cc3-4544-b17c-dfacd8fa4d14 | 10.1.0.6,fdea:fd32:11ff:0:f816:3eff:fed1:8a7c | 52:54:00:85:19:89 |
  +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
  2. Show ironic port. it has vif_port_id in extra with id of neutron port:
  ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
  
  +-----------------------+-----------------------------------------------------------+
  | Property              | Value                                                     |
  +-----------------------+-----------------------------------------------------------+
  | address               | 52:54:00:85:19:89                                         |
  | created_at            | 2016-07-20T13:15:23+00:00                                 |
  | extra                 | {u'vif_port_id': u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'} |
  | local_link_connection |                                                           |
  | node_uuid             | 679fa8a9-066e-4166-ac1e-6e77af83e741                      |
  | pxe_enabled           |                                                           |
  | updated_at            | 2016-07-22T13:31:29+00:00                                 |
  | uuid                  | 735fcaf5-145d-4125-8701-365c58c6b796                      |
  +-----------------------+-----------------------------------------------------------+
  3. Delete neutron port:
  neutron port-delete 512e6c8e-3829-4bbd-8731-c03e5d7f7639
  Deleted port: 512e6c8e-3829-4bbd-8731-c03e5d7f7639
  4. It is gone from the interface list:
  nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
  +------------+---------+--------+--------------+----------+
  | Port State | Port ID | Net ID | IP addresses | MAC Addr |
  +------------+---------+--------+--------------+----------+
  +------------+---------+--------+--------------+----------+
  5. ironic port still has vif_port_id with neutron's port id:
  ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
  
  +-----------------------+-----------------------------------------------------------+
  | Property              | Value                                                     |
  +-----------------------+-----------------------------------------------------------+
  | address               | 52:54:00:85:19:89                                         |
  | created_at            | 2016-07-20T13:15:23+00:00                                 |
  | extra                 | {u'vif_port_id': u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'} |
  | local_link_connection |                                                           |
  | node_uuid             | 679fa8a9-066e-4166-ac1e-6e77af83e741                      |
  | pxe_enabled           |                                                           |
  | updated_at            | 2016-07-22T13:31:29+00:00                                 |
  | uuid                  | 735fcaf5-145d-4125-8701-365c58c6b796                      |
  +-----------------------+-----------------------------------------------------------+

  This can be confusing when a user wants to get the list of unused ports of an ironic node.
  vif_port_id should be removed after the neutron port-delete.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1606231/+subscriptions



[Yahoo-eng-team] [Bug 1589575] Re: nova-compute is not starting after adding ironic configurations

2016-09-14 Thread Sam Betts
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1589575

Title:
  nova-compute is not starting after adding ironic configurations

Status in Ironic:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  I am trying to configure Ironic, on CentOS, Mitaka. After writing the
  ironic configuration in nova.conf, nova-compute goes down because it
  is not able to get the node list from ironic.

  tail -f /var/log/ironic/ironic-api.log
  2016-06-06 08:46:37.634 3748 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:37.635 3748 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0009820
  2016-06-06 08:46:39.639 3749 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:39.640 3749 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0009780
  2016-06-06 08:46:41.644 3748 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:41.645 3748 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0010638
  2016-06-06 08:46:43.649 3749 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:43.650 3749 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0009439
  2016-06-06 08:46:45.656 3748 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:45.657 3748 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0014479
  2016-06-06 08:46:47.663 3748 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:47.664 3748 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0011349
  2016-06-06 08:46:49.670 3749 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:49.670 3749 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0010211

  The ironic configuration in nova.conf:
  [DEFAULT]
  compute_driver=ironic.IronicDriver
  firewall_driver=nova.virt.firewall.NoopFirewallDriver
  scheduler_host_manager=nova.scheduler.ironic_host_manager.IronicHostManager
  ram_allocation_ratio=1.0
  reserved_host_memory_mb=0
  scheduler_use_baremetal_filters=True
  scheduler_tracks_instance_changes=False
  [ironic] 
  auth_uri = http://192.168.56.105:5000 
  auth_url = http://192.168.56.105:35357 
  auth_region = RegionOne 
  auth_type = password 
  project_domain_id = default 
  user_domain_id = default 
  project_name = service 
  username = ironic 
  password = cloud123 
  api_endpoint=http://192.168.56.105:6385/v1

  I tried keeping the keystone v2.0 configuration, but that gives
  authentication problems:
  admin_username=ironic
  admin_password=IRONIC_PASSWORD
  admin_url=http://IDENTITY_IP:35357/v2.0
  admin_tenant_name=service
  api_endpoint=http://IRONIC_NODE:6385/v1

  When I try the above values in ironic as per the official document, I
  get the error below:
  oslo_service.service DiscoveryFailure: Could not determine a suitable URL for the plugin

  Keystone endpoints are with v3:
  +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------+
  | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                              |
  +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------+
  | 043d8c1077a14f8f970631e0ce5a95f6 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.56.105:5000/v3    |
  
  Ironic api and conductor services are up and running.

  ironic node-list
  +------+------+---------------+-------------+--------------------+-------------+
  | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
  +------+------+---------------+-------------+--------------------+-------------+
  +------+------+---------------+-------------+--------------------+-------------+

  Please suggest what is missing here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1589575/+subscriptions



[Yahoo-eng-team] [Bug 1412454] Re: node with multi-port have unexpected behavior

2016-09-13 Thread Sam Betts
*** This bug is a duplicate of bug 1544169 ***
https://bugs.launchpad.net/bugs/1544169

** This bug is no longer a duplicate of bug 1405131
   Ports cannot be mapped to networks
** This bug has been marked a duplicate of bug 1544169
   [RFE] Advanced network configuration in ironic

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1412454

Title:
  node with multi-port have unexpected behavior

Status in Ironic:
  Confirmed
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Create a node with multiple ports.
  Deployment creates multiple config files under pxelinux.cfg and updates the
  ports' options as neutron DHCP opts, but neutron generates opts for only one
  port, because neutron creates only one port per instance, and that port has
  only one MAC address.
  This may be caused by Nova.
  So we should either disable multiple ports, or enable neutron to handle
  multiple ports per instance smoothly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1412454/+subscriptions



[Yahoo-eng-team] [Bug 1572472] Re: Ironic Nova host manager Unsupported Operand

2016-04-20 Thread Sam Betts
** Description changed:

  There is a timeout issue that I see when running the Cisco Ironic Third
  party CI:
  
  Traceback (most recent call last):
    File "tempest/test.py", line 113, in wrapper
  return f(self, *func_args, **func_kwargs)
    File 
"/opt/stack/ironic/ironic_tempest_plugin/tests/scenario/test_baremetal_basic_ops.py",
 line 113, in test_baremetal_server_ops
  self.boot_instance()
    File 
"/opt/stack/ironic/ironic_tempest_plugin/tests/scenario/baremetal_manager.py", 
line 150, in boot_instance
  self.wait_node(self.instance['id'])
    File 
"/opt/stack/ironic/ironic_tempest_plugin/tests/scenario/baremetal_manager.py", 
line 116, in wait_node
  raise lib_exc.TimeoutException(msg)
  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: Timed out waiting to get Ironic node by instance id 
0a252cd1-a020-40da-911b-2becd1820306
  
  On further investigation into the issue I see this error in the
  n-sch.log, which I expect is leading to the node not being available in
  nova, so the instance never gets assigned to the Ironic node:
  
  2016-04-20 08:41:02.953 ERROR oslo_messaging.rpc.dispatcher [req-e5ae7418-e311-491e-99b3-86aeaa0b4e3c tempest-BaremetalBasicOps-171486249 tempest-BaremetalBasicOps-1004295680] Exception during message handling: unsupported operand type(s) for *: 'NoneType' and 'int'
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 138, in _dispatch_and_reply
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher     incoming.message))
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 185, in _dispatch
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 127, in _do_dispatch
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 150, in inner
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher     return func(*args, **kwargs)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/manager.py", line 104, in select_destinations
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher     dests = self.driver.select_destinations(ctxt, spec_obj)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 53, in select_destinations
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher     selected_hosts = self._schedule(context, spec_obj)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 104, in _schedule
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher     hosts = self._get_all_host_states(elevated)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 145, in _get_all_host_states
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher     return self.host_manager.get_all_host_states(context)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/host_manager.py", line 574, in get_all_host_states
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher     self._get_instance_info(context, compute))
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/host_manager.py", line 180, in update
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher     return _locked_update(self, compute, service, aggregates, inst_dict)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 271, in inner
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher     return f(*args, **kwargs)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/host_manager.py", line 169, in _locked_update
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher     self._update_from_compute_node(compute)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/ironic_host_manager.py", line 44, in _update_from_compute_node
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher     self.free_disk_mb = 
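
The 'NoneType' * 'int' failure in the trace above comes down to multiplying an unset resource value; a minimal sketch of the pattern and a defensive fix (names are illustrative, not nova's actual code):

```python
# Sketch of the TypeError seen in n-sch.log above: a compute record's
# free disk value can be None before the resource tracker has reported
# in, and multiplying None by an int raises exactly this error.
# Hypothetical helpers, not nova's actual implementation.

def free_disk_mb(free_disk_gb):
    # Fails with TypeError when free_disk_gb is None.
    return free_disk_gb * 1024


def free_disk_mb_safe(free_disk_gb):
    # Treat an unreported value as zero capacity instead of crashing.
    return (free_disk_gb or 0) * 1024


print(free_disk_mb_safe(None))  # 0
print(free_disk_mb_safe(10))    # 10240
```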

[Yahoo-eng-team] [Bug 1506948] Re: Release request of networking-cisco on stable/kilo: 2015.1.1

2015-11-04 Thread Sam Betts
** Changed in: networking-cisco
Milestone: None => 2.0.0

** Changed in: networking-cisco/kilo
   Status: New => Confirmed

** Changed in: networking-cisco/kilo
   Importance: Undecided => Medium

** Changed in: networking-cisco
Milestone: 2.0.0 => None

** No longer affects: networking-cisco/kilo

** Changed in: networking-cisco
Milestone: None => 1.1.0

** Changed in: networking-cisco
 Assignee: (unassigned) => Brian Demers (brian-demers)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506948

Title:
  Release request of networking-cisco on stable/kilo: 2015.1.1

Status in networking-cisco:
  Confirmed
Status in neutron:
  Confirmed

Bug description:
  
  In preparation for Liberty's semver changes, we want to re-release 2015.1.0 
as 1.0.0 (adding another tag at the same point: 
24edb9fd14584020a8b242a8b351befc5ddafb7e)

  
  New tag info:

  Branch:   stable/kilo
  From Commit:  d9b9a6421d7ff92e920ed21b01ebc7bf49e38bd6
  New Tag:  1.1.0

  This release contains the following changes:

   d9b9a6421d7ff92e920ed21b01ebc7bf49e38bd6  Set default branch for stable/kilo
   f08fb31f20c2d8cc1e6b71784cdfd9604895e16d  ML2 cisco_nexus MD: VLAN not created on switch
   d400749e43e9d5a1fc92683b40159afce81edc95  Create knob to prevent caching ssh connection
   0050ea7f1fb3c22214d7ca49cfe641da86123e2c  Bubble up exceptions when Nexus replay enabled
   54fca8a047810304c69990dce03052e45f21cc23  Quick retry connect to resolve stale ncclient handle
   0c496e1d7425984bf9686b11b5c0c9c8ece23bf3  Update requirements.txt for ML2 Nexus
   393254fcfbe3165e4253801bc3be03e15201c36d  Update requirements.txt
   75fd522b36f7b67dc4152e461f4e5dfa26b4ff31  Remove duplicate entrypoints in setup.cfg
   178f40f2a43192687188661d5fcedf394321e191  Cisco UCSM driver updates to handle duplicate creations
   11f5f29af3e5c4a2ed4b42471e32db49180693df  Clean up of UCS Manager connection handle management.
   ad010718f978763e399f0bf9a0976ba51d3334eb  Fix Cisco CSR1KV script issues
   a8c4bd753ba254b062612c1bcd85000656ebfa44  Replace retry count with replay failure stats
   db1bd250b95abfc267c8a75891ba56105cbeed8c  Add scripts to enable CSR FWaaS service
   f39c6a55613a274d6d0e67409533edefbca6f9a7  Fix N1kv trunk driver: same mac assigned to ports created
   a118483327f7a217dfedfe69da3ef91f9ec6a169  Update netorking-cisco files for due to neutrons port dictionary subnet being replaced with
   b60296644660303fb2341ca6495611621fc486e7  ML2 cisco_nexus MD: Config hangs when replay enabled
   76f7be8758145c61e960ed37e5c93262252f56ff  Move UCSM from extension drivers to mech drivers
   ffabc773febb9a8df7853588ae27a4fe3bc4069b  ML2 cisco_nexus MD: Multiprocess replay issue
   77d4a60fbce7f81275c3cdd9fec3b28a1ca0c57c  ML2 cisco_nexus MD: If configured, close ssh sessions
   825cf6d1239600917f8fa545cc3745517d363838  Part II-Detect switch failure earlier-Port Create
   9b7b57097b2bd34f42ca5adce1e3342a91b4d3f8  Retry count not reset on successful replay
   6afe5d8a6d11db4bc2db29e6a84dc709672b1d69  ML2 Nexus decomposition not complete for Nexus
   ac84fcb861bd594a5a3773c32e06b3e58a729308  Delete fails after switch reset (replay off)
   97720feb4ef4d75fa190a23ac10038d29582b001  Call to get nexus type for Nexus 9372PX fails
   87fb3d6f75f9b0ae574df17b494421126a636199  Detect switch failure earlier during port create
   b38e47a37977634df14846ba38aa38d7239a1adc  Enable the CSR1kv devstack plugin for Kilo
   365cd0f94e579a4c885e6ea9c94f5df241fb2288  Sanitize policy profile table on neutron restart
   4a6a4040a71096b31ca5c283fd0df15fb87aeb38  Cisco Nexus1000V: Retry mechanism for VSM REST calls
   7bcec734cbc658f4cd0792c625aff1a3edc73208  Moved N1kv section from neutron tree to stackforge
   4970a3e279995faf9aff402c96d4b16796a00ef5  N1Kv: Force syncing BDs on neutron restart
   f078a701931986a2755d340d5f4a7cc2ab095bb3  s/stackforge/openstack/g
   151f6f6836491b77e0e788089e0cf9edbe9b7e00  Update .gitreview file for project rename
   876c25fbf7e3aa7f8a44dd88560a030e609648d5  Bump minor version number to enable development
   a5e7f6a3f0f824ec313449273cf9b283cf1fd3b9  Sync notification to VSM & major sync refactoring

  NOTE: this is a kilo release, so i'm not sure if we should follow the
  post versioning step in from:
  http://docs.openstack.org/developer/neutron/devref/sub_project_guidelines.html
  #sub-project-release-process
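
The re-tagging being requested can be sketched in a throwaway repository (the repo, commit, and tag values here are stand-ins for the real stable/kilo branch): adding a tag to an existing commit publishes a new version number without creating any new commits.

```shell
# Demonstrate re-tagging an existing commit in a scratch repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "stable/kilo tip"
commit=$(git rev-parse HEAD)
git tag 1.1.0 "$commit"      # the new semver tag
git tag 2015.1.1 "$commit"   # the old-style tag can coexist
git tag --points-at "$commit"
```

Pushing such a tag to the project's Gerrit remote would then publish the release; the branch history itself is unchanged.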

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1506948/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team

[Yahoo-eng-team] [Bug 1506948] Re: Release request of networking-cisco on stable/kilo: 2015.1.1

2015-11-04 Thread Sam Betts
** Also affects: networking-cisco/kilo
   Importance: Undecided
   Status: New

** Changed in: networking-cisco/kilo
Milestone: None => 1.1.0

** Changed in: networking-cisco
Milestone: 1.1.0 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506948

Title:
  Release request of networking-cisco on stable/kilo: 2015.1.1

Status in networking-cisco:
  Confirmed
Status in networking-cisco kilo series:
  New
Status in neutron:
  Confirmed

Bug description:
  
  In preparation for Liberty's semver changes, we want to re-release 2015.1.0 
as 1.0.0 (adding another tag at the same point: 
24edb9fd14584020a8b242a8b351befc5ddafb7e)

  
  New tag info:

  Branch:   stable/kilo
  From Commit:  d9b9a6421d7ff92e920ed21b01ebc7bf49e38bd6
  New Tag:  1.1.0

  This release contains the following changes:

   d9b9a6421d7ff92e920ed21b01ebc7bf49e38bd6  Set default branch for stable/kilo
   f08fb31f20c2d8cc1e6b71784cdfd9604895e16d  ML2 cisco_nexus MD: VLAN not created on switch
   d400749e43e9d5a1fc92683b40159afce81edc95  Create knob to prevent caching ssh connection
   0050ea7f1fb3c22214d7ca49cfe641da86123e2c  Bubble up exceptions when Nexus replay enabled
   54fca8a047810304c69990dce03052e45f21cc23  Quick retry connect to resolve stale ncclient handle
   0c496e1d7425984bf9686b11b5c0c9c8ece23bf3  Update requirements.txt for ML2 Nexus
   393254fcfbe3165e4253801bc3be03e15201c36d  Update requirements.txt
   75fd522b36f7b67dc4152e461f4e5dfa26b4ff31  Remove duplicate entrypoints in setup.cfg
   178f40f2a43192687188661d5fcedf394321e191  Cisco UCSM driver updates to handle duplicate creations
   11f5f29af3e5c4a2ed4b42471e32db49180693df  Clean up of UCS Manager connection handle management.
   ad010718f978763e399f0bf9a0976ba51d3334eb  Fix Cisco CSR1KV script issues
   a8c4bd753ba254b062612c1bcd85000656ebfa44  Replace retry count with replay failure stats
   db1bd250b95abfc267c8a75891ba56105cbeed8c  Add scripts to enable CSR FWaaS service
   f39c6a55613a274d6d0e67409533edefbca6f9a7  Fix N1kv trunk driver: same mac assigned to ports created
   a118483327f7a217dfedfe69da3ef91f9ec6a169  Update netorking-cisco files for due to neutrons port dictionary subnet being replaced with
   b60296644660303fb2341ca6495611621fc486e7  ML2 cisco_nexus MD: Config hangs when replay enabled
   76f7be8758145c61e960ed37e5c93262252f56ff  Move UCSM from extension drivers to mech drivers
   ffabc773febb9a8df7853588ae27a4fe3bc4069b  ML2 cisco_nexus MD: Multiprocess replay issue
   77d4a60fbce7f81275c3cdd9fec3b28a1ca0c57c  ML2 cisco_nexus MD: If configured, close ssh sessions
   825cf6d1239600917f8fa545cc3745517d363838  Part II-Detect switch failure earlier-Port Create
   9b7b57097b2bd34f42ca5adce1e3342a91b4d3f8  Retry count not reset on successful replay
   6afe5d8a6d11db4bc2db29e6a84dc709672b1d69  ML2 Nexus decomposition not complete for Nexus
   ac84fcb861bd594a5a3773c32e06b3e58a729308  Delete fails after switch reset (replay off)
   97720feb4ef4d75fa190a23ac10038d29582b001  Call to get nexus type for Nexus 9372PX fails
   87fb3d6f75f9b0ae574df17b494421126a636199  Detect switch failure earlier during port create
   b38e47a37977634df14846ba38aa38d7239a1adc  Enable the CSR1kv devstack plugin for Kilo
   365cd0f94e579a4c885e6ea9c94f5df241fb2288  Sanitize policy profile table on neutron restart
   4a6a4040a71096b31ca5c283fd0df15fb87aeb38  Cisco Nexus1000V: Retry mechanism for VSM REST calls
   7bcec734cbc658f4cd0792c625aff1a3edc73208  Moved N1kv section from neutron tree to stackforge
   4970a3e279995faf9aff402c96d4b16796a00ef5  N1Kv: Force syncing BDs on neutron restart
   f078a701931986a2755d340d5f4a7cc2ab095bb3  s/stackforge/openstack/g
   151f6f6836491b77e0e788089e0cf9edbe9b7e00  Update .gitreview file for project rename
   876c25fbf7e3aa7f8a44dd88560a030e609648d5  Bump minor version number to enable development
   a5e7f6a3f0f824ec313449273cf9b283cf1fd3b9  Sync notification to VSM & major sync refactoring

  NOTE: this is a kilo release, so i'm not sure if we should follow the
  post versioning step in from:
  http://docs.openstack.org/developer/neutron/devref/sub_project_guidelines.html
  #sub-project-release-process

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1506948/+subscriptions



[Yahoo-eng-team] [Bug 1512666] [NEW] Allow for per-subnet/network dhcp options

2015-11-03 Thread Sam Betts
Public bug reported:

[Existing Problem]
Neutron currently does not allow DHCP options to be set that affect all MAC addresses in a subnet/network; DHCP options can only be set per port, i.e. per MAC address. Achieving this functionality right now requires manually setting up a DHCP server that is not controlled by neutron.

This is currently a factor complicating the setup for the Ironic
Inspector which requires non-mac address specific DHCP options to be set
in order to inspect hardware which we don't currently know the mac
addresses for, and we are running our own dnsmasq instance to provide
the required functionality.

[Solution]
Provide the ability to set extra-dhcp-opt on a subnet or network in addition to ports. Options set on a network will apply to every machine that uses DHCP inside that network; however, if a port has extra-dhcp-opt set, any conflicting options will take priority over the network/subnet-level options for that specific MAC address.
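
The dnsmasq workaround mentioned above typically carries the non-MAC-specific options in its own configuration; a minimal sketch (all addresses and values are illustrative):

```
# dnsmasq.conf fragment -- network-wide options, no per-MAC dhcp-host entries
dhcp-range=192.0.2.100,192.0.2.200,12h
dhcp-option=66,192.0.2.10     # TFTP server for hardware with unknown MACs
dhcp-option=67,pxelinux.0     # boot filename
```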

[Related]
https://blueprints.launchpad.net/neutron/+spec/dhcp-options-per-subnet

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1512666

Title:
   Allow for per-subnet/network dhcp options

Status in neutron:
  New

Bug description:
  [Existing Problem]
  Neutron currently does not allow DHCP options to be set that affect all
  MAC addresses in a subnet/network; DHCP options can only be set per port,
  i.e. per MAC address. Achieving this functionality right now requires
  manually setting up a DHCP server that is not controlled by neutron.

  This is currently a factor complicating the setup for the Ironic
  Inspector which requires non-mac address specific DHCP options to be
  set in order to inspect hardware which we don't currently know the mac
  addresses for, and we are running our own dnsmasq instance to provide
  the required functionality.

  [Solution]
  Provide the ability to set extra-dhcp-opt on a subnet or network in
  addition to ports. Options set on a network will apply to every machine
  that uses DHCP inside that network; however, if a port has extra-dhcp-opt
  set, any conflicting options will take priority over the network/subnet-
  level options for that specific MAC address.

  [Related]
  https://blueprints.launchpad.net/neutron/+spec/dhcp-options-per-subnet

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1512666/+subscriptions



[Yahoo-eng-team] [Bug 1493025] [NEW] Exception with log message overridden after being set in exception.handle

2015-09-07 Thread Sam Betts
Public bug reported:

If a message is passed into exception.handle, user_message is set and
then immediately replaced, so the supplied message never makes it into
the log.
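
The reported pattern can be sketched as follows (hypothetical names, not Horizon's actual exception.handle code): the caller's message is stored and then unconditionally overwritten.

```python
def handle_buggy(exc, message=None):
    # The reported pattern: the caller's message is stored...
    user_message = message
    # ...then unconditionally replaced, so it never reaches the log.
    user_message = str(exc)
    return user_message


def handle_fixed(exc, message=None):
    # Only fall back to the exception text when no message was given.
    user_message = message or str(exc)
    return user_message


print(handle_buggy(ValueError("boom"), message="custom"))  # boom
print(handle_fixed(ValueError("boom"), message="custom"))  # custom
```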

** Affects: horizon
 Importance: Undecided
 Assignee: Sam Betts (sambetts)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Sam Betts (sambetts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493025

Title:
  Exception with log message overridden after being set in
  exception.handle

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If a message is passed into exception.handle, user_message is set and
  then immediately replaced, so the supplied message never makes it into
  the log.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1493025/+subscriptions



[Yahoo-eng-team] [Bug 1475297] [NEW] Unbind segment not working correctly

2015-07-16 Thread Sam Betts
Public bug reported:

A recent commit https://review.openstack.org/#/c/196908/21 changed the
order of some of the calls in update_port and its causing a failure of
segment unbind in the Cisco nexus driver.

** Affects: neutron
 Importance: Undecided
 Assignee: Sam Betts (sambetts)
 Status: In Progress

** Description changed:

- A recent commit changed the order of some of the calls and its causing a
- failure of segment unbind in the Cisco nexus driver.
+ A recent commit https://review.openstack.org/#/c/196908/21 changed the
+ order of some of the calls and its causing a failure of segment unbind
+ in the Cisco nexus driver.

** Description changed:

  A recent commit https://review.openstack.org/#/c/196908/21 changed the
- order of some of the calls and its causing a failure of segment unbind
- in the Cisco nexus driver.
+ order of some of the calls in update_port and its causing a failure of
+ segment unbind in the Cisco nexus driver.

** Changed in: neutron
 Assignee: (unassigned) => Sam Betts (sambetts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475297

Title:
  Unbind segment not working correctly

Status in neutron:
  In Progress

Bug description:
  A recent commit, https://review.openstack.org/#/c/196908/21, changed
  the order of some of the calls in update_port, and it is causing
  segment unbind to fail in the Cisco Nexus driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475297/+subscriptions



[Yahoo-eng-team] [Bug 1450090] [NEW] fip-linklocal-networks file not in .gitignore

2015-04-29 Thread Sam Betts
Public bug reported:

A file called fip-linklocal-networks is being created in the neutron
directory when I run the unit tests; this should be added to the
.gitignore file so that it isn't accidentally committed into the tree.
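
The fix amounts to one line in neutron's .gitignore:

```
# test artifact created by the unit tests
fip-linklocal-networks
```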

** Affects: neutron
 Importance: Undecided
 Assignee: Sam Betts (sambetts)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Sam Betts (sambetts)

** Summary changed:

- Add fip-linklocal-networks file to .gitignore
+ fip-linklocal-networks file not in .gitignore

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450090

Title:
  fip-linklocal-networks file not in .gitignore

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  A file called fip-linklocal-networks is being created in the neutron
  directory when I run the unit tests; this should be added to the
  .gitignore file so that it isn't accidentally committed into the tree.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450090/+subscriptions



[Yahoo-eng-team] [Bug 1443505] [NEW] Can create a volume even with No Volume Type selected

2015-04-13 Thread Sam Betts
Public bug reported:

Through the volume create modal I can create a volume without selecting
a volume type, and it selects lvmdriver-1 as the type even though the
drop-down was left as "No volume type".

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1443505

Title:
  Can create a volume even with No Volume Type selected

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Through the volume create modal I can create a volume without
  selecting a volume type, and it selects lvmdriver-1 as the type even
  though the drop-down was left as "No volume type".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1443505/+subscriptions



[Yahoo-eng-team] [Bug 1442146] [NEW] Horizon Ajax cleanup

2015-04-09 Thread Sam Betts
Public bug reported:

Clean up of horizon Ajax code to allow for better management of many
tasks that have to wait for each other to finish.

** Affects: horizon
 Importance: Undecided
 Assignee: Sam Betts (sambetts)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Sam Betts (sambetts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1442146

Title:
  Horizon Ajax cleanup

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Clean up of horizon Ajax code to allow for better management of many
  tasks that have to wait for each other to finish.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1442146/+subscriptions



[Yahoo-eng-team] [Bug 1441576] Re: LoadBalancer not opening in horizon after network is deleted

2015-04-08 Thread Sam Betts
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441576

Title:
  LoadBalancer not opening in horizon after network is deleted

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Load Balancer fails to open in Horizon, after deleting the Assigned
  Network

  Steps:
  1)Create Network and Subnetwork
  2)Create pool in Load Balancer and assign a subnet
  3)Delete the Network assigned to the Pool
  4)load balancer failed to open from horizon.
 
  But load balancer opens after deleting the loadbalancer-pool from CLI.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1441576/+subscriptions



[Yahoo-eng-team] [Bug 1438133] Re: django-admin.py collectstatic failing to find static/themes/default

2015-04-02 Thread Sam Betts
** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1438133

Title:
  django-admin.py collectstatic failing to find static/themes/default

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in tripleo - openstack on openstack:
  Fix Committed

Bug description:
  At some stage over the last 10 days tripleo has stared to to fail to
  run

  DJANGO_SETTINGS_MODULE=openstack_dashboard.settings django-admin.py collectstatic --noinput

  Exiting with the following error

  os-collect-config: dib-run-parts Fri Mar 27 03:56:44 UTC 2015 Running /usr/libexec/os-refresh-config/post-configure.d/14-horizon
  os-collect-config: WARNING:root:dashboards and default_dashboard in (local_)settings is DEPRECATED now and may be unsupported in some future release. The preferred way to specify the order of dashboards and the default dashboard is the pluggable dashboard mechanism (in /opt/stack/venvs/openstack/local/lib/python2.7/site-packages/openstack_dashboard/enabled, /opt/stack/venvs/openstack/local/lib/python2.7/site-packages/openstack_dashboard/local/enabled).
  os-collect-config: WARNING:py.warnings:DeprecationWarning: The oslo namespace package is deprecated. Please use oslo_config instead.
  os-collect-config: WARNING:py.warnings:DeprecationWarning: The oslo namespace package is deprecated. Please use oslo_serialization instead.
  os-collect-config: Traceback (most recent call last):
  os-collect-config:   File "/opt/stack/venvs/openstack/bin/django-admin.py", line 5, in <module>
  os-collect-config:     management.execute_from_command_line()
  os-collect-config:   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
  os-collect-config:     utility.execute()
  os-collect-config:   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
  os-collect-config:     self.fetch_command(subcommand).run_from_argv(self.argv)
  os-collect-config:   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/core/management/base.py", line 242, in run_from_argv
  os-collect-config:     self.execute(*args, **options.__dict__)
  os-collect-config:   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/core/management/base.py", line 285, in execute
  os-collect-config:     output = self.handle(*args, **options)
  os-collect-config:   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/core/management/base.py", line 415, in handle
  os-collect-config:     return self.handle_noargs(**options)
  os-collect-config:   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 173, in handle_noargs
  os-collect-config:     collected = self.collect()
  os-collect-config:   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 103, in collect
  os-collect-config:     for path, storage in finder.list(self.ignore_patterns):
  os-collect-config:   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/contrib/staticfiles/finders.py", line 106, in list
  os-collect-config:     for path in utils.get_files(storage, ignore_patterns):
  os-collect-config:   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/contrib/staticfiles/utils.py", line 25, in get_files
  os-collect-config:     directories, files = storage.listdir(location)
  os-collect-config:   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/django/core/files/storage.py", line 249, in listdir
  os-collect-config:     for entry in os.listdir(path):
  os-collect-config: OSError: [Errno 2] No such file or directory: '/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/openstack_dashboard/local/static/themes/default'
  os-collect-config: [2015-03-27 03:56:45,548] (os-refresh-config) [ERROR] during post-configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/post-configure.d']' returned non-zero exit status

  
  example here
  
http://logs.openstack.org/05/168205/1/check-tripleo/check-tripleo-ironic-undercloud-precise-nonha/3f384bd/logs/undercloud-undercloud_logs/os-collect-config.txt.gz#_Mar_27_03_56_45
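
The failing call in the traceback is os.listdir() on a directory that does not exist, which raises OSError with errno 2; a small sketch of that failure mode and a guard (illustrative code, not Django's actual implementation):

```python
import errno
import os
import tempfile


def listdir_safe(path):
    # List a directory, treating a missing one as empty instead of
    # letting the OSError (errno 2) abort the whole collect run.
    try:
        return os.listdir(path)
    except OSError as exc:
        if exc.errno == errno.ENOENT:
            return []
        raise


# A freshly made temp dir guarantees the child path does not exist.
missing = os.path.join(tempfile.mkdtemp(), "themes", "default")
print(listdir_safe(missing))  # []
```

In practice the fix that matched this bug was simply ensuring the themes/default directory exists before collectstatic walks it.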

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1438133/+subscriptions



[Yahoo-eng-team] [Bug 1403424] Re: Fixed ip info shown for port even when dhcp is disabled

2015-03-31 Thread Sam Betts
** Changed in: horizon
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1403424

Title:
  Fixed ip info shown for port even when dhcp is disabled

Status in OpenStack Dashboard (Horizon):
  Opinion
Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  As a user it is very confusing (especially in Horizon dashboard)
  having the IP address displayed even when the DHCP is disabled for the
  subnet the port belongs to: the user would expect that the IP address
  shown is actually assigned to the instance, but this is not the case,
  since the DHCP is disabled.

  I asked in the ML about this issue
  (http://lists.openstack.org/pipermail/openstack-dev/2014-December/053069.html)
  and I understood that neutron needs to
  reserve an IP address for a port even in the case it is not assigned,
  anyway I think this info should not be displayed or should be
  differently specified.

  At first I thought about raising the bug against Horizon, but I feel
  the correct place to fix this is in neutron. Before assigning this bug
  to myself I would like to get some feedback from other developers; my
  idea for a possible solution is to add a boolean element "assigned" to
  the fixed_ip dict, with value False when DHCP is disabled in the
  subnet identified by "subnet_id".
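
The proposed shape might surface in the port body roughly like this (the "assigned" field and all values here are hypothetical, not neutron's current API):

```json
{
  "fixed_ips": [
    {
      "subnet_id": "a0304c3a-4f08-4c43-88af-d796509c97d2",
      "ip_address": "10.0.0.5",
      "assigned": false
    }
  ]
}
```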

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1403424/+subscriptions



[Yahoo-eng-team] [Bug 1400819] Re: Create pseudo-folder - a folder with just / as a name gets created but doesn't show up

2015-03-31 Thread Sam Betts
** Also affects: swift
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1400819

Title:
  Create pseudo-folder - a folder with just / as a name gets created
  but doesn't show up

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Object Storage (Swift):
  New

Bug description:
  1. Login to Devstack
  2. Create the container C_1
  3. Create the pseudo folder with the name /

  Observe that success message is displayed but no new folder is
  created.

  Expected: Display error message for wrong folder name format.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1400819/+subscriptions



[Yahoo-eng-team] [Bug 1420744] [NEW] Improve and document the PageTitleMixin

2015-02-11 Thread Sam Betts
Public bug reported:

https://review.openstack.org/#/c/142802 adds new functionality to reduce
duplication in horizon templates, this bug is to track adding extra
documentation and readability to those new features.

** Affects: horizon
 Importance: Undecided
 Assignee: Sam Betts (sambetts)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Sam Betts (sambetts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1420744

Title:
  Improve and document the PageTitleMixin

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  https://review.openstack.org/#/c/142802 adds new functionality to
  reduce duplication in horizon templates; this bug tracks adding extra
  documentation and improving the readability of those new features.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1420744/+subscriptions



[Yahoo-eng-team] [Bug 1419010] [NEW] Thin the ML2 Cisco Nexus Driver

2015-02-06 Thread Sam Betts
Public bug reported:

Cisco vendor code is now moving out of the neutron tree, and this bug
tracks the thinning of the Cisco Nexus ML2 driver.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1419010

Title:
  Thin the ML2 Cisco Nexus Driver

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Cisco vendor code is now moving out of the neutron tree, and this bug
  tracks the thinning of the Cisco Nexus ML2 driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1419010/+subscriptions



[Yahoo-eng-team] [Bug 1418523] [NEW] Runtime Error in tests

2015-02-05 Thread Sam Betts
Public bug reported:

Intermittently I see RuntimeErrors in the test logs. This does not
cause the tests to fail, but it might point to a deeper issue.

..Exception RuntimeError: 'maximum recursion depth exceeded while calling a 
Python object' in <function remove at 0x8a723e4> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a 
Python object' in <_ctypes.DictRemover object at 0x1538e4a0> ignored
.

** Affects: horizon
 Importance: Undecided
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1418523

Title:
  Runtime Error in tests

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  Intermittently I see RuntimeErrors in the test logs. This does not
  cause the tests to fail, but it might point to a deeper issue.

  ..Exception RuntimeError: 'maximum recursion depth exceeded while calling a 
Python object' in <function remove at 0x8a723e4> ignored
  Exception RuntimeError: 'maximum recursion depth exceeded while calling a 
Python object' in <_ctypes.DictRemover object at 0x1538e4a0> ignored
  .

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1418523/+subscriptions



[Yahoo-eng-team] [Bug 1415576] [NEW] lock files in horizon not in gitignore

2015-01-28 Thread Sam Betts
Public bug reported:

Since the new oslo_concurrency dependency was added, *.lock files have
been left in my horizon development repository and are not being
gitignored.

** Affects: horizon
 Importance: Undecided
 Assignee: Sam Betts (sambetts)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Sam Betts (sambetts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1415576

Title:
  lock files in horizon not in gitignore

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Since the new oslo_concurrency dependency was added, *.lock files have
  been left in my horizon development repository and are not being
  gitignored.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1415576/+subscriptions



[Yahoo-eng-team] [Bug 1413749] [NEW] Add page title mixin to all views that can use it

2015-01-22 Thread Sam Betts
Public bug reported:

This patch adds a mixin for page titles and uses it to replace views that set 
the page title in the context data:
https://review.openstack.org/#/c/142802

Many other views have titles defined in their templates, leading to a lot
of duplicated code that could be reduced by using the page title mixin.
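A minimal, self-contained sketch of the mixin idea (class and attribute names here are illustrative, not the exact Horizon implementation from the review above):

```python
# Hedged sketch of the page-title-mixin idea; BaseView stands in for a
# Django-style view, and all names are assumptions for illustration.
class BaseView(object):
    def get_context_data(self):
        return {"object_name": "demo"}


class PageTitleMixin(object):
    """Declare the title once on the view instead of in every template."""
    page_title = ""

    def get_context_data(self):
        context = super(PageTitleMixin, self).get_context_data()
        # str.format against the context lets titles interpolate values,
        # e.g. "Edit {object_name}" becomes "Edit demo" here.
        context.setdefault("page_title", self.page_title.format(**context))
        return context


class DetailView(PageTitleMixin, BaseView):
    page_title = "Edit {object_name}"
```

A template would then render the page_title context value instead of hard-coding its own title block.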

** Affects: horizon
 Importance: Undecided
 Assignee: Sam Betts (sambetts)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Sam Betts (sambetts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1413749

Title:
  Add page title mixin to all views that can use it

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  This patch adds a mixin for page titles and uses it to replace views that set 
the page title in the context data:
  https://review.openstack.org/#/c/142802

  Many other views have titles defined in their templates, leading to a
  lot of duplicated code that could be reduced by using the page title
  mixin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1413749/+subscriptions



[Yahoo-eng-team] [Bug 1407981] [NEW] Several tests run slow because of missing API mocks causing them to wait for a timeout

2015-01-06 Thread Sam Betts
Public bug reported:

When calling run_tests.sh, several tests take far longer to run than
others. On closer inspection of the specific tests, it appears that in
many cases the mocks for some of the API services have been left out,
but no error or timeout message appears in the logs because those calls
are made inside try/except statements.
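The failure mode can be illustrated with a toy, self-contained example (none of these names are Horizon's real code): the try/except swallows the unmocked call, so the only visible symptom is test runtime.

```python
# Toy illustration of the failure mode described above; the function
# names are made up and do not correspond to Horizon's actual code.
import sys
import time
from unittest import mock


def api_call():
    """Stand-in for an unmocked service call waiting on a dead endpoint."""
    time.sleep(30)  # the hidden timeout that makes the test run slow
    return []


def view_under_test():
    # The view tolerates API failures, so a forgotten mock never raises;
    # it only shows up as a mysteriously slow test.
    try:
        return api_call()
    except Exception:
        return []


def test_view_is_fast_when_mocked():
    # Patching the API call keeps the test fast and deterministic.
    with mock.patch.object(sys.modules[__name__], "api_call",
                           return_value=["vol-1"]):
        assert view_under_test() == ["vol-1"]
```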

** Affects: horizon
 Importance: Undecided
 Assignee: Sam Betts (sambetts)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Sam Betts (sambetts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1407981

Title:
  Several tests run slow because of missing API mocks causing them to
  wait for a timeout

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When calling run_tests.sh, several tests take far longer to run than
  others. On closer inspection of the specific tests, it appears that in
  many cases the mocks for some of the API services have been left out,
  but no error or timeout message appears in the logs because those
  calls are made inside try/except statements.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1407981/+subscriptions



[Yahoo-eng-team] [Bug 1334447] Re: Potential race between neutron port update and node pxe boot

2014-12-02 Thread Sam Betts
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1334447

Title:
  Potential race between neutron port update and node pxe boot

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Triaged
Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Splitting off a new bug from Bug #1300589 to track progress on one
  specific issue.

  There is a potential race condition in the pxe driver where it will
  update neutron ports with DHCP data for a soon-to-be-booted ironic
  node.  This data sets, among other things, the tftp next server
  address.  The update is asynchronous on the Neutron side.  After the
  update request is sent, the node is immediately powered on.  If using
  the ssh power driver with fast booting virtual machines, there is a
  potential race where the node attempts pxe boot before the neutron
  agents have processed the updates and reconfigured DHCP servers
  appropriately.  Copying from the original bug, Robert describes the
  specific issues in the driver:

  This is the problem:
 _create_token_file(task, node)
  _update_neutron(task, node)
  manager_utils.node_set_boot_device(task, node, 'pxe', persistent=True)
  manager_utils.node_power_action(task, node, states.REBOOT)

  There's no synchronisation with neutron (neither poll nor call-back)
  to know that (all) the dnsmasq processes serving that port have been
  updated - and a VM that boots quickly may DHCP before the dnsmasq is
  hupped.
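  One conceivable mitigation, sketched below under stated assumptions
  (the polling policy, timeout values, and status handling are
  illustrative, and this is not Ironic's actual fix), is to wait for
  Neutron to report the port ACTIVE before issuing the power-on:

```python
# Sketch of a possible synchronisation step between _update_neutron()
# and node_power_action(). The client interface mirrors
# python-neutronclient's show_port, but the rest is an assumption.
import time


def wait_for_port_active(neutron, port_id, timeout=60, interval=2):
    """Block until Neutron reports the port ACTIVE, or raise."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        port = neutron.show_port(port_id)["port"]
        if port["status"] == "ACTIVE":
            return port
        time.sleep(interval)
    raise RuntimeError("port %s never became ACTIVE; PXE boot would "
                       "race the DHCP update" % port_id)
```

  Polling is only one option; a Neutron-side callback/notification when
  the DHCP agents finish processing would avoid the busy-wait entirely.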

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1334447/+subscriptions



[Yahoo-eng-team] [Bug 1371017] [NEW] Code comments still refer to tenant_id when the code says project_id

2014-09-18 Thread Sam Betts
Public bug reported:

Several comments in the openstack_dashboard code still refer to
tenant_id even after the code has been updated to use project_id; this
can be confusing when reading the code without knowing the history. At
least one example of this is L93 in openstack_dashboard/policy.py.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1371017

Title:
  Code comments still refer to tenant_id when the code says project_id

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Several comments in the openstack_dashboard code still refer to
  tenant_id even after the code has been updated to use project_id; this
  can be confusing when reading the code without knowing the history. At
  least one example of this is L93 in openstack_dashboard/policy.py.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1371017/+subscriptions



[Yahoo-eng-team] [Bug 1223361] Re: unclear error message for quota in horizon (possibly taking wrong one from logs)

2014-09-17 Thread Sam Betts
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1223361

Title:
  unclear error message for quota in horizon (possibly taking wrong one
  from logs)

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Compute (Nova):
  New

Bug description:
  I tried running 100 large instances. 
  horizon shows the flowing error: 

  Error: Quota exceeded for cores,instances,ram: Requested 800, but
  already used 0 of 20 cores (HTTP 413) (Request-ID: req-0c854c89-89d9
  -48ce-96e2-eca07bbbc8f5)

  The first part of the message is fine (Quota exceeded for
  cores,instances,ram), but the second part is a bit unclear. I asked to
  run 100 instances with the largest flavor (my host cannot support
  that), so the 800 value is not very clear. Also, 'but already used 0
  of 20 cores' means none of the cores were used.

  This is the complete log, and I think that we possibly take the wrong
  error:

  [root@opens-vdsb ~(keystone_admin)]# egrep 
0c854c89-89d9-48ce-96e2-eca07bbbc8f5 /var/log/*/*
  /var/log/horizon/horizon.log:Recoverable error: Quota exceeded for 
cores,instances,ram: Requested 800, but already used 0 of 20 cores (HTTP 413) 
(Request-ID: req-0c854c89-89d9-48ce-96e2-eca07bbbc8f5)
  /var/log/nova/api.log:2013-09-10 16:35:09.049 WARNING nova.compute.api 
[req-0c854c89-89d9-48ce-96e2-eca07bbbc8f5 5f38d619dfe744f0a8e08818033fc37e 
89f7caf549e04aec85c3a8737a43a37c] cores,instances,ram quota exceeded for 
89f7caf549e04aec85c3a8737a43a37c, tried to run 100 instances. Can only run 2 
more instances of this type.
  /var/log/nova/api.log:2013-09-10 16:35:09.049 INFO nova.api.openstack.wsgi 
[req-0c854c89-89d9-48ce-96e2-eca07bbbc8f5 5f38d619dfe744f0a8e08818033fc37e 
89f7caf549e04aec85c3a8737a43a37c] HTTP exception thrown: Quota exceeded for 
cores,instances,ram: Requested 800, but already used 0 of 20 cores
  /var/log/nova/api.log:2013-09-10 16:35:09.050 INFO 
nova.osapi_compute.wsgi.server [req-0c854c89-89d9-48ce-96e2-eca07bbbc8f5 
5f38d619dfe744f0a8e08818033fc37e 89f7caf549e04aec85c3a8737a43a37c] 10.35.101.10 
POST /v2/89f7caf549e04aec85c3a8737a43a37c/servers HTTP/1.1 status: 413 len: 
373 time: 0.1629190

  
  I think that we should be taking 'tried to run 100 instances. Can only
  run 2 more instances of this type.'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1223361/+subscriptions



[Yahoo-eng-team] [Bug 1370077] Re: Set default vnic_type in neutron.

2014-09-16 Thread Sam Betts
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Sam Betts (sambetts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370077

Title:
  Set default vnic_type in neutron.

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  New

Bug description:
  https://review.openstack.org/#/c/72334/ introduced binding:vnic_type
  into the neutron port binding extension. The ML2 plugin has been
  updated to support vnic_type, but other plugins may not have been.
  Nova expects every port to have a correct vnic_type; therefore,
  neutron should make sure each port has this attribute set correctly.
  By default, the vnic_type should be VNIC_TYPE_NORMAL.
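  The requested defaulting behaviour could look roughly like this (the
  attribute name and "normal" value follow the portbindings extension;
  the helper function itself is a sketch, not Neutron's actual code):

```python
# Illustrative helper for the behaviour requested above; VNIC_TYPE and
# VNIC_NORMAL match the portbindings extension constants, but the
# function is an assumption, not Neutron's implementation.
VNIC_TYPE = "binding:vnic_type"
VNIC_NORMAL = "normal"


def ensure_vnic_type(port):
    """Guarantee the attribute Nova expects on every port dict."""
    port.setdefault(VNIC_TYPE, VNIC_NORMAL)
    return port
```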

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370077/+subscriptions



[Yahoo-eng-team] [Bug 1233368] Re: Unauthorized command: kill -9

2014-09-08 Thread Sam Betts
Fixed in master by https://review.openstack.org/56785

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1233368

Title:
  Unauthorized command: kill -9

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
   Stderr: '/usr/local/bin/neutron-rootwrap: Unauthorized command: kill
  -9 30601 (no filter matched)\n'

  
  2013-09-30 17:56:22.682 2684 ERROR neutron.agent.dhcp_agent [-] Unable to 
disable dhcp.
  2013-09-30 17:56:22.682 2684 TRACE neutron.agent.dhcp_agent Traceback (most 
recent call last):
  2013-09-30 17:56:22.682 2684 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/new/neutron/neutron/agent/dhcp_agent.py, line 126, in call_driver
  2013-09-30 17:56:22.682 2684 TRACE neutron.agent.dhcp_agent 
getattr(driver, action)(**action_kwargs)
  2013-09-30 17:56:22.682 2684 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/new/neutron/neutron/agent/linux/dhcp.py, line 180, in disable
  2013-09-30 17:56:22.682 2684 TRACE neutron.agent.dhcp_agent 
utils.execute(cmd, self.root_helper)
  2013-09-30 17:56:22.682 2684 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/new/neutron/neutron/agent/linux/utils.py, line 61, in execute
  2013-09-30 17:56:22.682 2684 TRACE neutron.agent.dhcp_agent raise 
RuntimeError(m)
  2013-09-30 17:56:22.682 2684 TRACE neutron.agent.dhcp_agent RuntimeError: 
  2013-09-30 17:56:22.682 2684 TRACE neutron.agent.dhcp_agent Command: ['sudo', 
'/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'kill', '-9', 
'30601']
  2013-09-30 17:56:22.682 2684 TRACE neutron.agent.dhcp_agent Exit code: 99
  2013-09-30 17:56:22.682 2684 TRACE neutron.agent.dhcp_agent Stdout: ''
  2013-09-30 17:56:22.682 2684 TRACE neutron.agent.dhcp_agent Stderr: 
'/usr/local/bin/neutron-rootwrap: Unauthorized command: kill -9 30601 (no 
filter matched)\n'
  2013-09-30 17:56:22.682 2684 TRACE neutron.agent.dhcp_agent 

  http://logs.openstack.org/03/47503/16/check/check-tempest-devstack-vm-
  neutron/0b35b85/logs/screen-q-dhcp.txt.gz?level=TRACE

  The logstash query "Unauthorized command: kill -9" shows this has
  happened about 100 times in the past 24 hours, and it sometimes
  happens on 'successful' tempest runs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1233368/+subscriptions



[Yahoo-eng-team] [Bug 1360171] [NEW] Booting multiple instances image name disappears

2014-08-22 Thread Sam Betts
Public bug reported:

Using standard devstack with neutron, I booted multiple instances (8)
using the instance count field in the Launch Instance form. While the
instances were spawning, their image name field alternated between the
actual image name and "-" multiple times, and once all instances had
finished spawning, some were left with the hyphen and some with the
actual image name.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1360171

Title:
  Booting multiple instances image name disappears

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Using standard devstack with neutron, I booted multiple instances (8)
  using the instance count field in the Launch Instance form. While the
  instances were spawning, their image name field alternated between the
  actual image name and "-" multiple times, and once all instances had
  finished spawning, some were left with the hyphen and some with the
  actual image name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1360171/+subscriptions



[Yahoo-eng-team] [Bug 1358751] Re: neutron lb-healthmonitor-create argument timeout required and present. Neutron still complains. python-neutronclient==2.3.6

2014-08-19 Thread Sam Betts
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358751

Title:
  neutron lb-healthmonitor-create argument timeout required and
  present.  Neutron still complains.  python-neutronclient==2.3.6

Status in OpenStack Neutron (virtual network service):
  New
Status in Python client library for Neutron:
  New

Bug description:
  neutron lb-healthmonitor-create argument timeout required and
  present.  Neutron complains anyway.

  Bug exists in:
  python-neutronclient==2.3.6

  Bug does not exist in:
  python-neutronclient==2.3.5

  Log follows:
  (openstack)OSTML0204844:home$ neutron lb-healthmonitor-create --delay 6 
--max-retries 3 --timeout 5 --type TCP
  usage: neutron lb-healthmonitor-create [-h] [-f {shell,table,value}]
     [-c COLUMN] [--max-width integer]
     [--variable VARIABLE] [--prefix PREFIX]
     [--request-format {json,xml}]
     [--tenant-id TENANT_ID]
     [--admin-state-down]
     [--expected-codes EXPECTED_CODES]
     [--http-method HTTP_METHOD]
     [--url-path URL_PATH] --delay DELAY
     --max-retries MAX_RETRIES --timeout
     TIMEOUT --type {PING,TCP,HTTP,HTTPS}
  neutron lb-healthmonitor-create: error: argument --timeout is required
  (openstack)OSTML0204844:home$ neutron --version
  2.3.6
  (openstackdev)OSTML0204844:home$ pip install python-neutronclient==2.3.5
  Successfully installed python-neutronclient cliff simplejson cmd2 pyparsing
  (openstackdev)OSTML0204844:home$ neutron net-list
  
+--+---++
  | id   | name  | subnets  
  |
  
+--+---++
  | 871aceeb-720a-46b2-97fa-cdea90d0c963 | ext_net   | 
81041d54-7806-4d78-967e-36b47f8177a5 10.30.40.0/24 |
  | af9ed28b-acee-4b95-97d3-45a02322bdbf | ext-net2  | 
2f488c1a-6afd-4b0b-8006-cc1b2c3aaf2b 10.30.80.0/24 |
  | b1cd3520-e086-40ce-b524-c8da64320c4e | load-def-net1-125 | 
04d7fa44-c04e-46d7-811e-933df5477bb4 10.125.1.0/24 |
  
+--+---++
  (openstackdev)OSTML0204844:home$ neutron lb-healthmonitor-create --delay 6 
--max-retries 3 --timeout 5 --type TCP
  Created a new health_monitor:
  ++--+
  | Field  | Value|
  ++--+
  | admin_state_up | True |
  | delay  | 6|
  | id | 46d4808c-7601-4928-8a55-e040a48a32e7 |
  | max_retries| 3|
  | pools  |  |
  | tenant_id  | ab98e98fc0474508b8f4a44ae05dc118 |
  | timeout| 5|
  | type   | TCP  |
  ++--+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1358751/+subscriptions



[Yahoo-eng-team] [Bug 1354454] Re: First attempt at inline edit does not work

2014-08-12 Thread Sam Betts
Unable to replicate on horizon/master; marking as Invalid.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1354454

Title:
  First attempt at inline edit does not work

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  After opening a browser and bringing up the dashboard, navigate to the
  Admin > Projects panel and use inline edit to change one of the
  project names.  The new name does not seem to stick the first time you
  try.  If you try again it will work the second time and every time
  after that.  To make the problem happen again it seems you need to
  close the browser and reopen it.  I was able to reproduce this on
  Chrome, Firefox, and IE 10 so I don't think it's browser specific.

  I think this is causing one of the projects selenium tests to fail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1354454/+subscriptions



[Yahoo-eng-team] [Bug 1355100] [NEW] TestSubresourcePlugin functions are double indented

2014-08-11 Thread Sam Betts
Public bug reported:

The code for the TestSubresourcePlugin class in
neutron/test/unit/test_api_v2.py is double indented, which does not meet
PEP 8 requirements; however, flake8 seems to ignore the problem in
automated testing.
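To illustrate why this slips through (class names below are made up for the example), both forms are valid Python, so only a style checker can object:

```python
# Both classes run fine. 8 spaces is still a multiple of four, so
# pycodestyle's E111 ("indentation is not a multiple of four") does not
# fire, which is likely why flake8 ignores it; the later "over-indented"
# E117 check would catch this.
class DoubleIndented(object):
        def method(self):  # body indented 8 spaces, as in the bug
            return "double"


class SingleIndented(object):
    def method(self):  # body indented 4 spaces, per PEP 8
        return "single"
```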

** Affects: neutron
 Importance: Undecided
 Assignee: Sam Betts (sambetts)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Sam Betts (sambetts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355100

Title:
  TestSubresourcePlugin functions are double indented

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The code for the TestSubresourcePlugin class in
  neutron/test/unit/test_api_v2.py is double indented, which does not
  meet PEP 8 requirements; however, flake8 seems to ignore the problem
  in automated testing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1355100/+subscriptions



[Yahoo-eng-team] [Bug 1354829] Re: sudo: 3 incorrect password attempts in gate-neutron-python26

2014-08-11 Thread Sam Betts
After looking through logstash, it would appear that this is not
specific to Neutron: several of the Jenkins servers have suffered from
this error when testing different projects, including Neutron, Horizon
and Tempest.

** Also affects: horizon
   Importance: Undecided
   Status: New

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1354829

Title:
  sudo: 3 incorrect password attempts in gate-neutron-python26

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  New

Bug description:
  http://logs.openstack.org/86/110186/1/check/gate-neutron-
  python26/9dec53b/console.html

  2014-08-10 06:51:57.341 | Started by user anonymous
  2014-08-10 06:51:57.343 | Building remotely on bare-centos6-rax-dfw-1380820 
in workspace /home/jenkins/workspace/gate-neutron-python26
  2014-08-10 06:51:57.458 | [gate-neutron-python26] $ /bin/bash -xe 
/tmp/hudson1812648831707400818.sh
  2014-08-10 06:51:57.540 | + rpm -ql libffi-devel
  2014-08-10 06:51:57.543 | /tmp/hudson1812648831707400818.sh: line 2: rpm: 
command not found
  2014-08-10 06:51:57.543 | + sudo yum install -y libffi-devel
  2014-08-10 06:51:57.549 | sudo: no tty present and no askpass program 
specified
  2014-08-10 06:51:57.551 | Sorry, try again.
  2014-08-10 06:51:57.552 | sudo: no tty present and no askpass program 
specified
  2014-08-10 06:51:57.552 | Sorry, try again.
  2014-08-10 06:51:57.553 | sudo: no tty present and no askpass program 
specified
  2014-08-10 06:51:57.553 | Sorry, try again.
  2014-08-10 06:51:57.553 | sudo: 3 incorrect password attempts
  2014-08-10 06:51:57.571 | Build step 'Execute shell' marked build as failure

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcInN1ZG86IDMgaW5jb3JyZWN0IHBhc3N3b3JkIGF0dGVtcHRzXCIgQU5EIGZpbGVuYW1lOiBcImNvbnNvbGUuaHRtbFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDA3NjcyNjY4NDc3fQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1354829/+subscriptions
