[Yahoo-eng-team] [Bug 1501447] [NEW] QEMU built-in iscsi initiator support should be version-constrained in the driver

2015-09-30 Thread Matt Riedemann
Public bug reported:

This spec was approved in kilo:

http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented
/qemu-built-in-iscsi-initiator.html

With the code change here:

https://review.openstack.org/#/c/135854/

The spec and code change say:

"QEMU binary of Ubuntu 14.04 doesn’t have iSCSI support. Users have to
install libiscsi2 package and libiscsi-dev from Debian and rebuild QEMU
binary with libiscsi support by themselves."

This is a pretty terrible way of determining whether this can be supported.
It also effectively means that if you're not using Ubuntu/Debian you're on
your own for figuring out what version of qemu (and what version your
distro supports) is required to make this work.

This really should have had a version constraint in the driver code such
that if the version of qemu is not new enough we don't claim to support
the volume backend.
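A version constraint of the sort the report asks for could look like the sketch below. The minimum version tuple and the helper names are illustrative assumptions, not nova's actual constants or API.

```python
# Hypothetical sketch of a version-constrained capability check: gate the
# QEMU built-in iSCSI path on a minimum QEMU version instead of assuming
# operators rebuilt their binaries. MIN_QEMU_ISCSI_VERSION is an assumed
# value, not a constant from the nova libvirt driver.

MIN_QEMU_ISCSI_VERSION = (2, 1, 0)

def parse_version(version_str):
    """Turn a version string like '2.0.0' into a comparable tuple."""
    return tuple(int(part) for part in version_str.split('.'))

def supports_builtin_iscsi(qemu_version_str):
    """Return True only when QEMU is new enough for built-in iSCSI."""
    return parse_version(qemu_version_str) >= MIN_QEMU_ISCSI_VERSION

# A driver could then refuse (or fall back from) the volume backend:
# if not supports_builtin_iscsi(detected_qemu_version):
#     fall_back_to_host_iscsi_initiator()
```

Comparing tuples also handles multi-digit components correctly, which a naive string comparison of version numbers would not.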

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: libvirt volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501447

Title:
  QEMU built-in iscsi initiator support should be version-constrained in
  the driver

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  This spec was approved in kilo:

  http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented
  /qemu-built-in-iscsi-initiator.html

  With the code change here:

  https://review.openstack.org/#/c/135854/

  The spec and code change say:

  "QEMU binary of Ubuntu 14.04 doesn’t have iSCSI support. Users have to
  install libiscsi2 package and libiscsi-dev from Debian and rebuild
  QEMU binary with libiscsi support by themselves."

  This is a pretty terrible way of determining whether this can be supported.
  It also effectively means that if you're not using Ubuntu/Debian you're
  on your own for figuring out what version of qemu (and what version your
  distro supports) is required to make this work.

  This really should have had a version constraint in the driver code
  such that if the version of qemu is not new enough we don't claim to
  support the volume backend.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1501447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501451] [NEW] Inconsistency in dhcp-agent when filling hosts and opts files

2015-09-30 Thread Alexey I. Froloff
Public bug reported:

We have a bunch of subnets created in the pre-Icehouse era that have
ipv6_address_mode and ipv6_ra_mode unset.  For DHCPv6 functionality we
rely on the enable_dhcp setting of a subnet.  However, in _iter_hosts a
port is skipped iff ipv6_address_mode is set to SLAAC, but in
_generate_opts_per_subnet a subnet is skipped when ipv6_address_mode is
SLAAC or unset.

Since we cannot update the ipv6_address_mode attribute on existing subnets
(allow_put is False), this breaks DHCPv6 for these VMs.
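The mismatch reduces to two different skip predicates, sketched here in plain Python (illustrative only, not neutron's actual code):

```python
# The two dhcp-agent code paths use different skip conditions.
SLAAC = 'slaac'

def iter_hosts_skips(ipv6_address_mode):
    # _iter_hosts: skip the port only when the mode is explicitly SLAAC.
    return ipv6_address_mode == SLAAC

def generate_opts_per_subnet_skips(ipv6_address_mode):
    # _generate_opts_per_subnet: skip when the mode is SLAAC *or* unset.
    return ipv6_address_mode in (SLAAC, None)

# Pre-Icehouse subnets have the mode unset (None): host entries are
# written, but the matching DHCP options are not, breaking DHCPv6.
print(iter_hosts_skips(None))                # False
print(generate_opts_per_subnet_skips(None))  # True
```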

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501451

Title:
  Inconsistency in dhcp-agent when filling hosts and opts files

Status in neutron:
  New

Bug description:
  We have a bunch of subnets created in the pre-Icehouse era that have
  ipv6_address_mode and ipv6_ra_mode unset.  For DHCPv6 functionality we
  rely on the enable_dhcp setting of a subnet.  However, in _iter_hosts a
  port is skipped iff ipv6_address_mode is set to SLAAC, but in
  _generate_opts_per_subnet a subnet is skipped when ipv6_address_mode is
  SLAAC or unset.

  Since we cannot update the ipv6_address_mode attribute on existing
  subnets (allow_put is False), this breaks DHCPv6 for these VMs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501451/+subscriptions



[Yahoo-eng-team] [Bug 884482] Re: "Launch another instance like this"

2015-09-30 Thread Doug Fish
this wishlist bug has been open nearly 4 years without any activity
(other than assigning/unassigning). I'm going to move it to "Opinion /
Wishlist", which is an easily-obtainable queue of older requests that
have come in. This bug can be reopened (set back to "New") if someone
decides to work on it.

** Changed in: horizon
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/884482

Title:
  "Launch another instance like this"

Status in OpenStack Dashboard (Horizon):
  Opinion

Bug description:
  There are cases where it would be very convenient to be able to simply
  say "launch another instance like this" after you've launched one...
  Can we capture all the information needed to clone it just as is or
  would we have to store some extra information (like user data?) in
  order to make this work?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/884482/+subscriptions



[Yahoo-eng-team] [Bug 1321785] Re: RFE: block_device_info dict should have a password key rather than clear password

2015-09-30 Thread Matt Riedemann
** No longer affects: nova/icehouse

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1321785

Title:
  RFE: block_device_info dict should have a password key rather than
  clear password

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  See bug 1319943 and the related patch
  https://review.openstack.org/#/c/93787/ for details, but right now the
  block_device_info dict passed around in the nova virt driver can
  contain a clear text password for the auth_password key.

  That bug and patch are masking the password when logged in the
  immediate known locations, but this could continue to crop up so we
  should change the design such that the block_device_info dict doesn't
  contain the password but rather a key to a store that nova can
  retrieve the password for use.

  Comment from Daniel Berrange in the patch above:

  "Long term I think we need to figure out a way to remove the passwords
  from any data dicts we pass around. Ideally the block device info
  would merely contain something like a UUID to identify a password,
  which Nova could use to fetch the actual password from a secure
  password manager service at time of use. Thus we wouldn't have to
  worry about random objects/dicts containing actual passwords.
  Obviously this isn't something we can do now, but could you file an
  RFE to address this from a design POV, because masking passwords at
  time of logging call is not really a viable long term strategy IMHO."
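A minimal sketch of the proposed design, with an in-memory dict standing in for the secure password manager service (all names below are illustrative, not nova's API):

```python
import uuid

# In-memory stand-in for the secure password store the comment above
# describes; a real deployment would use an external secret service.
_secret_store = {}

def store_password(password):
    """Save the password and return an opaque reference to it."""
    ref = str(uuid.uuid4())
    _secret_store[ref] = password
    return ref

def fetch_password(ref):
    """Resolve the reference at time of use."""
    return _secret_store[ref]

# block_device_info now carries only the reference, so logging the dict
# can never leak the credential itself.
block_device_info = {'auth_password_ref': store_password('s3cret')}
```

With this shape, masking at each logging call site becomes unnecessary, since no dict passed around ever contains the clear-text password.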

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1321785/+subscriptions



[Yahoo-eng-team] [Bug 1501211] [NEW] subnet quota usage count wrong

2015-09-30 Thread Maurice Schreiber
Public bug reported:

The newly introduced disabling of the subnet create button
(https://review.openstack.org/#/c/121935/) does not respect the right
quota usages.

Apparently all subnets within a domain (including the ones of shared
networks) are counted to decide against the subnet quota if creation of
new subnets should be allowed.

But it should only be the subnets of the current project (counting the
shared networks is arguable).

Illustrating example:

I have got a domain 'Colors' with projects 'Blue' and 'Green'.

In project Blue are 10 subnets, in project Green 0.
Subnet Quota in both projects is 10 - nevertheless the 'Create Subnet' Button 
is disabled in both projects (but it shouldn't in project Green).
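The counting rule being argued for can be sketched like this (function and field names are illustrative, not Horizon's actual code):

```python
def subnets_used(subnets, project_id, count_shared=False):
    """Count only the subnets that should hit project_id's quota."""
    return sum(1 for s in subnets
               if s['tenant_id'] == project_id
               or (count_shared and s.get('shared', False)))

subnets = [{'tenant_id': 'blue'}] * 10   # all ten live in project Blue
quota = 10

# Project Green owns nothing, so its 'Create Subnet' button should stay
# enabled even though the domain as a whole already has 10 subnets.
print(subnets_used(subnets, 'green') < quota)   # True
print(subnets_used(subnets, 'blue') < quota)    # False
```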

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1501211

Title:
  subnet quota usage count wrong

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The newly introduced disabling of the subnet create button
  (https://review.openstack.org/#/c/121935/) does not respect the right
  quota usages.

  Apparently all subnets within a domain (including the ones of shared
  networks) are counted to decide against the subnet quota if creation
  of new subnets should be allowed.

  But it should only be the subnets of the current project (counting the
  shared networks is arguable).

  Illustrating example:

  I have got a domain 'Colors' with projects 'Blue' and 'Green'.

  In project Blue are 10 subnets, in project Green 0.
  Subnet Quota in both projects is 10 - nevertheless the 'Create Subnet' Button 
is disabled in both projects (but it shouldn't in project Green).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1501211/+subscriptions



[Yahoo-eng-team] [Bug 1501233] [NEW] DB downgrade is no longer supported in OpenStack

2015-09-30 Thread wangxiyuan
Public bug reported:

As downgrades are not supported after Kilo in OpenStack, we should
remove them now.

Rollbacks can be performed as described at the link below:

http://docs.openstack.org/openstack-ops/content/ops_upgrades-roll-
back.html

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1501233

Title:
  DB downgrade is no longer supported in OpenStack

Status in Glance:
  New

Bug description:
  As downgrades are not supported after Kilo in OpenStack, we should
  remove them now.

  Rollbacks can be performed as described at the link below:

  http://docs.openstack.org/openstack-ops/content/ops_upgrades-roll-
  back.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1501233/+subscriptions



[Yahoo-eng-team] [Bug 1501202] [NEW] Error spelling of "explicitely"

2015-09-30 Thread JuPing
Public bug reported:

There are some incorrect spellings in the below files:
  nova/etc/nova/rootwrap.conf
#Line10: # explicitely specify a full path (separated by ',')
  
  nova/nova/virt/xenapi/vmops.py
#Line1961: Missing paths are ignored, unless explicitely stated not to...
 
  nova/nova/objects/instance_group.py
#Line138: # field explicitely, we prefer to raise an Exception so the 
developer...

  nova/nova/objects/pci_device.py
#Line137: # obj_what_changed, set it explicitely...

I think the word "explicitely" should be spelled as "explicitly".

** Affects: nova
 Importance: Undecided
 Assignee: JuPing (jup-fnst)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => JuPing (jup-fnst)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501202

Title:
  Error spelling of "explicitely"

Status in OpenStack Compute (nova):
  New

Bug description:
  There are some incorrect spellings in the below files:
nova/etc/nova/rootwrap.conf
  #Line10: # explicitely specify a full path (separated by ',')

nova/nova/virt/xenapi/vmops.py
  #Line1961: Missing paths are ignored, unless explicitely stated not to...
   
nova/nova/objects/instance_group.py
  #Line138: # field explicitely, we prefer to raise an Exception so the 
developer...

nova/nova/objects/pci_device.py
  #Line137: # obj_what_changed, set it explicitely...

  I think the word "explicitely" should be spelled as "explicitly".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1501202/+subscriptions



[Yahoo-eng-team] [Bug 1501216] [NEW] Error spelling of "accomodate"

2015-09-30 Thread JuPing
Public bug reported:

There are some incorrect spellings in the below files:
  neutron/neutron/db/migration/cli.py
#Line242: # NOTE(ihrachyshka): this hack is temporary to accomodate those

  neutron/doc/source/devref/quality_of_service.rst
#Line170: ...addition of objects for new rule types. To accomodate this, 
fields common to...

I think the word "accomodate" should be spelled as "accommodate".

** Affects: neutron
 Importance: Undecided
 Assignee: JuPing (jup-fnst)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => JuPing (jup-fnst)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501216

Title:
  Error spelling of "accomodate"

Status in neutron:
  New

Bug description:
  There are some incorrect spellings in the below files:
neutron/neutron/db/migration/cli.py
  #Line242: # NOTE(ihrachyshka): this hack is temporary to accomodate those

neutron/doc/source/devref/quality_of_service.rst
  #Line170: ...addition of objects for new rule types. To accomodate this, 
fields common to...

  I think the word "accomodate" should be spelled as "accommodate".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501216/+subscriptions



[Yahoo-eng-team] [Bug 1501226] [NEW] Tempest failure ( Brocade plugin)

2015-09-30 Thread deepak kumar
Public bug reported:

The cases where the Tempest failures are happening are as below.

1) The plugin does not allow creation of any network type other than VLAN
(for example, a flat network); it raises an exception.
2) The return type of the API "add_router_interface" is a boolean, but it
needs to be a dictionary object.
3) "create_dhcp_port" is ending with a timeout.
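Item 2 is a contract mismatch; a sketch of the expected dictionary return value (the field names here are assumptions for illustration, not the actual Neutron payload):

```python
# add_router_interface is expected to return a dict describing the new
# interface, not a bare boolean.

def add_router_interface_broken(router_id, subnet_id):
    return True  # callers that index into the result will fail

def add_router_interface_fixed(router_id, subnet_id, port_id):
    return {'id': router_id,
            'subnet_id': subnet_id,
            'port_id': port_id}

result = add_router_interface_fixed('router-1', 'subnet-1', 'port-1')
print(result['subnet_id'])  # callers can read the fields back
```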

** Affects: neutron
 Importance: Undecided
 Assignee: deepak kumar (deepakk)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501226

Title:
  Tempest failure ( Brocade plugin)

Status in neutron:
  New

Bug description:
  The cases where the Tempest failures are happening are as below.

  1) The plugin does not allow creation of any network type other than
  VLAN (for example, a flat network); it raises an exception.
  2) The return type of the API "add_router_interface" is a boolean, but
  it needs to be a dictionary object.
  3) "create_dhcp_port" is ending with a timeout.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501226/+subscriptions



[Yahoo-eng-team] [Bug 1501186] [NEW] excessive attribute checks in extension processing

2015-09-30 Thread YAMAMOTO Takashi
Public bug reported:

Now that ExtensionDescriptor defines the contract for extensions,
most of the attribute checks in extension processing should not be necessary.

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501186

Title:
  excessive attribute checks in extension processing

Status in neutron:
  In Progress

Bug description:
  Now that ExtensionDescriptor defines the contract for extensions, most
  of the attribute checks in extension processing should not be necessary.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501186/+subscriptions



[Yahoo-eng-team] [Bug 1501513] [NEW] Controllers in app/tech-debt are not being tested

2015-09-30 Thread Rajat Vig
Public bug reported:

app/tech-debt/image-form.controller.js and app/tech-debt/hz-namespace-
resource-type-form.controller.js are not being covered at all by tests

** Affects: horizon
 Importance: Undecided
 Assignee: Rajat Vig (rajatv)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1501513

Title:
  Controllers in app/tech-debt are not being tested

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  app/tech-debt/image-form.controller.js and app/tech-debt/hz-namespace-
  resource-type-form.controller.js are not being covered at all by tests

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1501513/+subscriptions



[Yahoo-eng-team] [Bug 1197570] Re: Update instructions for database migration with additional steps that may be needed

2015-09-30 Thread Henry Gessau
I will close it with a doc update soon.

** Changed in: neutron
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1197570

Title:
  Update instructions for database migration with additional steps that
  may be needed

Status in neutron:
  In Progress

Bug description:
  These instructions

  https://wiki.openstack.org/wiki/Neutron/DatabaseMigration

  (and the README referred to therein) do not mention that backend-
  specific methods will be generated in the migration script, and that
  the developer must replace them with sa.* abstract methods.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1197570/+subscriptions



[Yahoo-eng-team] [Bug 1197570] Re: Update instructions for database migration with additional steps that may be needed

2015-09-30 Thread Armando Migliaccio
We no longer keep this info in the wiki.

** Tags removed: documentation
** Tags added: doc

** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: neutron
 Assignee: Akanksha (akanksha-aha) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1197570

Title:
  Update instructions for database migration with additional steps that
  may be needed

Status in neutron:
  Invalid

Bug description:
  These instructions

  https://wiki.openstack.org/wiki/Neutron/DatabaseMigration

  (and the README referred to therein) do not mention that backend-
  specific methods will be generated in the migration script, and that
  the developer must replace them with sa.* abstract methods.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1197570/+subscriptions



[Yahoo-eng-team] [Bug 1501515] [NEW] mysql exception DBReferenceError is raised when calling update_dvr_port_binding

2015-09-30 Thread Adolfo Duarte
Public bug reported:

Several upstream tests are failing with a DBReferenceError exception in
calls to ensure_dvr_port_binding (File
"/opt/stack/new/neutron/neutron/plugins/ml2/db.py", line 208).

This appears to be caused by accessing a database resource without
locking it (a race condition).

Here is an excerpt of the error: 
2015-09-29 18:39:00.822 7813 ERROR oslo_messaging.rpc.dispatcher 
DBReferenceError: (pymysql.err.IntegrityError) (1452, u'Cannot add or update a 
child row: a foreign key constraint fails (`neutron`.`ml2_dvr_port_bindings`, 
CONSTRAINT `ml2_dvr_port_bindings_ibfk_1` FOREIGN KEY (`port_id`) REFERENCES 
`ports` (`id`) ON DELETE CASCADE)') [SQL: u'INSERT INTO ml2_dvr_port_bindings 
(port_id, host, router_id, vif_type, vif_details, vnic_type, profile, status) 
VALUES (%s, %s, %s, %s, %s, %s, %s, %s)'] [parameters: 
(u'851c0627-5133-43e2-b7a3-da9c29afd4ea', 
u'devstack-trusty-hpcloud-b2-5150973', u'973254dc-d1aa-4177-b952-2ac648bad4b5', 
'unbound', '', 'normal', '', 'DOWN')]

An example of the failure can be found here: 
http://logs.openstack.org/17/227517/3/check/gate-tempest-dsvm-neutron-dvr/fc1efa2/logs/screen-q-svc.txt.gz?level=ERROR
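The race can be reproduced in miniature with plain Python standing in for the database (everything below is illustrative, not neutron or oslo.db code):

```python
class DBReferenceError(Exception):
    """Stand-in for the exception raised on the FK violation."""

ports = {'851c0627'}      # parent table rows (port ids)
dvr_port_bindings = []    # child table with a FK on port_id

def insert_binding(port_id, host):
    # Mimics the foreign key constraint: the parent row must still exist.
    if port_id not in ports:
        raise DBReferenceError('port %s no longer exists' % port_id)
    dvr_port_bindings.append({'port_id': port_id, 'host': host})

# A concurrent port delete wins the race before the binding insert...
ports.discard('851c0627')
try:
    insert_binding('851c0627', 'devstack-host')
except DBReferenceError:
    handled = True  # the fix implied: lock first, or handle/retry here
```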

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501515

Title:
  mysql exception DBReferenceError is raised when calling
  update_dvr_port_binding

Status in neutron:
  New

Bug description:
  Several upstream tests are failing with a DBReferenceError exception in
  calls to ensure_dvr_port_binding (File
  "/opt/stack/new/neutron/neutron/plugins/ml2/db.py", line 208).

  This appears to be caused by accessing a database resource without
  locking it (a race condition).

  Here is an excerpt of the error: 
  2015-09-29 18:39:00.822 7813 ERROR oslo_messaging.rpc.dispatcher 
DBReferenceError: (pymysql.err.IntegrityError) (1452, u'Cannot add or update a 
child row: a foreign key constraint fails (`neutron`.`ml2_dvr_port_bindings`, 
CONSTRAINT `ml2_dvr_port_bindings_ibfk_1` FOREIGN KEY (`port_id`) REFERENCES 
`ports` (`id`) ON DELETE CASCADE)') [SQL: u'INSERT INTO ml2_dvr_port_bindings 
(port_id, host, router_id, vif_type, vif_details, vnic_type, profile, status) 
VALUES (%s, %s, %s, %s, %s, %s, %s, %s)'] [parameters: 
(u'851c0627-5133-43e2-b7a3-da9c29afd4ea', 
u'devstack-trusty-hpcloud-b2-5150973', u'973254dc-d1aa-4177-b952-2ac648bad4b5', 
'unbound', '', 'normal', '', 'DOWN')]

  An example of the failure can be found here: 
  
http://logs.openstack.org/17/227517/3/check/gate-tempest-dsvm-neutron-dvr/fc1efa2/logs/screen-q-svc.txt.gz?level=ERROR

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501515/+subscriptions



[Yahoo-eng-team] [Bug 1489059] Re: "db type could not be determined" running py34

2015-09-30 Thread John L. Villalovos
** Also affects: ironic-lib
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489059

Title:
  "db type could not be determined" running py34

Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-lib:
  New
Status in neutron:
  New

Bug description:
  When running tox for the first time, the py34 execution fails with an
  error saying "db type could not be determined".

  This issue is known to be caused when the py27 run precedes the py34
  run, and can be solved by erasing the .testrepository directory and
  running "tox -e py34" first of all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1489059/+subscriptions



[Yahoo-eng-team] [Bug 1418452] Re: Unnecessary eye icon In Dashboard Horizon Admin --> change password

2015-09-30 Thread Neela Shah
Cannot reproduce this bug on liberty. The "eye" icon is there by design
- it is a toggle for viewing the pwd or hiding it.

This bug lacks the necessary information to effectively reproduce and
fix it, therefore it has been closed. Feel free to reopen the bug by
providing the requested information and set the bug status back to
''New''.

** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1418452

Title:
  Unnecessary eye icon In Dashboard Horizon  Admin --> change password

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When you sign in to the dashboard, under:
  Admin --> change password
  you can see near "Description" an eye icon which is unnecessary; it
  should be removed.

  Suggestion: add a legend for the blue mark (*), which means a mandatory field
  Version:
  [root@puma15 ~(keystone_admin)]# rpm -qa | grep horizon
  python-django-horizon-2014.2.1-5.el7ost.noarch
  [root@puma15 ~(keystone_admin)]# rpm -qa | grep rhel
  libreport-rhel-2.1.11-10.el7.x86_64
  [root@puma15 ~(keystone_admin)]# 
   
  attached screenshot

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1418452/+subscriptions



[Yahoo-eng-team] [Bug 1501505] [NEW] Allow updating of TLS refs

2015-09-30 Thread Phillip Toohill
Public bug reported:

A bug prevented updating of default_tls_container_ref, failing with a 503.
This bug also uncovered a few other issues with null key checks and
complaints when sni_container_refs were not provided.

** Affects: neutron
 Importance: Undecided
 Assignee: Phillip Toohill (phillip-toohill)
 Status: In Progress


** Tags: lbaas liberty-rc-potential

** Project changed: octavia => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501505

Title:
  Allow updating of TLS refs

Status in neutron:
  In Progress

Bug description:
  A bug prevented updating of default_tls_container_ref, failing with a
  503. This bug also uncovered a few other issues with null key checks
  and complaints when sni_container_refs were not provided.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501505/+subscriptions



[Yahoo-eng-team] [Bug 1501505] [NEW] Allow updating of TLS refs

2015-09-30 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

A bug prevented updating of default_tls_container_ref, failing with a 503.
This bug also uncovered a few other issues with null key checks and
complaints when sni_container_refs were not provided.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
Allow updating of TLS refs
https://bugs.launchpad.net/bugs/1501505
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1501285] [NEW] can't distinguish between some admin and non-admin pages

2015-09-30 Thread Martin Pavlásek
Public bug reported:

There is a mechanism in the integration tests to dynamically generate
go_to_somewhere methods to navigate among pages. But these methods
consider just the last two levels of the menu, so there is a problem
distinguishing between:

Project - Compute - Volume - Volumes tab: Project/Compute/Volumes/Volumes
produces: go_to_volumes_volumespage()

Admin - System - Volume: Admin/System/Volumes/Volumes
produces: go_to_volumes_volumespage()
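The collision can be demonstrated with a small sketch; including the full navigation path (an assumed fix for illustration, not the actual Horizon change) makes the generated names unique:

```python
def method_name_last_two(path):
    # Current behaviour: only the last two menu levels are used.
    return 'go_to_' + '_'.join(p.lower() for p in path[-2:]) + 'page'

def method_name_full_path(path):
    # Assumed fix: use every level of the path.
    return 'go_to_' + '_'.join(p.lower() for p in path) + 'page'

project_page = ['Project', 'Compute', 'Volumes', 'Volumes']
admin_page = ['Admin', 'System', 'Volumes', 'Volumes']

print(method_name_last_two(project_page))   # go_to_volumes_volumespage
print(method_name_last_two(admin_page))     # go_to_volumes_volumespage
print(method_name_full_path(project_page) ==
      method_name_full_path(admin_page))    # False
```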

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: integration-tests

** Description changed:

  There is mechanism in integration tests to dynamically generate
  go_to_somewhere methods to navigate among pages. But this methods
  consider just last two levels of menu, so there is problem to
  distinguish between:
  
- Project - Compute - Volume - Volumes tab: Project/Compute/Volumes/Volumes, 
produce go_to_volumes_volumespage()
- Admin-System-Volume: Admin/System/Volumes/Volumes, go_to_volumes_volumespage, 
produce go_to_volumes_volumespage()
+ Project - Compute - Volume - Volumes tab: Project/Compute/Volumes/Volumes
+ produces: go_to_volumes_volumespage()
+ 
+ Admin - System - Volume: Admin/System/Volumes/Volumes, 
go_to_volumes_volumespage
+ produces: go_to_volumes_volumespage()

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1501285

Title:
  can't distinguish between some admin and non-admin pages

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There is a mechanism in the integration tests to dynamically generate
  go_to_somewhere methods to navigate among pages. But these methods
  consider just the last two levels of the menu, so there is a problem
  distinguishing between:

  Project - Compute - Volume - Volumes tab: Project/Compute/Volumes/Volumes
  produces: go_to_volumes_volumespage()

  Admin - System - Volume: Admin/System/Volumes/Volumes
  produces: go_to_volumes_volumespage()

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1501285/+subscriptions



[Yahoo-eng-team] [Bug 1501238] [NEW] Unable to create a project at first time

2015-09-30 Thread Sakthi Saravanakumar P
Public bug reported:

After installing all the components of OpenStack, I entered the portal
with admin credentials. Initially I needed to create a project for a
tenant, so I tried to create a project and it threw the error "Danger:
Unable to create a project".

Then I created a new user, and after that I created a project, and it
succeeded. It seems to expect that at least one normal user exists.
Because I wanted to experiment more, I deleted all the normal users and
projects and then created a project; it succeeded and works fine.

Bug:

Initially it expects at least one user to be created before the project.
But the workflow says that you should add at least one project before
adding users.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1501238

Title:
  Unable to create a project at first time

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After installing all the OpenStack components, I logged into the
  portal with admin credentials. Initially I needed to create a project
  for a tenant, so I tried to create a project, but it threw the error
  "Danger: Unable to create a project".

  Then I created a new user, and after that the project was created
  successfully. It seems that at least one normal user must exist
  first. To experiment further, I deleted all the normal users and
  projects and then created a project; it was created and works fine.

  Bug:

  Initially it expects at least one user to be created before the
  project, but the workflow says that you should add at least one
  project before adding users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1501238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501328] [NEW] Replace the existing default subnetpool configuration options with an admin-only API

2015-09-30 Thread John Davidge
Public bug reported:

During the Liberty cycle a consensus was reached within the neutron l3
subteam that the existing method for setting default subnetpools in
neutron.conf was not working as well as it could.

As an admin, I want to be able to set the default subnetpool via the
neutron API, and without the need to restart services for the change to
take effect.

The proposed solution is to add a new boolean 'is_default' field to the
subnetpool object. This field will be False by default, configurable
only by the admin, and initially can only be True for one subnetpool in
each IP family. Future work could allow multiple defaults once RBAC is
implemented for subnetpools.

This change will greatly improve the default subnetpool workflow for
admins, as well as exposing information about the default subnetpool to
users.
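
A minimal sketch of the proposed invariant (at most one default subnetpool per IP family; all names below are illustrative, not Neutron code):

```python
class DefaultConflict(Exception):
    pass

def set_default(subnetpools, name):
    """Mark one subnetpool as the default, enforcing the proposed
    rule: at most one default per IP family."""
    pool = subnetpools[name]
    for other in subnetpools.values():
        if (other is not pool
                and other["ip_version"] == pool["ip_version"]
                and other["is_default"]):
            raise DefaultConflict(
                "a default subnetpool already exists for this IP family")
    pool["is_default"] = True

pools = {
    "v4-pool": {"ip_version": 4, "is_default": False},
    "v6-pool": {"ip_version": 6, "is_default": False},
}
set_default(pools, "v4-pool")
set_default(pools, "v6-pool")  # different family, so this is allowed
```

Relaxing the one-per-family check is the future RBAC-based work mentioned above.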

** Affects: neutron
 Importance: Undecided
 Assignee: John Davidge (john-davidge)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) => John Davidge (john-davidge)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501328

Title:
  Replace the existing default subnetpool configuration options with an
  admin-only API

Status in neutron:
  New

Bug description:
  During the Liberty cycle a consensus was reached within the neutron l3
  subteam that the existing method for setting default subnetpools in
  neutron.conf was not working as well as it could.

  As an admin, I want to be able to set the default subnetpool via the
  neutron API, and without the need to restart services for the change
  to take effect.

  The proposed solution is to add a new boolean 'is_default' field to
  the subnetpool object. This field will be False by default,
  configurable only by the admin, and initially can only be True for one
  subnetpool in each IP family. Future work could allow multiple
  defaults once RBAC is implemented for subnetpools.

  This change will greatly improve the default subnetpool workflow for
  admins, as well as exposing information about the default subnetpool
  to users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501328/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501338] [NEW] Add VLAN Transparency tests for Neutron API v2

2015-09-30 Thread Rishabh Das
Public bug reported:

VLAN transparency tests are missing. These need to be added.

Tests to be Implemented:

1. Create VLAN transparent Network
2. Show VLAN Transparent Networks
3. Delete VLAN Transparent Networks
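
A rough shape for these tests, using a stand-in client (illustrative only; the real tests would use tempest's Neutron v2 networks client):

```python
import unittest

class FakeNetworksClient:
    """Stand-in for a Neutron v2 networks client; behaviour here is
    illustrative only."""
    def __init__(self):
        self._nets = {}
        self._next = 0
    def create_network(self, **kwargs):
        self._next += 1
        net = dict(id=str(self._next), **kwargs)
        self._nets[net["id"]] = net
        return {"network": net}
    def show_network(self, net_id):
        return {"network": self._nets[net_id]}
    def delete_network(self, net_id):
        del self._nets[net_id]

class VlanTransparencyTest(unittest.TestCase):
    def setUp(self):
        self.client = FakeNetworksClient()
    def test_create_show_delete_vlan_transparent_network(self):
        # Create a VLAN-transparent network, show it, then delete it.
        net = self.client.create_network(
            name="net1", vlan_transparent=True)["network"]
        shown = self.client.show_network(net["id"])["network"]
        self.assertTrue(shown["vlan_transparent"])
        self.client.delete_network(net["id"])
        self.assertNotIn(net["id"], self.client._nets)
```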

** Affects: neutron
 Importance: Undecided
 Assignee: Rishabh Das (rishabh5290)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rishabh Das (rishabh5290)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501338

Title:
  Add VLAN Transparency tests for Neutron API v2

Status in neutron:
  New

Bug description:
  VLAN transparency tests are missing. These need to be added.

  Tests to be Implemented:

  1. Create VLAN transparent Network
  2. Show VLAN Transparent Networks
  3. Delete VLAN Transparent Networks

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501338/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421098] Re: ofagent: test_update_instance_port_admin_state failure

2015-09-30 Thread Armando Migliaccio
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421098

Title:
  ofagent: test_update_instance_port_admin_state failure

Status in networking-ofagent:
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  The recently introduced tempest test_update_instance_port_admin_state
  test case uncovered a bug in ofagent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ofagent/+bug/1421098/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1322588] Re: ERROR neutron.plugins.ofagent.agent.ofa_neutron_agent [req-fcf40f8b-7082-4e43-ad5b-e74c0824a1df None] Agent terminated!: Failed to get a datapath.

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-ofagent
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1322588

Title:
  ERROR neutron.plugins.ofagent.agent.ofa_neutron_agent [req-
  fcf40f8b-7082-4e43-ad5b-e74c0824a1df None] Agent terminated!: Failed
  to get a datapath.

Status in networking-ofagent:
  New

Bug description:
  ERROR neutron.plugins.ofagent.agent.ofa_neutron_agent [req-fcf40f8b-7082-4e43-ad5b-e74c0824a1df None] Agent terminated!: Failed to get a datapath.

  When I start the ofagent agent with:

  neutron-ofagent-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini

  it fails with the error:

  "ERROR neutron.plugins.ofagent.agent.ofa_neutron_agent [req-fcf40f8b-7082-4e43-ad5b-e74c0824a1df None] Agent terminated!: Failed to get a datapath."

  The error is raised in _handle_get_datapath() in:

  "/usr/local/lib/python2.7/dist-packages/ryu/app/ofctl/service.py"

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ofagent/+bug/1322588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293265] Re: OFAgent: Apply the patches for the "OVS agent loop slowdown" problem

2015-09-30 Thread Armando Migliaccio
Should this be still a neutron bug, please retarget to the Neutron
project.

** Also affects: networking-ofagent
   Importance: Undecided
   Status: New

** No longer affects: networking-ofagent

** Also affects: networking-ofagent
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1293265

Title:
  OFAgent: Apply the patches for the "OVS agent loop slowdown" problem

Status in networking-ofagent:
  New
Status in neutron icehouse series:
  Fix Released

Bug description:
  This report is for applying the patches that fixed the OVS agent to
  OFAgent. OFAgent is based on the OVS agent, so it has the same problem
  that was fixed in the current OVS agent, and it should be fixed as
  well. This report aims at fixing the following report for OFAgent:

  https://bugs.launchpad.net/neutron/+bug/1253993

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ofagent/+bug/1293265/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500361] Re: Generated config files are completely wrong

2015-09-30 Thread Ian Cordasco
** Also affects: glance/liberty
   Importance: High
 Assignee: Erno Kuvaja (jokke)
   Status: Fix Committed

** Also affects: glance/mitaka
   Importance: Undecided
   Status: New

** Changed in: glance/mitaka
   Status: New => Fix Committed

** Changed in: glance/mitaka
   Importance: Undecided => High

** Changed in: glance/liberty
   Importance: High => Critical

** Changed in: glance/liberty
   Status: Fix Committed => In Progress

** Changed in: glance/mitaka
 Assignee: (unassigned) => Erno Kuvaja (jokke)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1500361

Title:
  Generated config files are completely wrong

Status in Glance:
  In Progress
Status in Glance liberty series:
  In Progress
Status in Glance mitaka series:
  Fix Committed

Bug description:
  The files generated using oslo-config-generator are completely wrong.
  For example, the [keystone_authtoken] section is missing, among many
  others. This shows in the example config in git (i.e. etc/glance-
  api.conf in Glance's git repo).

  I believe the generator's config file is missing --namespace
  keystonemiddleware.auth_token (perhaps it should be used instead of
  keystoneclient.middleware.auth_token).

  IMO, this is a critical issue, which should be addressed with highest
  priority. This blocks me from testing Liberty rc1 in Debian.
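
  A generator config along these lines would pull in the auth_token
  middleware options (the namespace list below is illustrative; Glance's
  own etc/oslo-config-generator/ files are authoritative):

```ini
[DEFAULT]
output_file = etc/glance-api.conf.sample
namespace = glance.api
namespace = keystonemiddleware.auth_token
namespace = oslo.log
```

  The key point is that each library's options only appear in the
  sample when its oslo.config.opts namespace is listed here.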

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1500361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361211] Re: Hyper-V agent does not add new VLAN ids to the external port's trunked list on Hyper-V 2008 R2

2015-09-30 Thread Armando Migliaccio
** Tags removed: hyper-v

** Also affects: networking-hyperv
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361211

Title:
  Hyper-V agent does not add new VLAN ids to the external port's trunked
  list on Hyper-V 2008 R2

Status in networking-hyperv:
  New

Bug description:
  This issue affects Hyper-V 2008 R2 and does not affect Hyper-V 2012
  and above.

  The Hyper-V agent is correctly setting the VLAN ID and access mode
  settings on the vmswitch ports associated with a VM, but not on the
  trunked list associated with an external port. This is a required
  configuration.

  A workaround consists in setting the external port trunked list to
  contain all possible VLAN ids expected to be used in neutron's network
  configuration as provided by the following script:

  https://github.com/cloudbase/devstack-hyperv-
  incubator/blob/master/trunked_vlans_workaround_2008r2.ps1

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1361211/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374108] Re: Hyper-V agent cannot disconnect orphaned switch ports

2015-09-30 Thread Armando Migliaccio
** Tags removed: hyper-v

** Also affects: networking-hyperv
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374108

Title:
  Hyper-V agent cannot disconnect orphaned switch ports

Status in networking-hyperv:
  New

Bug description:
  On Windows / Hyper-V Server 2008 R2, when a switch port has to be
  disconnected because the VM using it was removed, DisconnectSwitchPort
  fails, returning an error code, and a HyperVException is raised. When
  the exception is raised, the switch port is not removed, which makes
  WMI operations more expensive.

  If the VM's VNIC has been removed, disconnecting the switch port is no
  longer necessary and it should be removed.

  Trace:
  http://paste.openstack.org/show/115297/

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1374108/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466547] Re: Hyper-V: Cannot add ICMPv6 security group rule

2015-09-30 Thread Armando Migliaccio
** No longer affects: neutron

** Tags removed: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466547

Title:
  Hyper-V: Cannot add ICMPv6 security group rule

Status in networking-hyperv:
  Fix Committed

Bug description:
  Security Group rules created with ethertype 'IPv6' and protocol 'icmp'
  cannot be added by the Hyper-V Security Groups Driver, as it cannot
  add rules with the protocol 'icmpv6'.

  This can be easily fixed by having the Hyper-V Security Groups Driver
  create rules with protocol '58' instead. [1] These rules will also
  have to be stateless, as ICMP rules cannot be stateful on Hyper-V.

  This bug is causing the test
  tempest.scenario.test_network_v6.TestGettingAddress.test_slaac_from_os
  to fail on Hyper-V.

  [1] http://www.iana.org/assignments/protocol-numbers/protocol-
  numbers.xhtml

  Log: http://paste.openstack.org/show/301866/

  Security Groups: http://paste.openstack.org/show/301870/
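
  The proposed fix amounts to translating the (ethertype, protocol) pair
  before handing the rule to Hyper-V; a minimal sketch (names are
  illustrative, not the actual driver code):

```python
ICMPV6_PROTOCOL_NUMBER = "58"  # IPv6-ICMP in the IANA protocol registry

def translate_protocol(ethertype, protocol):
    """Map a neutron rule protocol to something Hyper-V accepts.

    Hyper-V rejects the literal 'icmpv6', so IPv6 ICMP rules are
    expressed with protocol number 58 instead (and, per the report,
    must also be created stateless).
    """
    if ethertype == "IPv6" and protocol in ("icmp", "icmpv6"):
        return ICMPV6_PROTOCOL_NUMBER
    return protocol

assert translate_protocol("IPv6", "icmp") == "58"
assert translate_protocol("IPv4", "icmp") == "icmp"
```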

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1466547/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1231101] Re: Implicit mappings should be removed from the Hyper-V agent for consistency with the ML2 driver

2015-09-30 Thread Armando Migliaccio
** Tags removed: hyper-v

** Also affects: networking-hyperv
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1231101

Title:
  Implicit mappings should be removed from the Hyper-V agent for
  consistency with the ML2 driver

Status in networking-hyperv:
  New

Bug description:
  The Grizzly version of the Hyper-V agent provides a zeroconf option to
  automatically map physical networks on vswitches with the same name if
  a mapping is not found.

  This is not compatible with the way in which the ML2 plugin works and
  since implicit mappings are a scarcely used feature it can be removed
  from the Hyper-V agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1231101/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298034] Re: Nova Hyper-V driver fails occasionally with a x_wmi_uninitialised_thread exception

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-hyperv
   Importance: Undecided
   Status: New

** Tags removed: hyper-v

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1298034

Title:
  Nova Hyper-V driver fails occasionally with a
  x_wmi_uninitialised_thread exception

Status in networking-hyperv:
  New
Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  The Nova Hyper-V driver can fail occasionally with:

  x_wmi_uninitialised_thread ("WMI returned a syntax error: you're
  probably running inside a thread without first calling
  pythoncom.CoInitialize[Ex]")

  http://64.119.130.115/82904/14/Hyper-V_logs/hv-compute1/neutron-
  hyperv-agent.log.gz

  Each thread that uses COM needs to initialize COM by calling
  pythoncom.CoInitialize or pythoncom.CoInitializeEx.

  Error stack trace:

  http://64.119.130.115/82904/14/Hyper-V_logs/hv-compute1/neutron-
  hyperv-agent.log.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1298034/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501539] [NEW] Facet options should limit to max amount

2015-09-30 Thread Travis Tripp
Public bug reported:

The facets API returns the available options for a given facet. However,
there needs to be a maximum number of options allowed, or it should just
return the facet without options. Ideally, this would have a default
(possibly configurable), but it could also be specified as a query param.
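
One possible shape for the capping logic (the names and default value below are illustrative, not Searchlight's actual API):

```python
DEFAULT_MAX_FACET_OPTIONS = 20  # illustrative default; would be configurable

def limit_facet(facet, max_options=DEFAULT_MAX_FACET_OPTIONS):
    """Return the facet with its options capped: if the option list
    exceeds the limit, drop the options entirely and return just the
    facet metadata, as suggested above."""
    options = facet.get("options", [])
    if len(options) > max_options:
        return {k: v for k, v in facet.items() if k != "options"}
    return facet

facet = {"name": "OS::Nova::Server.status",
         "options": [{"key": str(i)} for i in range(100)]}
assert "options" not in limit_facet(facet)          # 100 > 20, dropped
assert "options" in limit_facet(facet, max_options=100)
```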

** Affects: searchlight
 Importance: Undecided
 Status: New

** Also affects: searchlight
   Importance: Undecided
   Status: New

** No longer affects: horizon

** Changed in: searchlight
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1501539

Title:
  Facet options should limit to max amount

Status in OpenStack Search (Searchlight):
  New

Bug description:
  The facets API returns the available options for a given facet.
  However, there needs to be a maximum number of options allowed, or it
  should just return the facet without options. Ideally, this would
  have a default (possibly configurable), but it could also be
  specified as a query param.

To manage notifications about this bug go to:
https://bugs.launchpad.net/searchlight/+bug/1501539/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435638] Re: ofagent: occasional test_server_connectivity_rebuild failure

2015-09-30 Thread Armando Migliaccio
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1435638

Title:
  ofagent: occasional test_server_connectivity_rebuild failure

Status in networking-ofagent:
  Fix Committed

Bug description:
  tempest test_server_connectivity_rebuild occasionally fails with
  ofagent.

  It turned out to be a problem in port monitoring: when a port is
  removed and then recreated with the same name within an agent's
  polling interval, the agent can miss the update and fail to update
  the corresponding flows.

  note1: flow updates are necessary because the switch is free to choose a different ofport number for the recreated port.
  note2: it isn't a problem for the OVS agent because it uses the NORMAL action and doesn't have flows for a specific ofport.
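
  The race can be illustrated with a toy polling diff (illustrative
  only, not the agent's code): diffing port sets by name alone misses a
  remove-and-recreate between two polls, while tracking (name, ofport)
  pairs does not.

```python
def diff_by_name(old, new):
    """Naive diff: ports keyed by name only."""
    return set(new) - set(old), set(old) - set(new)

def diff_by_name_and_ofport(old, new):
    """Diff keyed by (name, ofport), so a recreated port whose ofport
    changed still shows up as removed + added."""
    old_pairs, new_pairs = set(old.items()), set(new.items())
    return new_pairs - old_pairs, old_pairs - new_pairs

# Port 'tap1' is deleted and recreated between two polls; the switch
# assigns a new ofport number to the recreated port.
before = {"tap1": 5}
after = {"tap1": 7}

added, removed = diff_by_name(before, after)
assert added == set() and removed == set()  # the update is missed

added, removed = diff_by_name_and_ofport(before, after)
assert added == {("tap1", 7)} and removed == {("tap1", 5)}
```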

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ofagent/+bug/1435638/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387394] Re: apic driver doesn't register standard l3 plugin ports

2015-09-30 Thread Armando Migliaccio
** Tags removed: apic ml2

** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1387394

Title:
  apic driver doesn't register standard l3 plugin ports

Status in networking-cisco:
  New

Bug description:
  When running APIC ML2 driver with standard L3 plugin, the ports
  associated with the router interface should be registered on the
  backend.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1387394/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371808] Re: For Arista ML2 Driver, block port create/update operations

2015-09-30 Thread Armando Migliaccio
** Tags removed: api arista ml2

** Also affects: networking-arista
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1371808

Title:
  For Arista ML2 Driver, block port create/update operations

Status in networking-arista:
  New

Bug description:
  If Arista EOS is not in a ready state (i.e. not fully initialized),
  do not accept any create or update operation. Simply reject the
  create/update_xxx_precommit operations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-arista/+bug/1371808/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1320747] Re: "Error: Port-profile is not configured to accept switching commands" in brocade plugin

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-brocade
   Importance: Undecided
   Status: New

** Tags removed: brocade

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1320747

Title:
  "Error: Port-profile is not configured to accept switching commands"
  in brocade plugin

Status in networking-brocade:
  New

Bug description:
  Creating a network fails on a Brocade 6710 switch running NOS 4.10a:

  2014-05-16 11:41:39.096 28138 TRACE
  neutron.plugins.brocade.NeutronPlugin RPCError:
  NSM_ERR_DCM_APPM_VLAN_PROFILE_MODE_INVALID | %Error: Port-profile is
  not configured to accept switching commands

  The problem is that apparently the NETCONF command to set the port
  into port-profile mode has changed. The current XML template
  (neutron/plugins/brocade/nos/nctemplates.py) (not working):

  # Configure L2 mode for VLAN sub-profile (port_profile_name)
  CONFIGURE_L2_MODE_FOR_VLAN_PROFILE = """
  ... (NETCONF XML template taking {name}; the XML body was not
  preserved in this archive) ...
  """

  The correct template (working):

  # Configure L2 mode for VLAN sub-profile (port_profile_name)
  CONFIGURE_L2_MODE_FOR_VLAN_PROFILE = """
  ... (corrected NETCONF XML template taking {name}; the XML body was
  not preserved in this archive) ...
  """

  Fixing the template resolves the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-brocade/+bug/1320747/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269124] Re: ML2 unit test coverage - mech_arista

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-arista
   Importance: Undecided
   Status: New

** No longer affects: neutron

** Tags removed: arista

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1269124

Title:
  ML2 unit test coverage - mech_arista

Status in networking-arista:
  New

Bug description:
  From tox -e cover neutron.tests.unit.ml2; coverage report -m:

  Name                                                     Stmts   Miss Branch BrPart  Cover   Missing
  neutron/plugins/ml2/drivers/mech_arista/mechanism_arista   414    244     98     85    36%   48-52, 62-64, 78-86, 110, 133, 184, 205-217, 257-258, 266-271, 284, 327-329, 346-457, 460-463, 466-469, 495-497, 516-535, 544-548, 556-576, 591-603, 632-668, 678-682, 690-733, 758-783, 797-798, 801-806, 810-811, 815-824
  neutron/plugins/ml2/drivers/mech_arista/db                 152     36     18      9    74%   70, 85, 118-120, 213-218, 258, 352-368, 373-381, 398-402, 405-406, 410-411, 415-417, 421-423
  neutron/plugins/ml2/drivers/mech_arista/config               3      0      0      0   100%
  neutron/plugins/ml2/drivers/mech_arista/exceptions           6      0      0      0   100%
  neutron/plugins/ml2/drivers/mech_arista/__init__             0      0      0      0   100%

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-arista/+bug/1269124/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1207139] Re: Add configuration for ignorable exceptions from Cisco Nexus switch

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1207139

Title:
  Add configuration for ignorable exceptions from Cisco Nexus switch

Status in networking-cisco:
  New

Bug description:
  When the Cisco nexus plugin attempts certain configuration operations
  on the Nexus switch, the Nexus switch may return errors (depending on
  the version of Nexus HW and SW) which are benign.  When these
  configuration errors are generated, the ncclient (NetConf Client)
  module, which is used by the Cisco plugin to communicate with the
  Nexus switch, reports these errors as a generic configuration
  exception, but with a string representation which includes a
  description of the  specific error condition.

  For example, some versions of the Nexus 3K will not allow state changes for 
what those switches consider the extended VLAN range (1006-4094), including 
these state-change config commands:
  active
  no shutdown
  When a Nexus 3K reports errors for these state-change commands, the ncclient 
module will report a configuration failure exception which includes these 
strings in their string representations:
  "Can't modify state for extended"
  "Command is only allowed on VLAN"

  The Cisco Nexus plugin currently looks for and ignores any config
  exceptions with the above error strings whenever the 'active' and 'no
  shutdown' commands are sent to the Nexus switch. Admittedly, it's a
  bit ugly for the plugin to be matching strings for this purpose,
  instead of specific exception types. However, the current ncclient
  module only gives us the description strings on which to match. The
  ncclient module is external to OpenStack, and it may not be possible
  to convince the ncclient community to modify their exception
  generation for something that may be considered Cisco-specific.

  It's possible that these error strings could be modified in the
  future, or that there are other errors reported or other config
  operations which also need to be ignored. In order to handle this, we
  need to add configuration for the Cisco Nexus switch which will allow
  us to define which errors can be ignored and for which configuration
  operations.  The default list for this configuration should be the
  above error conditions (for VLAN state-change commands).  If an
  explicit list is provided in the plugin config, then that list should
  override the default list.
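
  The proposed matching could look roughly like this (option names and
  structure are illustrative, not the actual plugin config):

```python
# Per-operation lists of error substrings to ignore, defaulting to the
# known benign Nexus 3K VLAN state-change errors described above. An
# explicit list in the plugin config would override these defaults.
DEFAULT_IGNORABLE_ERRORS = {
    "vlan_state_change": [
        "Can't modify state for extended",
        "Command is only allowed on VLAN",
    ],
}

def is_ignorable(operation, exc, ignorable=DEFAULT_IGNORABLE_ERRORS):
    """Return True if the config exception's text matches an ignorable
    error substring registered for this operation."""
    return any(s in str(exc) for s in ignorable.get(operation, []))

exc = Exception("%Error: Can't modify state for extended VLAN 1006")
assert is_ignorable("vlan_state_change", exc)
assert not is_ignorable("create_vlan", exc)
```

  String matching remains ugly, as the report notes, but keeping the
  strings in configuration at least makes them adjustable without a
  code change.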

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1207139/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1205172] Re: Eliminate 1-line static _should_call_create_net() in Cisco plugin

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1205172

Title:
  Eliminate 1-line static _should_call_create_net() in Cisco plugin

Status in networking-cisco:
  New

Bug description:
  Perform some miscellaneous cleanup in the Cisco Nexus plugin:

  - Eliminate the single-line static method _should_call_create_net().
    This method was originally added as a unit test hook (it was a
    method which could be mocked to force creation of a network
    regardless of instance ID and device_owner in unit testing). This
    can be eliminated by first changing the unit tests so that an
    instance ID is set. Once an instance ID is set, the check in
    _should_call_create_net() can be moved to the calling method
    (create_port()).

  - Eliminate the try/except in
    nexus_db_v2.py::get_port_switch_bindings(). This try/except is
    designed to catch a NoResultFound exception for a call to all().
    However, if there are no matching entries, all() will simply return
    an empty list, so this try/except has no value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1205172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1180944] Re: Internal Vlan assignment issue in Metaplugin while provisioning network

2015-09-30 Thread Armando Migliaccio
Metaplugin is dead.

** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: neutron
 Assignee: Nachi Ueno (nati-ueno) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1180944

Title:
  Internal Vlan assignment issue in Metaplugin while provisioning
  network

Status in neutron:
  Invalid

Bug description:
  Hi ,

  When creating subnets for networks, a few subnets' DHCP tap interfaces were being set to internal VLAN tag 4095, which is a dead VLAN tag and inappropriate. This is not the case for other networks, where the VLAN tag was between 1 and 4094, the valid VLAN range.
  So wherever 4095 is not assigned, network connectivity is fine and the VMs are able to get DHCP IPs.

  This happens randomly.  I have isolated this by testing OVS plugin
  first and ensured no issues in network provisioning, then I tested the
  metaplugin with OVS, encountered the mentioned issue.

  Sample output:

  Bridge br-int
  Port "tapc5b0d0bd-e6"   <-- dhcp tap
  tag: 4095
  Interface "tapc5b0d0bd-e6"

  
  Please let me know any further logs are needed.

  
  -Ashok

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1180944/+subscriptions



[Yahoo-eng-team] [Bug 1255005] Re: Midonet Plugin performance issues during large amount of ARPs

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-midonet
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1255005

Title:
  Midonet Plugin performance issues during large amount of ARPs

Status in networking-midonet:
  New

Bug description:
  If traffic is generated for a large amount of IPs within a range at
  roughly the same time, Midonet will attempt to handle the traffic by
  ARPing, and experience performance problems.

  In order to prevent this, Midonet can blackhole all traffic that is
  not destined for an IP that is assigned to a port or being used as a
  floating IP.

  This can be done by adding a 'blackhole' route to the provider router
  when the external network is created, and adding individual routes for
  ports on the external network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1255005/+subscriptions



[Yahoo-eng-team] [Bug 1231790] Re: Midonet security rules have multiple problems

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-midonet
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1231790

Title:
  Midonet security rules have multiple problems

Status in networking-midonet:
  New

Bug description:
  The Midonet plugin's implementation of security rules has several problems, 
including:
  -Egress chains do not have a rule to drop traffic not handled by any security 
rules.
  -Changing a port's security groups does not update the port's rule chains.
  -The IP spoofing rules do not support IPv6.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1231790/+subscriptions



[Yahoo-eng-team] [Bug 1244025] Re: Remote security group criteria don't work in Midonet plugin

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-midonet
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1244025

Title:
  Remote security group criteria don't work in Midonet plugin

Status in networking-midonet:
  New
Status in neutron havana series:
  New
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  When creating a security rule that specifies a remote security group
  (rather than a CIDR range), the Midonet plugin does not enforce this
  criterion. With an egress rule, for example, one of the criteria for a
  particular rule may be that only traffic to security group A will be
  allowed out. This criterion is ignored, and traffic will be allowed
  out regardless of the destination security group, provided that it
  conforms to the rule's other criteria.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1244025/+subscriptions



[Yahoo-eng-team] [Bug 1501556] [NEW] periodic task for erroring build timeouts tries to set error state on deleted instances

2015-09-30 Thread Sam Morrison
Public bug reported:

In our nova-compute logs we get a ton of these messages over and over

2015-10-01 11:01:54.781 30811 WARNING nova.compute.manager [req-
f61f4f85-72e7-481b-a8a3-90551bdc4b58 - - - - -] [instance: 75f733b5
-842e-4bde-9570-efa2735e6f12] Instance build timed out. Set to error
state.

Upon looking in the DB they are all deleted

select deleted_at, deleted, vm_state, task_state from instances where uuid = '75f733b5-842e-4bde-9570-efa2735e6f12';
+---------------------+---------+----------+------------+
| deleted_at          | deleted | vm_state | task_state |
+---------------------+---------+----------+------------+
| 2015-08-17 00:47:18 |   12283 | building | deleting   |
+---------------------+---------+----------+------------+

We have instance_build_timeout = 3600

I think _check_instance_build_time in compute.manager needs to filter on
deleted instances but there may be a reason it checks deleted instances
too.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501556

Title:
  periodic task for erroring build timeouts tries to set error state on
  deleted instances

Status in OpenStack Compute (nova):
  New

Bug description:
  In our nova-compute logs we get a ton of these messages over and over

  2015-10-01 11:01:54.781 30811 WARNING nova.compute.manager [req-
  f61f4f85-72e7-481b-a8a3-90551bdc4b58 - - - - -] [instance: 75f733b5
  -842e-4bde-9570-efa2735e6f12] Instance build timed out. Set to error
  state.

  Upon looking in the DB they are all deleted

  select deleted_at, deleted, vm_state, task_state from instances where uuid = '75f733b5-842e-4bde-9570-efa2735e6f12';
  +---------------------+---------+----------+------------+
  | deleted_at          | deleted | vm_state | task_state |
  +---------------------+---------+----------+------------+
  | 2015-08-17 00:47:18 |   12283 | building | deleting   |
  +---------------------+---------+----------+------------+

  We have instance_build_timeout = 3600

  I think _check_instance_build_time in compute.manager needs to filter
  on deleted instances but there may be a reason it checks deleted
  instances too.
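
The suggested filter can be sketched as plain Python over instance records (the function name and dict layout are illustrative, not nova's actual ORM query):

```python
from datetime import datetime, timedelta


def instances_to_error(instances, timeout, now=None):
    """Return instances still 'building' past the timeout, skipping
    instances that were already deleted (the missing filter reported
    against _check_instance_build_time)."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(seconds=timeout)
    return [
        inst for inst in instances
        if inst["vm_state"] == "building"
        and not inst["deleted"]          # skip soft-deleted rows
        and inst["created_at"] < cutoff
    ]
```

With this filter, the row shown above (deleted = 12283) would no longer be picked up by the periodic task.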

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1501556/+subscriptions



[Yahoo-eng-team] [Bug 1501558] [NEW] test_associate_already_associated_floating_ip fails in juno with ebtables kernel failure (nova-net)

2015-09-30 Thread Matt Riedemann
Public bug reported:

Just started seeing this show up in juno jobs:

http://logs.openstack.org/39/229639/2/check/gate-tempest-dsvm-full-
juno/a0bb0c0/logs/screen-n-net.txt.gz?level=TRACE#_2015-09-30_22_21_27_400

2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/network/floating_ips.py", line 393, in _associate_floating_ip
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher     do_associate()
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/openstack/common/lockutils.py", line 272, in inner
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher     return f(*args, **kwargs)
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/network/floating_ips.py", line 386, in do_associate
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher     interface=interface)
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/network/floating_ips.py", line 370, in do_associate
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher     interface, fixed['network'])
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/network/l3.py", line 114, in add_floating_ip
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher     l3_interface_id, network)
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/network/linux_net.py", line 784, in ensure_floating_forward
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher     ensure_ebtables_rules(*floating_ebtables_rules(fixed_ip, network))
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/openstack/common/lockutils.py", line 272, in inner
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher     return f(*args, **kwargs)
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/network/linux_net.py", line 1649, in ensure_ebtables_rules
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher     _execute(*cmd, run_as_root=True)
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/network/linux_net.py", line 1229, in _execute
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher     return utils.execute(*cmd, **kwargs)
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/utils.py", line 187, in execute
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher     return processutils.execute(*cmd, **kwargs)
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/openstack/common/processutils.py", line 222, in execute
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher     cmd=sanitized_cmd)
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher ProcessExecutionError: Unexpected error while running command.
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ebtables -t nat -I PREROUTING --logical-in br100 -p ipv4 --ip-src 10.1.0.7 ! --ip-dst 10.1.0.0/20 -j redirect --redirect-target ACCEPT
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher Exit code: 255
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher Stdout: u''
2015-09-30 22:21:27.400 24730 TRACE oslo.messaging.rpc.dispatcher Stderr: u"Unable to update the kernel. Two possible causes:\n1. Multiple ebtables programs were executing simultaneously. The
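
The stderr above points at the well-known ebtables concurrency limitation. One common mitigation, sketched here under assumptions (a RuntimeError stands in for nova's ProcessExecutionError, and the retry parameters are illustrative, not nova's actual fix for this bug), is to retry the command when it fails transiently:

```python
import time


def with_retries(fn, attempts=3, delay=0.0):
    """Call fn(); on failure, retry up to `attempts` times total,
    sleeping `delay` seconds between tries. Re-raise the last error
    if every attempt fails."""
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except RuntimeError as exc:  # stand-in for ProcessExecutionError
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```

A caller would wrap the ebtables invocation, e.g. `with_retries(lambda: run_ebtables_cmd(cmd))`, where `run_ebtables_cmd` is hypothetical.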

[Yahoo-eng-team] [Bug 1384365] Re: Domain admin should be allowed to show their domain

2015-09-30 Thread Hidekazu Nakamura
*** This bug is a duplicate of bug 1480480 ***
https://bugs.launchpad.net/bugs/1480480

** This bug has been marked a duplicate of bug 1480480
   keystone v3 example policy file should allow domain admin to  get it's 
current domain

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1384365

Title:
  Domain admin should be allowed to show their domain

Status in Keystone:
  Incomplete

Bug description:
  When using the policy.v3cloudsample.json, a domain admin (possessing
  the 'admin' role with a domain scoped token) is not allowed to show
  their own domain.  This operation is restricted to the cloud admin:

"identity:get_domain": "rule:cloud_admin"

  The admin of a domain should be allowed to view/show their own domain.
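
A hedged illustration of the kind of relaxed rule the reporter is asking for in policy.v3cloudsample.json; the exact token attribute used to match the target domain is an assumption, not a confirmed fix:

```json
{
    "identity:get_domain": "rule:cloud_admin or (role:admin and token.domain.id:%(target.domain.id)s)"
}
```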

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1384365/+subscriptions



[Yahoo-eng-team] [Bug 1381285] Re: Nexus driver keeps trying to configure VLAN at the switch

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381285

Title:
  Nexus driver keeps trying to configure VLAN at the switch

Status in networking-cisco:
  New

Bug description:
  OpenStack version: Juno

  Issue: Nexus driver keeps trying to configure VLAN at the switch after
  update_port_postcommit failed due to ssh connection issue

  Description:
  I used devstack to deploy Multi-Node OpenStack with the Cisco Nexus ML2 plugin for VLAN network type.

  There’s a requirement to first ssh to the Nexus switch from the
  Controller node in order to set up the ssh credentials before
  launching VMs.

  I did not do that and caused the ssh exception and
  update_port_postcommit failure, resulting in the VLAN not configured
  at the Nexus switch.

  In the Controller's screen-q-svc.log, the Nexus driver was shown repeatedly trying to configure the VLAN every 2+ seconds; and, after a few minutes, this caused ssh failures to the Nexus switch from my Mac.

  DANNCHOI-M-G07T:~ dannychoi$ ssh -l admin 172.22.191.67
  ssh_exchange_identification: Connection closed by remote host
  DANNCHOI-M-G07T:~ dannychoi$ ssh -l admin 172.22.191.67
  ssh_exchange_identification: Connection closed by remote host
  DANNCHOI-M-G07T:~ dannychoi$ ssh -l admin 172.22.191.67
  ssh_exchange_identification: Connection closed by remote host

  This continues until the VM is deleted; and after a few minutes, I can
  ssh to the Nexus switch from my MAC.

  Below is a snippet of the log that keeps repeating every 2+ seconds:

  2014-10-14 20:41:26.381 DEBUG neutron.context [req-25e05ffb-da63-4aaf-b7a8-6e58c0762abc None None] Arguments dropped when creating context: {u'project_name': None, u'tenant': None} from (pid=961) __init__ /opt/stack/neutron/neutron/context.py:83
  2014-10-14 20:41:26.382 DEBUG neutron.plugins.ml2.rpc [req-25e05ffb-da63-4aaf-b7a8-6e58c0762abc None None] Device 0f9454bc-1a31-4743-9c98-9c42a084e673 up at agent ovs-agent-qa4 from (pid=961) update_device_up /opt/stack/neutron/neutron/plugins/ml2/rpc.py:149
  2014-10-14 20:41:26.402 DEBUG neutron.openstack.common.lockutils [req-25e05ffb-da63-4aaf-b7a8-6e58c0762abc None None] Got semaphore "db-access" from (pid=961) lock /opt/stack/neutron/neutron/openstack/common/lockutils.py:168
  2014-10-14 20:41:26.426 DEBUG neutron.plugins.ml2.drivers.cisco.nexus.nexus_db_v2 [req-25e05ffb-da63-4aaf-b7a8-6e58c0762abc None None] add_nexusport_binding() called from (pid=961) add_nexusport_binding /opt/stack/neutron/neutron/plugins/ml2/drivers/cisco/nexus/nexus_db_v2.py:45
  2014-10-14 20:41:26.431 DEBUG neutron.plugins.ml2.drivers.cisco.nexus.nexus_db_v2 [req-25e05ffb-da63-4aaf-b7a8-6e58c0762abc None None] get_nexusvlan_binding() called from (pid=961) get_nexusvlan_binding /opt/stack/neutron/neutron/plugins/ml2/drivers/cisco/nexus/nexus_db_v2.py:39
  2014-10-14 20:41:26.437 DEBUG neutron.plugins.ml2.drivers.cisco.nexus.mech_cisco_nexus [req-25e05ffb-da63-4aaf-b7a8-6e58c0762abc None None] Nexus: create & trunk vlan %s from (pid=961) _configure_switch_entry /opt/stack/neutron/neutron/plugins/ml2/drivers/cisco/nexus/mech_cisco_nexus.py:125
  2014-10-14 20:41:26.438 DEBUG neutron.plugins.ml2.drivers.cisco.nexus.nexus_network_driver [req-25e05ffb-da63-4aaf-b7a8-6e58c0762abc None None] NexusDriver:
  <__XML__MODE__exec_configure>
    <__XML__PARAM_value>300
    <__XML__MODE_vlan>
      dan-300
   from (pid=961) create_vlan /opt/stack/neutron/neutron/plugins/ml2/drivers/cisco/nexus/nexus_network_driver.py:123
  2014-10-14 20:41:26.438 DEBUG ncclient.transport.session [req-25e05ffb-da63-4aaf-b7a8-6e58c0762abc None None]  created: client_capabilities=['urn:ietf:params:netconf:capability:writable-running:1.0', 'urn:ietf:params:netconf:capability:rollback-on-error:1.0', 'urn:ietf:params:netconf:capability:validate:1.0', 'urn:ietf:params:netconf:capability:confirmed-commit:1.0', 'urn:ietf:params:netconf:capability:url:1.0?scheme=http,ftp,file,https,sftp', 'urn:ietf:params:netconf:capability:candidate:1.0', 'urn:ietf:params:netconf:capability:xpath:1.0', 'urn:ietf:params:netconf:capability:startup:1.0', 'urn:ietf:params:xml:ns:netconf:base:1.0', 'urn:liberouter:params:netconf:capability:power-control:1.0urn:ietf:params:netconf:capability:interleave:1.0'] from (pid=961) __init__ /usr/local/lib/python2.7/dist-packages/ncclient/transport/session.py:42
  2014-10-14 20:41:26.438 DEBUG ncclient.transport.session [req-25e05ffb-da63-4aaf-b7a8-6e58c0762abc None None]

[Yahoo-eng-team] [Bug 1413319] Re: Traceback when deleting VxLAN networks using N1kv plugin

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1413319

Title:
  Traceback when deleting VxLAN networks using N1kv plugin

Status in networking-cisco:
  New

Bug description:
  Traceback when deleting VxLAN networks using N1kv plugin

  OpenStack Version: Kilo

  $ nova-manage version
  2015.1

  $ neutron --version
  2.3.10

  Steps to Repo:
1. Setup Testbed with multi node CSR/N1Kv
2. Create an overlay Network Profile
3. Create a network using overlay network profile.
4. Attach to Router (CSR)
5. Create/Attach VMs
6. Delete VMs
7. Delete port to Router
8. Delete network
9. Check log for errors

  
   
  2015-01-20 11:22:17.447 DEBUG neutron.plugins.cisco.n1kv.n1kv_neutron_plugin [req-ba1e4506-b97b-4a47-ad0d-7070df36af33 admin 3476e663d2084dcca4533e784dc37d9d] Get network: bfa4b652-53e8-4c58-ba21-9cc44a5407ba from (pid=13573) get_network /opt/stack/neutron/neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py:1058
  2015-01-20 11:22:17.463 DEBUG neutron.context [req-f3191202-1890-47c3-a384-9100f074c748 None None] Arguments dropped when creating context: {u'project_name': None, u'tenant': None} from (pid=13573) __init__ /opt/stack/neutron/neutron/context.py:84
  2015-01-20 11:22:17.463 DEBUG neutron.api.rpc.handlers.dhcp_rpc [req-f3191202-1890-47c3-a384-9100f074c748 None None] Network bfa4b652-53e8-4c58-ba21-9cc44a5407ba requested from qa1 from (pid=13573) get_network_info /opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py:122
  2015-01-20 11:22:17.464 DEBUG neutron.plugins.cisco.n1kv.n1kv_neutron_plugin [req-f3191202-1890-47c3-a384-9100f074c748 None None] Get network: bfa4b652-53e8-4c58-ba21-9cc44a5407ba from (pid=13573) get_network /opt/stack/neutron/neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py:1058
  2015-01-20 11:22:17.485 DEBUG neutron.plugins.cisco.n1kv.n1kv_neutron_plugin [req-ba1e4506-b97b-4a47-ad0d-7070df36af33 admin 3476e663d2084dcca4533e784dc37d9d] Get network: bfa4b652-53e8-4c58-ba21-9cc44a5407ba from (pid=13573) get_network /opt/stack/neutron/neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py:1058
  2015-01-20 11:22:17.607 DEBUG neutron.plugins.cisco.n1kv.n1kv_neutron_plugin [req-ba1e4506-b97b-4a47-ad0d-7070df36af33 admin 3476e663d2084dcca4533e784dc37d9d] _send_delete_network_request: bfa4b652-53e8-4c58-ba21-9cc44a5407ba from (pid=13573) _send_delete_network_request /opt/stack/neutron/neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py:730
  2015-01-20 11:22:17.610 ERROR oslo.messaging.rpc.dispatcher [req-f3191202-1890-47c3-a384-9100f074c748 None None] Exception during message handling: Network Binding for network bfa4b652-53e8-4c58-ba21-9cc44a5407ba could not be found.
  2015-01-20 11:22:17.610 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
  2015-01-20 11:22:17.610 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 137, in _dispatch_and_reply
  2015-01-20 11:22:17.610 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
  2015-01-20 11:22:17.610 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 180, in _dispatch
  2015-01-20 11:22:17.610 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
  2015-01-20 11:22:17.610 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 126, in _do_dispatch
  2015-01-20 11:22:17.610 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
  2015-01-20 11:22:17.610 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 125, in get_network_info
  2015-01-20 11:22:17.610 TRACE oslo.messaging.rpc.dispatcher     network = plugin.get_network(context, network_id)
  2015-01-20 11:22:17.610 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/neutron/neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py", line 1062, in get_network
  2015-01-20 11:22:17.610 TRACE oslo.messaging.rpc.dispatcher     self._extend_network_dict_member_segments(context, net)
  2015-01-20 11:22:17.610 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/neutron/neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py", line 540, in _extend_network_dict_member_segments
  2015-01-20 11:22:17.610 TRACE oslo.messaging.rpc.dispatcher     network['id'])
  2015-01-20 11:22:17.610 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/neutron/neutron/plugins/cisco/db/n1kv_db_v2.py", line 273, in get_network_binding
  2015-01-20 11:22:17.610 TRACE oslo.messaging.rpc.dispatcher     raise c_exc.NetworkBindingNotFound(network_id=network_id)

[Yahoo-eng-team] [Bug 1399453] Re: Nexus VXLAN gateway: VM with 2 interfaces to the same subnet delete issues

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399453

Title:
  Nexus VXLAN gateway: VM with 2 interfaces to the same subnet delete
  issues

Status in networking-cisco:
  New

Bug description:
  With Nexus VXLAN gateway, there are delete issues with the last VM
  that has 2 interfaces to the same subnet.

  1. When one interface is deleted, all the VLAN/VNI mapping
  configurations are deleted at the Nexus switch.

  2. When the last interface or the VM is deleted, traceback is logged
  in screen-q-svc.log.

  2014-12-03 18:03:38.433 ERROR neutron.plugins.ml2.managers [req-dda788e5-759f-4caf-81f6-a31b43025ede demo f0fd7da7d2874c1590a0092aab9014c3] Mechanism driver 'cisco_nexus' failed in delete_port_postcommit
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers Traceback (most recent call last):
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers   File "/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 299, in _call_on_drivers
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers     getattr(driver.obj, method_name)(context)
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/cisco/nexus/mech_cisco_nexus.py", line 400, in delete_port_postcommit
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers     self._delete_nve_member) if vxlan_segment else 0
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/cisco/nexus/mech_cisco_nexus.py", line 325, in _port_action_vxlan
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers     func(vni, device_id, mcast_group, host_id)
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/cisco/nexus/mech_cisco_nexus.py", line 155, in _delete_nve_member
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers     vni)
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/cisco/nexus/nexus_network_driver.py", line 253, in delete_nve_member
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers     self._edit_config(nexus_host, config=confstr)
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/cisco/nexus/nexus_network_driver.py", line 80, in _edit_config
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers     raise cexc.NexusConfigFailed(config=config, exc=e)
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers NexusConfigFailed: Failed to configure Nexus:
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers   <__XML__MODE__exec_configure>
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers     nve1
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers     <__XML__MODE_if-nve>
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers       no member vni 9000
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers . Reason: ERROR: VNI delete validation failed
  2014-12-03 18:03:38.433 TRACE neutron.plugins.ml2.managers .
  2014-12-03 18:03:38.435 ERROR neutron.plugins.ml2.plugin [req-dda788e5-759f-4caf-81f6-a31b43025ede demo f0fd7da7d2874c1590a0092aab9014c3] mechanism_manager.delete_port_postcommit failed for port da89ec67-e825-4a52-8dfa-6a2556624a9e

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1399453/+subscriptions



[Yahoo-eng-team] [Bug 1379583] Re: Make N1kv plugin net-delete more consistent w/ core

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1379583

Title:
  Make N1kv plugin net-delete more consistent w/ core

Status in networking-cisco:
  New

Bug description:
  Currently the N1kV plugin has a check for subnets on network delete
  that reports an error when trying to delete the network if any subnets
  are present. This is a slight deviation from core plugin behavior
  which simply removes any associated subnets on network delete. Ideally
  we should enhance this behavior to be more consistent with core by
  having the plugin trigger the needed delete for any associated subnets
  on network delete.
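
The requested cascade can be sketched as follows (the StubDB and its method names are illustrative assumptions, not the N1kv plugin's real DB API):

```python
class StubDB:
    """Minimal in-memory stand-in for the plugin's network/subnet store."""

    def __init__(self):
        self.subnets = {"net1": ["sub1", "sub2"]}
        self.networks = {"net1"}

    def subnets_for(self, network_id):
        return self.subnets.get(network_id, [])

    def delete_subnet(self, subnet_id):
        for subs in self.subnets.values():
            if subnet_id in subs:
                subs.remove(subnet_id)

    def delete_network(self, network_id):
        self.networks.discard(network_id)


def delete_network(db, network_id):
    # Core-style behavior: cascade-delete associated subnets instead of
    # raising an error when subnets are still present.
    for subnet_id in list(db.subnets_for(network_id)):
        db.delete_subnet(subnet_id)
    db.delete_network(network_id)
```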

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1379583/+subscriptions



[Yahoo-eng-team] [Bug 1247968] Re: ML2 Cisco Nexus MD: Port bug #1246080

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1247968

Title:
  ML2 Cisco Nexus MD: Port bug #1246080

Status in networking-cisco:
  New

Bug description:
  Port cisco plugin bug: https://bugs.launchpad.net/neutron/+bug/1246080
  to ml2 cisco nexus mechanism driver.
  Bugfix for cisco plugin already under review: 
https://review.openstack.org/#/c/54612/2

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1247968/+subscriptions



[Yahoo-eng-team] [Bug 1223751] Re: update_device_up RPC call back missing in Brocade neutron plugin

2015-09-30 Thread Armando Migliaccio
** Tags removed: brocade

** Also affects: networking-brocade
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1223751

Title:
  update_device_up RPC call back missing in Brocade neutron plugin

Status in networking-brocade:
  New

Bug description:
  The update_device_up RPC callback is missing in the Brocade neutron plugin; hence the Linux Bridge plugin agent throws an exception while creating VMs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-brocade/+bug/1223751/+subscriptions



[Yahoo-eng-team] [Bug 1495440] Re: Fwaas/CLI: Can not delete multiple firewall rule by passing multiple firewall rule id

2015-09-30 Thread Armando Migliaccio
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Tags removed: client

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495440

Title:
  Fwaas/CLI:  Can not delete multiple firewall rule by passing multiple
  firewall rule id

Status in neutron:
  New
Status in python-neutronclient:
  New

Bug description:
  While trying to delete multiple firewall rules via the CLI by passing
  multiple firewall rule ids, only the first firewall rule id is deleted.

  stack@hdp-001:~$ neutron
  (neutron) firewall-rule-list
  +--------------------------------------+-----------------+--------------------+------------------------------+---------+
  | id                                   | name            | firewall_policy_id | summary                      | enabled |
  +--------------------------------------+-----------------+--------------------+------------------------------+---------+
  | 8c4ea5c6-a6e4-43ab-a503-0a2265119238 | test1491637     |                    | TCP,                         | True    |
  |                                      |                 |                    |   source: none(none),        |         |
  |                                      |                 |                    |   dest: none(none),          |         |
  |                                      |                 |                    |   allow                      |         |
  | b8c1c061-8f92-482d-94d3-678f42c7ccd7 | rayrafw2        |                    | ICMP,                        | True    |
  |                                      |                 |                    |   source: none(none),        |         |
  |                                      |                 |                    |   dest: none(none),          |         |
  |                                      |                 |                    |   allow                      |         |
  | ba35dde7-8b07-4ba1-8338-496962c83dbc | testrule1491637 |                    | UDP,                         | True    |
  |                                      |                 |                    |   source: 10.25.10.2/32(80), |         |
  |                                      |                 |                    |   dest: none(none),          |         |
  |                                      |                 |                    |   deny                       |         |
  +--------------------------------------+-----------------+--------------------+------------------------------+---------+
  (neutron) firewall-rule-delete 8c4ea5c6-a6e4-43ab-a503-0a2265119238 b8c1c061-8f92-482d-94d3-678f42c7ccd7
  Deleted firewall_rule: 8c4ea5c6-a6e4-43ab-a503-0a2265119238
  (neutron) firewall-rule-list
  +--------------------------------------+-----------------+--------------------+------------------------------+---------+
  | id                                   | name            | firewall_policy_id | summary                      | enabled |
  +--------------------------------------+-----------------+--------------------+------------------------------+---------+
  | b8c1c061-8f92-482d-94d3-678f42c7ccd7 | rayrafw2        |                    | ICMP,                        | True    |
  |                                      |                 |                    |   source: none(none),        |         |
  |                                      |                 |                    |   dest: none(none),          |         |
  |                                      |                 |                    |   allow                      |         |
  | ba35dde7-8b07-4ba1-8338-496962c83dbc | testrule1491637 |                    | UDP,                         | True    |
  |                                      |                 |                    |   source: 10.25.10.2/32(80), |         |
  |                                      |                 |                    |   dest: none(none),          |         |
  |                                      |                 |                    |   deny                       |         |
  +--------------------------------------+-----------------+--------------------+------------------------------+---------+
  (neutron)

  It would be better if we could delete multiple firewall rules by
  passing multiple firewall rule ids.
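
  Until the client accepts multiple ids in one invocation, a client-side
  loop is a workable stopgap. A minimal sketch, where `delete_fn` is a
  hypothetical stand-in for whatever actually issues the
  firewall-rule-delete request:

```python
def delete_firewall_rules(rule_ids, delete_fn):
    """Delete firewall rules one id at a time, collecting any ids whose
    deletion failed so the caller can report them."""
    failed = []
    for rule_id in rule_ids:
        try:
            delete_fn(rule_id)  # one delete request per rule id
        except Exception:
            failed.append(rule_id)
    return failed
```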

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495440/+subscriptions



[Yahoo-eng-team] [Bug 1368087] Re: Add provider network extended configuration options for ml2 nexus

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1368087

Title:
  Add provider network extended configuration options for ml2 nexus

Status in networking-cisco:
  New

Bug description:
  Current implementation of ml2 nexus plugin does not have support for
  'provider network' extension.  This is supported in the monolithic
  plugin. Hence migration of existing deployments is not possible
  without this being implemented.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1368087/+subscriptions



[Yahoo-eng-team] [Bug 1376981] Re: NSX plugin security group rules OVS flow explosion

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376981

Title:
  NSX plugin security group rules OVS flow explosion

Status in vmware-nsx:
  New

Bug description:
  In our clouds running Havana with VMware NSX, we often see an
  explosion of OVS flows when there are many complex security group
  rules. Specifically when the rules involve remote_group_id (security
  profile in NSX), there are OVS flow rules created for every pair of
  VMs belonging to the tenant resulting in O(n^2) rules. In large
  deployments, this results in severe performance issues when the number
  of OVS flow rules in gets into millions. In addition, this results in
  an exponential increase in memory consumption on NSX controllers.

  Nicira plugin should make an attempt at summarizing the security group
  rules created by the users, so that it results in efficient
  representation on OVS as well as reduces memory consumption on NSX
  controllers.

  Examples:

  1. With every security group, Nicira automatically adds a hidden
  (hidden = not stored in Neutron) security group rule to allow ingress
  IPv4  UDP traffic on DHCP port 68. If a user creates exactly the same
  rule, then a duplicate rule is created and maintained by NSX
  controllers and pushed down to OVS on hypervisors. The other case is
  even if the user creates a broader rule allowing UDP traffic on all
  ports, NSX maintains both the broader rule and the hidden DHCP rule.
  In this case, there is no need to have the additional more specific
  DHCP hidden rule.

  2. We have seen cases where users have created both a broader rule to
  allow UDP/TCP/ICMP traffic from outside and additional rules to
  restrict the same traffic to their tenant VMs. In this case, the self-
  referential rules significantly increase OVS flows and can be
  completely avoided.

  Ideally, the NVP plugin (nvplib.py in Havana) should summarize the
  rules in a security group before submitting them to the NSX controller.
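
  The first example can be illustrated with a toy summarization pass;
  the rule model below (protocol/direction/port only) is deliberately
  simplified and is not the NSX data model:

```python
def summarize_rules(rules):
    """Drop rules subsumed by a broader rule for the same protocol and
    direction. Toy model: a rule with port=None allows all ports and
    therefore subsumes any port-specific rule."""
    broad = {(r["protocol"], r["direction"])
             for r in rules if r["port"] is None}
    return [r for r in rules
            if r["port"] is None
            or (r["protocol"], r["direction"]) not in broad]
```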

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1376981/+subscriptions



[Yahoo-eng-team] [Bug 1330590] Re: ML2 Cisco Nexus MD: Support Management VLANs

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1330590

Title:
  ML2 Cisco Nexus MD: Support Management VLANs

Status in networking-cisco:
  New

Bug description:
  Cisco Nexus MD configures only tenant VLANs on the compute node ToR
  interfaces. It should be able to configure OpenStack management VLANs
  as well on ToR.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1330590/+subscriptions



[Yahoo-eng-team] [Bug 1330594] Re: ML2 Cisco Nexus MD: Support DHCP Configuration

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1330594

Title:
  ML2 Cisco Nexus MD: Support DHCP Configuration

Status in networking-cisco:
  New

Bug description:
  Network Node ToR interface configuration (where DHCP service is
  running) is not handled by the Cisco Nexus MD.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1330594/+subscriptions



[Yahoo-eng-team] [Bug 1244777] Re: Cisco nexus plugin port-binding table needs to be redesigned

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1244777

Title:
  Cisco nexus plugin port-binding table needs to be redesigned

Status in networking-cisco:
  New

Bug description:
  In the current implementation, the table cisco_nexusport_bindings uses an 
automatically incremented field as its primary key. 
  cisco_nexusport_bindings:
  id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
  port_id = sa.Column(sa.String(255))
  vlan_id = sa.Column(sa.Integer, nullable=False)
  switch_ip = sa.Column(sa.String(255))
  instance_id = sa.Column(sa.String(255))

  There should be a one-to-one mapping between entries in the table and
  neutron ports. However, based on the above definition, such a
  relationship is not properly maintained. Instead, the neutron port id
  should be used as the table's primary key. Such a change will improve
  the plugin design and code.
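
  A sketch of the proposed schema. This uses modern SQLAlchemy
  declarative style rather than the Havana-era code, so treat the
  details as illustrative:

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class CiscoNexusPortBinding(Base):
    """Proposed redesign: the neutron port id is the primary key, so the
    one-to-one mapping between bindings and ports is enforced by the
    database itself instead of an auto-incremented surrogate id."""
    __tablename__ = "cisco_nexusport_bindings"

    port_id = sa.Column(sa.String(255), primary_key=True)
    vlan_id = sa.Column(sa.Integer, nullable=False)
    switch_ip = sa.Column(sa.String(255))
    instance_id = sa.Column(sa.String(255))
```

  With this definition a duplicate binding for the same port is rejected
  at commit time rather than silently accumulating.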

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1244777/+subscriptions



[Yahoo-eng-team] [Bug 1330598] Re: ML2 Cisco Nexus MD: Need UT For Multi-Switch Configuration

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1330598

Title:
  ML2 Cisco Nexus MD: Need UT For Multi-Switch Configuration

Status in networking-cisco:
  New

Bug description:
  Create a unit test that verifies a multiple Cisco Nexus switch
  configuration using the same host name defined under each switch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1330598/+subscriptions



[Yahoo-eng-team] [Bug 1324764] Re: Add more comments to VPN plugin and Cisco svc driver

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1324764

Title:
  Add more comments to VPN plugin and Cisco svc driver

Status in networking-cisco:
  New

Bug description:
  Clarify the plugin and service driver flow with more comments.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1324764/+subscriptions



[Yahoo-eng-team] [Bug 1415672] Re: Cisco Nexus ML2 mechanism fails on GRE networks

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1415672

Title:
  Cisco Nexus ML2 mechanism fails on GRE networks

Status in networking-cisco:
  New

Bug description:
  There is a problem with Cisco Nexus ML2 mechanism and GRE networks.
  Function _port_action fails with NexusMissingRequiredFields if the
  port's network is not VLAN. So, this prevents port binding, and leads
  to completely non-working GRE networks.

  My proposed solution is to add a check in beginning of _port_action:

  if (segment and segment[api.NETWORK_TYPE] != p_const.TYPE_VLAN):
      return

  There is a similar check in _get_vlanid, but it is not enough.
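
  A self-contained illustration of the proposed guard; the constants are
  inlined stand-ins for ml2's api.NETWORK_TYPE and p_const.TYPE_VLAN:

```python
# Inlined stand-ins for ml2's api.NETWORK_TYPE and p_const.TYPE_VLAN.
NETWORK_TYPE = "network_type"
TYPE_VLAN = "vlan"

def should_skip_port_action(segment):
    """Return True when the bound segment is not a VLAN, so the Nexus
    driver becomes a no-op for GRE (and other tunnel) networks instead
    of failing with NexusMissingRequiredFields."""
    return bool(segment) and segment[NETWORK_TYPE] != TYPE_VLAN
```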

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1415672/+subscriptions



[Yahoo-eng-team] [Bug 1223402] Re: Cisco nexus plugin fails to create vlan on a previously used switch

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1223402

Title:
  Cisco nexus plugin fails to create vlan on a previously used switch

Status in networking-cisco:
  New

Bug description:
  When reconfiguring quantum from scratch with a cisco nexus switch that
  was previously used in a quantum setup, creating a VLAN might fail
  with:

  QuantumClientException: Failed to configure Nexus: 
   
  
<__XML__MODE__exec_configure>
  

  <__XML__PARAM_value>582
  <__XML__MODE_vlan>

  q-582

  

  

  

  . Reason: ERROR: VLAN with the same name exists

  The workaround for this is pretty easy. Just delete (even renaming
  seems to be enough) all the vlans that quantum is supposed to manage
  on the switch using the configuration console.

  But couldn't the nexus plugin also simply overwrite the existing VLAN
  in case it already exists on the switch but not in the database?

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1223402/+subscriptions



[Yahoo-eng-team] [Bug 1375519] Re: Cisco N1kv: Enable quota support in stable/icehouse

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1375519

Title:
  Cisco N1kv: Enable quota support in stable/icehouse

Status in networking-cisco:
  New
Status in neutron:
  In Progress
Status in neutron icehouse series:
  New

Bug description:
  With the quotas table being populated in stable/icehouse, the N1kv
  plugin should be able to support quotas. Otherwise VMs end up in error
  state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1375519/+subscriptions



[Yahoo-eng-team] [Bug 1181373] Re: Metaplugin faces issues with Linuxbridge- Quantum-server not starting

2015-09-30 Thread Armando Migliaccio
Metaplugin is dead.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1181373

Title:
  Metaplugin faces issues with Linuxbridge- Quantum-server not starting

Status in neutron:
  Invalid

Bug description:
  Hi,

  When I tried to validate with Linuxbridge and Metaplugin, I got the
  weird error below in quantum server.log. quantum-server is not
  starting. It looks like the core_plugin (metaplugin) isn't being
  loaded. The same behavior was observed on another setup as well.

   Note: Before doing the testing, Linuxbridge was working perfectly.

   2013-03-13 20:22:28 INFO [quantum.openstack.common.rpc.common] Connected 
to AMQP server on localhost:5672
   2013-03-13 20:22:28ERROR [quantum.openstack.common.rpc.common] Returning 
exception 'NoneType' object is not callable to caller
   2013-03-13 20:22:28ERROR [quantum.openstack.common.rpc.common] 
['Traceback (most recent call last):\n', '  File 
"/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/amqp.py", line 
430, in _process_data\nrval = self.proxy.dispatch(ctxt, version, method, 
**args)\n', '  File "/usr/lib/python2.7/dist-packages/quantum/common/rpc.py", 
line 43, in dispatch\nquantum_ctxt, version, method, **kwargs)\n', '  File 
"/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/dispatcher.py", 
line 133, in dispatch\nreturn getattr(proxyobj, method)(ctxt, **kwargs)\n', 
'  File 
"/usr/lib/python2.7/dist-packages/quantum/db/securitygroups_rpc_base.py", line 
141, in security_group_rules_for_devices\nport = 
self.get_port_from_device(device)\n', '  File 
"/usr/lib/python2.7/dist-packages/quantum/plugins/openvswitch/ovs_quantum_plugin.py",
 line 84, in get_port_from_device\nport = 
ovs_db_v2.get_port_from_device(device)\n', '  File 
"/usr/lib/python2.7/dist-packages/quantu
 m/plugins/openvswitch/ovs_db_v2.py", line 316, in get_port_from_device\n
LOG.debug(_("get_port_with_securitygroups() called:port_id=%s"), port_id)\n', 
"TypeError: 'NoneType' object is not callable\n"]
   2013-03-13 20:22:28 INFO [quantum.openstack.common.rpc.common] Connected 
to AMQP server on localhost:5672
   2013-03-13 20:22:28ERROR [quantum.openstack.common.rpc.common] Returning 
exception sys.path must be a list of directory names to caller
   2013-03-13 20:22:28ERROR [quantum.openstack.common.rpc.common] 
['Traceback (most recent call last):\n', '  File 
"/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/amqp.py", line 
430, in _process_data\nrval = self.proxy.dispatch(ctxt, version, method, 
**args)\n', '  File "/usr/lib/python2.7/dist-packages/quantum/common/rpc.py", 
line 43, in dispatch\nquantum_ctxt, version, method, **kwargs)\n', '  File 
"/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/dispatcher.py", 
line 133, in dispatch\nreturn getattr(proxyobj, method)(ctxt, **kwargs)\n', 
'  File "/usr/lib/python2.7/dist-packages/quantum/db/agents_db.py", line 167, 
in report_state\ntime = timeutils.parse_strtime(time)\n', '  File 
"/usr/lib/python2.7/dist-packages/quantum/openstack/common/timeutils.py", line 
65, in parse_strtime\nreturn datetime.datetime.strptime(timestr, fmt)\n', 
'RuntimeError: sys.path must be a list of directory names\n']

  Pasting the config files in http://paste.openstack.org/show/37410/

  -Ashok

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1181373/+subscriptions



[Yahoo-eng-team] [Bug 1257960] Re: nvp: metadata network not created

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257960

Title:
  nvp: metadata network not created

Status in vmware-nsx:
  New

Bug description:
  In some cases the procedure for setting up the metadata access network
  for a router does not appear to work smoothly.

  - The metadata router port is created successfully
  - The metadata network however is not created
  - In some cases (not all) an extra unexpected port has been observed on the 
network with the subnet being attached to a router.

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1257960/+subscriptions



[Yahoo-eng-team] [Bug 1225035] Re: nicira plugin - metadata ops in delete_router might trigger eventlet deadlock

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1225035

Title:
  nicira plugin - metadata ops in delete_router might trigger eventlet
  deadlock

Status in vmware-nsx:
  New

Bug description:
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/nicira/NeutronPlugin.py#L1540

  delete_router calls handle_router_metadata_access.
  The latter method creates 1 network, 1 subnet, and attaches that subnet to 
the router, performing db and nvp operations.
  This results in a very long transaction which can trigger eventlet deadlocks 
which have already been observed.

  However, this is unlikely to happen in practice, as router deletion is
  not allowed until the last interface has been removed. This means
  handle_router_metadata_access is likely to be a no-op.

  While a simple fix might be to just remove the metadata network
  management code, it might be worth keeping it in case the logic for
  router deletion changes in the future.

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1225035/+subscriptions



[Yahoo-eng-team] [Bug 1204686] Re: Nicira unit tests should not rely on 'fake API client' data

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1204686

Title:
  Nicira unit tests should not rely on 'fake API client' data

Status in vmware-nsx:
  New

Bug description:
  Several Nicira unit tests verify the result of NVP operations leveraging data 
stored in the fake nvp api client.
  Due to parallel testing, other threads might be operating on the same data 
structure, thus adding/removing data which might cause failures in unit tests 
asserting on the number of items in the dict.

  Mitigation strategies include:
  1 - avoid usage of these dicts at all, whenever possible.
  2 - consider not using the fake api client at all, and mock nvplib calls 
instead
  3 - always address peculiar elements in the dict, do not make assertions on 
the whole dictionary (such as len).

  The long-term strategy is to stop using the fake nvp api client, at
  least in the plugin tests. nvplib calls might just be mocked,
  considering that nvplib unit tests are in place. (This is not in the
  scope of this bug report.)
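
  Mitigation strategy 2 can be sketched with unittest.mock. The module
  and functions below are stand-ins, not the actual nicira plugin code;
  in the real tests the patch target would be the nvplib module itself:

```python
from unittest import mock

class FakeNvplib:
    """Stand-in for an nvplib-style module."""
    @staticmethod
    def create_lswitch(cluster, name):
        raise AssertionError("backend call should be mocked in unit tests")

def plugin_create_network(backend, name):
    # Stand-in for plugin code under test: it delegates the backend
    # operation, so a test can mock that call instead of asserting on
    # shared fake-client dictionaries that parallel tests might mutate.
    return backend.create_lswitch(None, name)

# Mock the backend call and assert on the call itself.
with mock.patch.object(FakeNvplib, "create_lswitch",
                       return_value={"uuid": "fake-uuid"}) as mocked:
    result = plugin_create_network(FakeNvplib, "net1")
```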

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1204686/+subscriptions



[Yahoo-eng-team] [Bug 1262156] Re: nicira: _update_fip_assoc triggers eventlet deadlock

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1262156

Title:
  nicira: _update_fip_assoc triggers eventlet deadlock

Status in vmware-nsx:
  New

Bug description:
  in neutron/l3_db.py _update_fip_assoc is executed from within a
  database transaction.

  The same routine for the nvp plugin executes backend operations such
  as updating the nat rules and ip addresses on the nvp platform, thus
  potentially triggering the deadlock with another mysql transaction
  such as the state report from an agent or a concurrent request.
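
  A common fix pattern is to keep backend I/O out of the DB transaction:
  record the NAT rule and IP address updates while the transaction is
  open, and issue them only after commit. A minimal, hypothetical
  sketch, not the plugin's actual code:

```python
class DeferredBackendOps:
    """Queue backend operations during a DB transaction and run them
    only after the transaction commits, so no network round-trip holds
    database row locks open."""

    def __init__(self):
        self._pending = []

    def defer(self, fn, *args):
        # Called inside the transaction instead of invoking fn directly.
        self._pending.append((fn, args))

    def run_after_commit(self):
        # Called once the DB transaction has committed.
        results = [fn(*args) for fn, args in self._pending]
        self._pending = []
        return results
```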

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1262156/+subscriptions



[Yahoo-eng-team] [Bug 1237194] Re: Router does not become active in tests/unit/nicira/test/edge_router.py

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1237194

Title:
  Router does not become active in tests/unit/nicira/test/edge_router.py

Status in vmware-nsx:
  New

Bug description:
  http://logs.openstack.org/96/49996/4/gate/gate-neutron-
  python26/3b7ce12/console.html

  2013-10-08 23:05:23.791 | Traceback (most recent call last):
  2013-10-08 23:05:23.791 |   File 
"/home/jenkins/workspace/gate-neutron-python26/neutron/tests/unit/nicira/test_edge_router.py",
 line 194, in test_router_create
  2013-10-08 23:05:23.791 | self.assertEqual(res['router'][k], v)
  2013-10-08 23:05:23.791 |   File 
"/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py",
 line 322, in assertEqual
  2013-10-08 23:05:23.791 | self.assertThat(observed, matcher, message)
  2013-10-08 23:05:23.792 |   File 
"/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py",
 line 417, in assertThat
  2013-10-08 23:05:23.792 | raise MismatchError(matchee, matcher, mismatch, 
verbose)
  2013-10-08 23:05:23.792 | MismatchError: u'PENDING_CREATE' != 'ACTIVE'

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1237194/+subscriptions



[Yahoo-eng-team] [Bug 1256238] Re: Add Error handling to NVP advanced LBaaS/FWaaS

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1256238

Title:
  Add Error handling to NVP advanced LBaaS/FWaaS

Status in vmware-nsx:
  New

Bug description:
  Need to add error handling to NVP advanced LBaaS/FWaaS, including
  refactoring duplicate code and implementing the "TODO" items, to
  enhance service stability.

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1256238/+subscriptions



[Yahoo-eng-team] [Bug 1229145] Re: Nicira plugin - allow for punctual state sync on list operations too

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1229145

Title:
  Nicira plugin - allow for punctual state sync on list operations too

Status in vmware-nsx:
  New

Bug description:
  Currently the nicira plugin always asynchronously synchronizes
  operational status for resources, unless the user explicitly asks for
  the 'status' field in the request.

  e.g.: GET /v2.0/networks/xxx?field=status

  This mechanism is currently implemented for show operations only
  whereas it should be enabled for list operations as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1229145/+subscriptions



[Yahoo-eng-team] [Bug 1130205] Re: Provide a clear error when attempting to use l3-agent with NVP

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1130205

Title:
  Provide a clear error when attempting to use l3-agent with NVP

Status in vmware-nsx:
  New

Bug description:
  The NVP plugin does not use the l3 agent.
  Running it won't cause harm, but it will still cause notification
  traffic and the like to be sent to an L3 agent, whose interface will
  never be reached by any VM.

  This bug is about providing a warning in logs that the l3 agent should
  not be run if detected and Quantum is running with the NVP plugin.

  Although tagged with 'Nicira', this applies to every plugin which does
  not leverage the l3 agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1130205/+subscriptions



[Yahoo-eng-team] [Bug 1433553] Re: DVR: remove interface fails on NSX-mh

2015-09-30 Thread Armando Migliaccio
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433553

Title:
  DVR: remove interface fails on NSX-mh

Status in neutron juno series:
  Fix Committed
Status in vmware-nsx:
  Fix Committed

Bug description:
  The DVR mixin, which the MH plugin is now using, assumes that routers
  are deployed on l3 agents, which is not the case for VMware plugins.

  While it is generally wrong for a backend-agnostic management layer to
  make assumptions about the backend, the VMware plugins should work
  around this issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/juno/+bug/1433553/+subscriptions



[Yahoo-eng-team] [Bug 1262464] Re: enable_snat does not work in MidoNet

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-midonet
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1262464

Title:
  enable_snat does not work in MidoNet

Status in networking-midonet:
  New

Bug description:
  Midonet does not currently support the "enable_snat" option in its
  plugin. This bug is to fix the plugin to support that option.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1262464/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443825] Re: Midonet DHCP driver needs to call mm-ctl unbind

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-midonet
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443825

Title:
  Midonet DHCP driver needs to call mm-ctl unbind

Status in networking-midonet:
  New

Bug description:
  Midonet DHCP driver does not call mm-ctl unbind. This leaves Midonet
  without the notification that a port has been unbound, and will
  prevent Midonet from properly cleaning up related resources.
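
  The missing call can be sketched as below. Everything here is
  illustrative rather than MidoNet's actual API: the driver class, the
  injected command runner, and the exact mm-ctl invocation
  (`mm-ctl --unbind-port`) are assumptions.

  ```python
  # Hypothetical sketch: have the DHCP interface driver tell the MidoNet
  # agent that a port is unbound when the tap device is unplugged.
  def build_unbind_command(port_id):
      """Build the (assumed) mm-ctl command that unbinds a port."""
      return ["mm-ctl", "--unbind-port", port_id]

  class MidonetInterfaceDriver:
      def __init__(self, runner):
          # Runner is injected (e.g. subprocess.check_call) so the
          # notification step is testable without shelling out.
          self._run = runner

      def unplug(self, device_name, port_id):
          # Previously only the tap device was removed; the backend was
          # never told the port was unbound, leaking related resources.
          self._run(build_unbind_command(port_id))
  ```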

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1443825/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433550] Re: DVR: VMware NSX plugins do not need centralized snat interfaces

2015-09-30 Thread Armando Migliaccio
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433550

Title:
  DVR: VMware NSX plugins do not need centralized snat interfaces

Status in neutron juno series:
  Fix Committed
Status in vmware-nsx:
  Fix Committed

Bug description:
  When creating a distributed router, a centralized SNAT port is
  created.

  However since the NSX backend does not need it to implement
  distributed routing, this is just a waste of resources (port and IP
  address). Also, it might confuse users with admin privileges as they
  won't know what these ports are doing.

  So even though they do no harm, they should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/juno/+bug/1433550/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262122] Re: Clean up routes on FIP disassociation in Midonet plugin

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-midonet
   Importance: Undecided
   Status: New

** No longer affects: networking-midonet

** Also affects: networking-midonet
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1262122

Title:
  Clean up routes on FIP disassociation in Midonet plugin

Status in networking-midonet:
  New

Bug description:
  The update of the floatingip_db was happening before the
  disassociation, so the disassociation happened on new data. The old
  data was required to identify the router_id. This fix changes the
  order of the disassociation and the floatingip_db update. This fix has
  the side effect of fixing the tempest test "test_floating_ips" in the
  midonet plugin.
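
  The ordering fix described above can be sketched minimally. The
  dict-backed "db" and the function names are illustrative, not the
  plugin's API: the point is only that the old row must be read before
  the update is persisted, because disassociation needs the old
  router_id.

  ```python
  # Sketch of the corrected ordering: capture old state, disassociate
  # using it, and only then write the new state.
  def update_floating_ip(db, fip_id, new_fields, disassociate):
      old = dict(db[fip_id])          # capture the old row first
      disassociate(old["router_id"])  # act on the OLD router binding
      db[fip_id].update(new_fields)   # then persist the new state
  ```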

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1262122/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433554] Re: DVR: metadata network not created for NSX-mh

2015-09-30 Thread Armando Migliaccio
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433554

Title:
  DVR: metadata network not created for NSX-mh

Status in neutron juno series:
  Fix Committed
Status in vmware-nsx:
  Fix Committed

Bug description:
  When creating a distributed router, instances attached to it do not
  have metadata access.

  This is happening because the metadata network is not being created
  and connected to the router, since the process for handling the
  metadata network has not been updated with the new interface type for
  DVR router ports.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/juno/+bug/1433554/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1102301] Re: nicira plugin - perform rollback when router ops fail

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1102301

Title:
  nicira plugin - perform rollback when router ops fail

Status in vmware-nsx:
  New

Bug description:
  The Nicira NVP Quantum plugin sometimes operates 'transactions' on the
  NVP platform, meaning that several NVP API read/write operations are
  performed for a given Quantum API operation.

  As the Quantum API operation satisfies ACID properties, the same
  should happen for the operation on the NVP side.

  This happens, for instance, when router interfaces are added or
  removed to a router. In case of NVP failures, the DB operation would
  complete successfully but the NVP resources might be only partially
  created.

  In case of NVP failures all NVP resources created/modified up to the
  failure should be rolled back, and the DB operation should be undone.
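
  The rollback the report asks for follows a standard compensating-action
  pattern, sketched below. The function and step names are illustrative,
  not the plugin's API: each successful NVP call records an undo action,
  and a later failure replays them in reverse before the DB operation is
  itself undone.

  ```python
  # Hedged sketch: run a sequence of (do, undo) callables; on failure,
  # roll back everything created/modified up to that point.
  def run_nvp_transaction(steps):
      """steps: iterable of (do, undo) callables."""
      undo_stack = []
      try:
          for do, undo in steps:
              do()
              undo_stack.append(undo)
      except Exception:
          # Best-effort compensation, mirroring the DB-side rollback.
          for undo in reversed(undo_stack):
              undo()
          raise
  ```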

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1102301/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274682] Re: Vlans are not cleanly deleted in SQL database leaving stale entries

2015-09-30 Thread Armando Migliaccio
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274682

Title:
  Vlans are not cleanly deleted in SQL database leaving stale entries

Status in Cisco Openstack:
  New

Bug description:
  Steps to reproduce:

  - Create the tenant with a subnet.
  - Instantiate a VM on the subnet, which will create a VLAN using the
    neutron plugin for Nexus.
  - Delete the subnet and the VLAN on the switch.
  - Create the subnet again; the controller will allocate the same
    segmentation id again, but it will not re-create the VLAN on the
    Nexus switch.

  Looking at the SQL database, the previous VLAN entries are still
  present, and the controller is not able to assign the VLANs correctly.
  The workaround is to select a different range of segmentation ids in
  the plugin.

  Error:
  2014-01-29 13:47:03.059 2562 WARNING neutron.db.agentschedulers_db [-] Fail scheduling network {'status': u'ACTIVE', 'subnets': [u'5146ec1e-ad1d-4ca2-9e1d-e9e97126ae05'], 'name': u'External-1', 'provider:physical_network': u'physnet1', 'admin_state_up': True, 'tenant_id': u'adfdcc7e64904ab1b812ad1cbbf92f1a', 'provider:network_type': u'vlan', 'router:external': False, 'shared': False, 'id': u'd71796ca-d10c-4f4d-b742-1e720ce8b94e', 'provider:segmentation_id': 504L}
  2014-01-29 13:47:03.078 2562 ERROR neutron.api.v2.resource [-] add_router_interface failed
  2014-01-29 13:47:03.078 2562 TRACE neutron.api.v2.resource Traceback (most recent call last):
  2014-01-29 13:47:03.078 2562 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 84, in resource
  2014-01-29 13:47:03.078 2562 TRACE neutron.api.v2.resource     result = method(request=request, **args)
  2014-01-29 13:47:03.078 2562 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 185, in _handle_action
  2014-01-29 13:47:03.078 2562 TRACE neutron.api.v2.resource     return getattr(self._plugin, name)(*arg_list, **kwargs)
  2014-01-29 13:47:03.078 2562 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py", line 439, in add_router_interface
  2014-01-29 13:47:03.078 2562 TRACE neutron.api.v2.resource     raise cexc.SubnetNotSpecified()
  2014-01-29 13:47:03.078 2562 TRACE neutron.api.v2.resource SubnetNotSpecified: No subnet_id specified for router gateway.
  2014-01-29 13:47:03.078 2562 TRACE neutron.api.v2.resource
  2014-01-29 13:51:28.220 2562 WARNING neutron.db.agentschedulers_db [-] Fail scheduling network {'status': u'ACTIVE', 'subnets': [u'e98ed533-bbc6-44b4-909a-a94992875c3d'], 'name': u'Tenant_coke', 'provider:physical_network': u'physnet1', 'admin_state_up': True, 'tenant_id': u'adfdcc7e64904ab1b812ad1cbbf92f1a', 'provider:network_type': u'vlan', 'router:external': False, 'shared': False, 'id': u'f1f0c30c-a14b-4b33-8212-5b8763dbd594', 'provider:segmentation_id': 503L}

  
  SQL entries

  Old stale entries:

  mysql> SELECT * FROM ovs_network_bindings;
  +--------------------------------------+--------------+------------------+-----------------+
  | network_id                           | network_type | physical_network | segmentation_id |
  +--------------------------------------+--------------+------------------+-----------------+
  | 07f60f68-8482-4972-aa7c-398f4cdf6abd | vlan         | physnet1         |             500 |
  | 8b034e8d-7b6a-4198-94bd-5278d7934c78 | vlan         | physnet1         |             502 |
  +--------------------------------------+--------------+------------------+-----------------+
  2 rows in set (0.00 sec)

  mysql> SELECT * FROM subnets;
  +----------------------------------+--------------------------------------+---------+--------------------------------------+------------+------------------+---------------+-------------+--------+
  | tenant_id                        | id                                   | name    | network_id                           | ip_version | cidr             | gateway_ip    | enable_dhcp | shared |
  +----------------------------------+--------------------------------------+---------+--------------------------------------+------------+------------------+---------------+-------------+--------+
  | adfdcc7e64904ab1b812ad1cbbf92f1a | 186b1806-2d44-420a-a48e-ea027dcae543 | Inet-1  | 8b034e8d-7b6a-4198-94bd-5278d7934c78 |          4 | 10.111.111.0/24  | 10.111.111.1  |           1 |      0 |
  | adfdcc7e64904ab1b812ad1cbbf92f1a | 2fce01d6-6616-4137-a451-d63c80567929 | Ext-Net | 07f60f68-8482-4972-aa7c-398f4cdf6abd |          4 | 192.168.111.0/24 | 192.168.111.1 |           1 |      1 |
  +----------------------------------+--------------------------------------+---------+--------------------------------------+------------+------------------+---------------+-------------+--------+

[Yahoo-eng-team] [Bug 1201488] Re: Cisco Nexus plugin delete port device_owner check

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1201488

Title:
  Cisco Nexus plugin delete port device_owner check

Status in networking-cisco:
  New

Bug description:
  The Cisco Nexus plugin doesn't check the device_owner before deleting
  ports and attempts hardware configuration for ports that are not
  attached to any instances. The plugin should check the device_owner
  before deleting a port.
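
  The missing guard can be sketched as below. The prefix constant
  follows neutron's "compute:" device_owner convention; the helper name
  is made up for illustration.

  ```python
  # Hedged sketch: only attempt Nexus hardware (de)configuration for
  # ports whose device_owner marks a compute instance.
  COMPUTE_PREFIX = "compute:"

  def needs_nexus_cleanup(port):
      """True only for ports attached to an instance."""
      owner = port.get("device_owner") or ""
      return owner.startswith(COMPUTE_PREFIX)
  ```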

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1201488/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271276] Re: db migration of table brocadenetworks incorrectly specifies id as int

2015-09-30 Thread Armando Migliaccio
** Also affects: networking-brocade
   Importance: Undecided
   Status: New

** Tags removed: brocade

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1271276

Title:
  db migration of table brocadenetworks incorrectly specifies id as int

Status in networking-brocade:
  New

Bug description:
  Incorrect column specification for the brocadenetworks id column: it
  should be String(36) instead of Integer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-brocade/+bug/1271276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1292192] Re: NSX: adding interface to router not found in nsx returns 404

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1292192

Title:
  NSX: adding interface to router not found in nsx returns 404

Status in vmware-nsx:
  New

Bug description:
  This should not fail; instead, the router port should be put into the
  error state, the same as we do for network ports.

  2014-03-13 11:41:57.744 ERROR NeutronPlugin [-] An exception occurred while creating the quantum port 8c6147d8-d9f3-42d5-a6fd-9fccf0b70662 on the NSX platform
  2014-03-13 11:41:57.744 TRACE NeutronPlugin Traceback (most recent call last):
  2014-03-13 11:41:57.744 TRACE NeutronPlugin   File "/tmp/neutron/neutron/plugins/vmware/plugins/base.py", line 564, in _nsx_create_router_port
  2014-03-13 11:41:57.744 TRACE NeutronPlugin     subnet_ids=[subnet_id])
  2014-03-13 11:41:57.744 TRACE NeutronPlugin   File "/tmp/neutron/neutron/plugins/vmware/plugins/base.py", line 242, in _create_and_attach_router_port
  2014-03-13 11:41:57.744 TRACE NeutronPlugin     port_data.get('mac_address'))
  2014-03-13 11:41:57.744 TRACE NeutronPlugin   File "/tmp/neutron/neutron/plugins/vmware/nsxlib/router.py", line 336, in create_router_lport
  2014-03-13 11:41:57.744 TRACE NeutronPlugin     cluster=cluster)
  2014-03-13 11:41:57.744 TRACE NeutronPlugin   File "/tmp/neutron/neutron/plugins/vmware/nsxlib/__init__.py", line 100, in do_request
  2014-03-13 11:41:57.744 TRACE NeutronPlugin     raise exception.NotFound()
  2014-03-13 11:41:57.744 TRACE NeutronPlugin NotFound: An unknown exception occurred.
  2014-03-13 11:41:57.744 TRACE NeutronPlugin

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1292192/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341791] Re: NSX net-gateway extension: cannot update devices

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1341791

Title:
  NSX net-gateway extension: cannot update devices

Status in vmware-nsx:
  New

Bug description:
  Once a network gateway is defined, the NSX extension does not allow
  the device list to be updated:
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/vmware/extensions/networkgw.py#L44

  This forces users to destroy the gateway, recreate it, and re-establish
  all connections every time a gateway device needs to be replaced.

  Allowing for gateway device update, which is supported by the backend,
  will spare the users a lot of pain.

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1341791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355424] Re: NSX: update_lswitch method adds duplicate version tag

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355424

Title:
  NSX: update_lswitch method adds duplicate version tag

Status in vmware-nsx:
  New

Bug description:
  The nsxlib.update_lswitch() method automatically adds the neutron
  version as a tag to the switch. Because of this, the same value can be
  added multiple times, exceeding the number of tags NSX allows.
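
  A fix would deduplicate before issuing the update. The helper below is
  a sketch, not the actual patch; the tag shape ({'scope': ..., 'tag':
  ...}) is taken from the error message in the log.

  ```python
  # Hedged sketch: drop duplicate {scope, tag} pairs so that repeated
  # update_lswitch() calls cannot grow the list past NSX's 5-tag limit.
  def dedup_tags(tags):
      seen = set()
      result = []
      for t in tags:
          key = (t["scope"], t["tag"])
          if key not in seen:
              seen.add(key)
              result.append(t)  # keep first occurrence, preserve order
      return result
  ```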

  2014-08-11 11:54:20.980 ERROR neutron.plugins.vmware.api_client.client [req-2e1404a4-52dd-4348-bd3e-1bf863c6066b admin c76d95a547294fa3a377f2039136305c] Server Error Message: LogicalForwardingElementConfig.tags: must contain at most 5 items (value is [{'scope': 'os_tid', 'tag': 'c76d95a547294fa3a377f2039136305c'}, {'scope': 'quantum', 'tag': '2014.2.dev196.g18a10fa'}, {'scope': 'quantum', 'tag': '2014.2.dev196.g18a10fa'}, {'scope': 'quantum_net_id', 'tag': '788e5b7c-4cff-4440-b2ea-3519b34229e7'}, {'scope': 'os_tid', 'tag': 'c76d95a547294fa3a377f2039136305c'}, {'scope': 'multi_lswitch', 'tag': 'True'}])
  2014-08-11 11:54:20.980 ERROR NeutronPlugin [req-2e1404a4-52dd-4348-bd3e-1bf863c6066b admin c76d95a547294fa3a377f2039136305c] An exception occurred while selecting logical switch for the port

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1355424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496929] Re: instance launch failed: TooManyExternalNetworks: More than one external network exists

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496929

Title:
  instance launch failed: TooManyExternalNetworks: More than one
  external network exists

Status in OpenStack Compute (nova):
  New
Status in vmware-nsx:
  New

Bug description:
  Hello, I followed the documentation at
  http://docs.openstack.org/kilo/config-reference/content/vmware.html
  to connect ESXi with OpenStack Juno. I put the following configuration
  into the nova.conf file on the compute node:

  [DEFAULT]
  compute_driver=vmwareapi.VMwareVCDriver
   
  [vmware]
  host_ip=
  host_username=
  host_password=
  cluster_name=
  datastore_regex=

  And in the nova-compute.conf :

  [DEFAULT]
  compute_driver=vmwareapi.VMwareVCDriver

  
  But in vain: on the Juno OpenStack dashboard, when I want to launch an
  instance, I get the error 'Error: Failed to launch instance "Test":
  Please try again later [Error: No valid host was found.]'. Any idea
  how to launch an instance on my ESXi?

  attached the logs on the controller and compute node:

  ==> nova-conductor

  ERROR nova.scheduler.utils [req-618d4ee3-c936-4249-9f8c-7c266d5f9264 None] [instance: 0c1ee287-edfe-4258-bb43-db23338bbe90] Error from last host: ComputeNode (node domain-c65(Compute)): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2054, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2185, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 0c1ee287-edfe-4258-bb43-db23338bbe90 was re-scheduled: Network could not be found for bridge br-int\n']
  2015-09-17 15:31:34.921 2432 WARNING nova.scheduler.driver [req-618d4ee3-c936-4249-9f8c-7c266d5f9264 None] [instance: 0c1ee287-edfe-4258-bb43-db23338bbe90] NoValidHost exception with message: 'No valid host was found.'

  
  => neutron
  2015-09-17 12:36:09.398 1840 ERROR oslo.messaging._drivers.common [req-775407a3-d756-4677-bdb9-0ddfe2fac50c ] Returning exception More than one external network exists to caller
  2015-09-17 12:36:09.398 1840 ERROR oslo.messaging._drivers.common [req-775407a3-d756-4677-bdb9-0ddfe2fac50c ] ['Traceback (most recent call last):\n', '  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply\n    incoming.message))\n', '  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n', '  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch\n    result = getattr(endpoint, method)(ctxt, **new_args)\n', '  File "/usr/lib/python2.7/dist-packages/neutron/api/rpc/handlers/l3_rpc.py", line 149, in get_external_network_id\n    net_id = self.plugin.get_external_network_id(context)\n', '  File "/usr/lib/python2.7/dist-packages/neutron/db/external_net_db.py", line 161, in get_external_network_id\n    raise n_exc.TooManyExternalNetworks()\n', 'TooManyExternalNetworks: More than one external network exists\n']

  
  => compute node / nova-compute

  2015-09-17 15:28:22.323 5944 ERROR oslo.vmware.common.loopingcall [-] in fixed duration looping call
  2015-09-17 15:31:33.550 5944 ERROR nova.compute.manager [-] [instance: 0c1ee287-edfe-4258-bb43-db23338bbe90] Instance failed to spawn

  
  => nova-network / nova-compute

  2015-09-17 11:21:10.840 1363 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on ControllerNode01:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 3 seconds.
  2015-09-17 11:23:02.874 1363 ERROR nova.openstack.common.periodic_task [-] Error during VlanManager._disassociate_stale_fixed_ips: Timed out waiting for a reply to message ID b6d62061352e4590a37cbc0438ea3ef0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1496929/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352932] Re: NVP plugin extension lswitch created with wrong transport zone binding

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1352932

Title:
  NVP plugin extension lswitch created with wrong transport zone binding

Status in vmware-nsx:
  New

Bug description:
  The Nicira NVP plugin creates extension lswitches when the port number
  on a bridged (flat) network reaches the upper limit (configured by
  max_lp_per_bridged_ls in nvp.ini).

  However, in the Havana version, when extension lswitches are created
  for "flat" networks, the wrong transport zone binding is used.

  Expected: use same transport zone binding as specified by the network.

  Result: use default transport zone binding configured by
  default_tz_uuid and default_transport_type in nvp.ini.

  This bug is introduced in Havana version.

  Root cause:
  In Havana, tz_uuid is generated in a new function,
  _convert_to_nvp_transport_zones(), which checks whether mpnet.SEGMENTS
  is set for the network; if not, the default tz_uuid and transport type
  are used:

      if (network and not attr.is_attr_set(
              network.get(mpnet.SEGMENTS))):
          return [{"zone_uuid": cluster.default_tz_uuid,
                   "transport_type": cfg.CONF.NVP.default_transport_type}]

  For a bridged network without a VLAN, mpnet.SEGMENTS is not set, so
  the function returns the default binding rather than using the tz
  binding of the specified network.
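
  The expected behaviour can be sketched as below. The function and
  argument names are illustrative, not the plugin's actual code: the
  point is that the network's own stored binding should be preferred,
  with the nvp.ini default only as a last resort.

  ```python
  # Hedged sketch: prefer explicit segments, then the network's own
  # transport zone binding, and only then the configured default.
  def pick_transport_zones(segments, network_binding, default_tz):
      if segments:             # explicit multi-provider segments win
          return segments
      if network_binding:      # reuse the network's existing binding
          return [network_binding]
      return [default_tz]      # last resort: nvp.ini defaults
  ```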

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1352932/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334430] Re: NSX: timeout can result in nat rule conflict

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1334430

Title:
  NSX: timeout can result in nat rule conflict

Status in vmware-nsx:
  New

Bug description:
  2014-06-25 01:37:35.920 29149 WARNING neutron.plugins.vmware.api_client.request [-] [0] Failed request 'POST https://20.0.0.22:443//ws.v1/lrouter/e6ae51d1-9960-4525-b811-e44d97b9d577/nat': 'timed out' (0.592037200928 seconds)
  2014-06-25 01:37:35.921 29149 WARNING neutron.plugins.vmware.api_client.base [-] [0] Connection returned in bad state, reconnecting to https://20.0.0.22:443
  2014-06-25 01:37:37.608 29149 ERROR neutron.plugins.vmware.api_client.client [-] Received error code: 409
  2014-06-25 01:37:37.609 29149 ERROR neutron.plugins.vmware.api_client.client [-] Server Error Message: Rule already added to logical router
  2014-06-25 01:37:37.609 29149 ERROR neutron.api.v2.resource [-] add_router_interface failed
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource Traceback (most recent call last):
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 87, in resource
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource     result = method(request=request, **args)
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 194, in _handle_action
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource     return getattr(self._plugin, name)(*arg_list, **kwargs)
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/vmware/plugins/base.py", line 1712, in add_router_interface
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource     match_criteria={'destination_ip_addresses': subnet['cidr']})
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/vmware/nsxlib/versioning.py", line 44, in dispatch_versioned_function
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource     return func(cluster, *args, **func_kwargs)
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/vmware/nsxlib/router.py", line 504, in create_lrouter_nosnat_rule_v3
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource     return _create_lrouter_nat_rule(cluster, router_id, nat_rule_obj)
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/vmware/nsxlib/router.py", line 452, in _create_lrouter_nat_rule
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource     cluster=cluster)
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/vmware/nsxlib/__init__.py", line 96, in do_request
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource     res = cluster.api_client.request(*args)
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/vmware/api_client/client.py", line 119, in request
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource     exception.ERROR_MAPPINGS[status](response)
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/vmware/api_client/exception.py", line 91, in fourZeroNine
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource     raise Conflict()
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource Conflict: Request conflicts with configuration on a different entity.
  2014-06-25 01:37:37.609 29149 TRACE neutron.api.v2.resource
  2014-06-25 02:33:58.535 29149 ERROR neutron.plugins.vmware.api_client.client [-] Received error code: 503
  2014-06-25 02:33:58.535 29149 ERROR neutron.plugins.vmware.api_client.client [-] Server Error Message: 503 Service Unavailable

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1334430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352925] Re: NVP plugin extension lswitch lport cannot be updated or deleted

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1352925

Title:
  NVP plugin extension lswitch lport cannot be updated or deleted

Status in vmware-nsx:
  New

Bug description:
  The Nicira NVP plugin creates extension lswitches when the port number
  on a bridged (flat) network reaches the upper limit (configured by
  max_lp_per_bridged_ls in nvp.ini). However, the ports created on
  extension lswitches cannot be updated or deleted.

  For example, if flat network net1 already has  ports, and a new
  instance is booted on this network, the NVP plugin creates an
  extension logical switch in the NSX controller, which has a uuid
  different from the neutron network's, and the new logical port
  corresponding to the neutron port will be created on this extension
  logical switch.

  However, if neutron port-update is run for this new port, it will fail:
  2014-08-05 07:26:09,984 59933072 ERROR [NVPApiHelper] Server Error Message: lport 'd7df5a6e-ba8b-4a75-9c1d-11279b70d2a0' is not bound to lswitch '7607649d-c29a-4eb2-863a-606e96175185' (it is bound to lswitch 'ccbfdf47-b84c-426d-820c-18984e06d859')
  2014-08-05 07:26:09,984 59933072 ERROR [neutron.plugins.nicira.nvplib] Port or Network not found, Error: An unknown exception occurred.
  2014-08-05 07:26:09,985 59933072 ERROR [NeutronPlugin] Unable to update port id: d7df5a6e-ba8b-4a75-9c1d-11279b70d2a0.

  Similarly, if nova delete  is run, there is a similar error:
  2014-08-05 02:08:50,938 59933072 DEBUG [neutron.plugins.nicira.api_client.request_eventlet] [7] Completed request 'DELETE /ws.v1/lswitch/7607649d-c29a-4eb2-863a-606e96175185/lport/8eca6560-601d-49bc-896c-c6a4d7059110': 404
  2014-08-05 02:08:50,938 59932752 ERROR [NVPApiHelper] Received error code: 404
  2014-08-05 02:08:50,938 59932752 ERROR [NVPApiHelper] Server Error Message: lport '8eca6560-601d-49bc-896c-c6a4d7059110' is not bound to lswitch '7607649d-c29a-4eb2-863a-606e96175185' (it is bound to lswitch '720a6577-60cb-45c6-8821-f276de8cb804')
  2014-08-05 02:08:50,939 59932752 ERROR [neutron.plugins.nicira.nvplib] Port or Network not found

  Root cause:
  The extension logical switch has different uuid, but when updating/deleting 
the port, the original neutron network uuid is used.

  This can be fixed either in the NVP plugin, by saving the lswitch
  information in the neutron DB, or in the NSX controller, so that if a
  port is found on an extension lswitch it is updated or deleted there
  instead of an error being reported.
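
  A minimal sketch of the first option (saving the lswitch bindings in
  the neutron DB and resolving the right lswitch before an update or
  delete); the names and data shapes here are assumptions for
  illustration, not the actual nvplib API:

```python
def lookup_lswitch_for_port(bindings, network_id, lport_to_lswitch, lport_id):
    """Return the lswitch uuid that actually owns lport_id.

    bindings maps a neutron network to every lswitch created for it
    (the original one plus any extension lswitches created once
    max_lp_per_bridged_ls is exceeded); lport_to_lswitch is the
    controller's view of which lswitch each lport lives on.
    """
    candidates = bindings.get(network_id, [])
    actual = lport_to_lswitch.get(lport_id)
    # Target the lswitch the port is really bound to, not the uuid
    # derived from the original neutron network.
    return actual if actual in candidates else None
```

  With such a binding table the port-update above would be issued
  against the extension lswitch the lport is bound to, and the "is not
  bound to lswitch" errors would not occur.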

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1352925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263735] Re: NVP plugin does not support VIF ports error

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1263735

Title:
  NVP plugin does not support VIF ports error

Status in OpenStack Compute (nova):
  Invalid
Status in vmware-nsx:
  New

Bug description:
  Unable to delete a server using nova api. No errors logged at nova
  side.

  On the neutron side, it complains that it doesn't support regular VIF
  ports on external networks.

  2013-12-23 17:19:11,516 (NeutronPlugin): ERROR NeutronPlugin _nvp_delete_port NVP plugin does not support regular VIF ports on external networks. Port 6d382dfd-f826-45d3-8818-48c34f8a8908 will be down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1263735/+subscriptions



[Yahoo-eng-team] [Bug 1332017] Re: NSX: lock NSX sync cache while doing synchronization

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332017

Title:
  NSX: lock NSX sync cache while doing synchronization

Status in vmware-nsx:
  New

Bug description:
  This is somewhat related to bug 1329650, but it is more an enhancement than a fix.
  Basically, in order to avoid any sort of race in access to the NSX sync cache, NSX sync operations should acquire a lock before operating on the cache.
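
  A minimal sketch of the idea with a plain threading lock; the cache
  structure and method names are assumed, not the actual NSX sync code:

```python
import threading

class SyncCache(object):
    """Status cache shared by concurrent NSX synchronization passes."""

    def __init__(self):
        self._lock = threading.Lock()
        self._status = {}

    def update(self, uuid, status):
        # Hold the lock for the whole read-modify-write so two sync
        # passes cannot interleave updates to the same entry.
        with self._lock:
            self._status[uuid] = status

    def get(self, uuid):
        with self._lock:
            return self._status.get(uuid)
```

  In the real plugin something like oslo lockutils would likely be used
  instead of a bare threading.Lock, but the access pattern is the same.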

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1332017/+subscriptions



[Yahoo-eng-team] [Bug 1295494] Re: NVP service plugin should check advanced service in use before deleting a router

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1295494

Title:
  NVP service plugin should check advanced service in use before
  deleting a router

Status in vmware-nsx:
  New

Bug description:
  When using the NVP advanced service plugin, it should check whether a
  service is inserted into the router before deleting it.
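
  A minimal sketch of the intended guard; the services_by_router mapping
  and exception name are assumptions for illustration, not the plugin's
  real data model:

```python
class RouterInUse(Exception):
    """Raised when a router still has advanced services inserted."""

def delete_router(router_id, services_by_router):
    """Refuse to delete a router with services (e.g. firewall, LB,
    VPN) still inserted into it; otherwise allow the delete."""
    in_use = services_by_router.get(router_id, [])
    if in_use:
        raise RouterInUse("router %s still has services: %s"
                          % (router_id, ", ".join(in_use)))
    return True
```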

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1295494/+subscriptions



[Yahoo-eng-team] [Bug 1287423] Re: NSX: only catch not found update_port

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1287423

Title:
  NSX: only catch not found update_port

Status in vmware-nsx:
  New

Bug description:
  Update port currently wraps the call to NSX in try/except Exception,
  which could hide bugs. This patch makes sure that only NotFound is
  caught, and not every exception.
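
  The narrowed handling can be sketched as follows; NotFound here is a
  stand-in for the NSX api client's not-found exception and the status
  strings are assumed for illustration:

```python
class NotFound(Exception):
    """Stand-in for the NSX api client's not-found exception."""

def update_port_on_backend(update_fn, port_id):
    """Only swallow NotFound; let any other exception propagate."""
    try:
        update_fn(port_id)
        return "ACTIVE"
    except NotFound:
        # The port vanished on the backend: mark it ERROR instead of
        # hiding every failure behind a blanket `except Exception`.
        return "ERROR"
```

  A genuine bug (say, a ValueError in the request path) now surfaces
  instead of being silently recorded as a missing port.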

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1287423/+subscriptions



[Yahoo-eng-team] [Bug 1293723] Re: delete port fail if it is already deleted in nvp

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1293723

Title:
  delete port fail if it is already deleted in nvp

Status in vmware-nsx:
  New

Bug description:
  Similar to https://bugs.launchpad.net/neutron/+bug/1291690, but when
  deleting ports. If the port is already deleted from NVP, neutron
  fails to delete it from its database.

  Here is the command output:

  sunshine:openpro bhuvan$ neutron net-list
  +--------------------------------------+---------------------------------+-------------------------------------------------------+
  | id                                   | name                            | subnets                                               |
  +--------------------------------------+---------------------------------+-------------------------------------------------------+
  | 048d79c4-17b7-4a5e-b1dc-c7829306431a | openstack-uplink                | 9479f0bd-7a2a-4b54-99c3-b858864dca9b                  |
  | 1f740648-85c5-4081-a04d-bba6a3eb6ecf | tempest-net                     | 66ebd803-21d2-4cdf-a336-61dd421589d4 17.177.36.192/26 |
  | 6d382dfd-f826-45d3-8818-48c34f8a8908 | tempest-uplink                  | 67056b2d-a924-4456-9050-ed0baa0eaf1a 17.176.14.200/29 |
  | e5403cc5-a841-4681-b4b5-df03d3fca38e | test-network--tempest-177070938 | 367e7fd0-0cc5-4ec9-9ae0-0d51b31ac184 10.100.0.32/28   |
  +--------------------------------------+---------------------------------+-------------------------------------------------------+
  sunshine:openpro bhuvan$ neutron net-delete e5403cc5-a841-4681-b4b5-df03d3fca38e
  Unable to complete operation on network e5403cc5-a841-4681-b4b5-df03d3fca38e. There are one or more ports still in use on the network.

  sunshine:openpro bhuvan$ neutron port-list
  +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                            |
  +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
  | 7d3e834a-b690-4a04-8570-203e9e5605dc |      | fa:16:3e:1d:9b:dc | {"subnet_id": "66ebd803-21d2-4cdf-a336-61dd421589d4", "ip_address": "17.177.36.193"} |
  | a6a3c2e5-01d3-4215-aef5-103bf46359af |      | fa:16:3e:8d:58:1d | {"subnet_id": "367e7fd0-0cc5-4ec9-9ae0-0d51b31ac184", "ip_address": "10.100.0.33"}   |
  +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
  sunshine:openpro bhuvan$ neutron port-delete a6a3c2e5-01d3-4215-aef5-103bf46359af
  409-{u'NeutronError': {u'message': u'Port a6a3c2e5-01d3-4215-aef5-103bf46359af has owner network:router_interface and therefore cannot be deleted directly via the port API.', u'type': u'L3PortInUse', u'detail': u''}}
  sunshine:openpro bhuvan$ neutron subnet-delete 367e7fd0-0cc5-4ec9-9ae0-0d51b31ac184
  409-{u'NeutronError': {u'message': u'Unable to complete operation on subnet 367e7fd0-0cc5-4ec9-9ae0-0d51b31ac184. One or more ports have an IP allocation from this subnet.', u'type': u'SubnetInUse', u'detail': u''}}
  sunshine:openpro bhuvan$ neutron router-list
  +--------------------------------------+---------------------------+------------------------------------------------------------------------------+
  | id                                   | name                      | external_gateway_info                                                        |
  +--------------------------------------+---------------------------+------------------------------------------------------------------------------+
  | fce78016-32b2-4c69-a0d7-a50c7a171842 | router--tempest-216451384 | null                                                                         |
  | fe674008-cd0f-4e50-b12c-c65daff330aa | tempest-router            | {"network_id": "6d382dfd-f826-45d3-8818-48c34f8a8908", "enable_snat": false} |
  +--------------------------------------+---------------------------+------------------------------------------------------------------------------+
  sunshine:openpro bhuvan$ neutron router-interface-delete fce78016-32b2-4c69-a0d7-a50c7a171842 367e7fd0-0cc5-4ec9-9ae0-0d51b31ac184
  404-{u'NeutronError': {u'message': u'Port 8263bc86-4daf-42b5-b49b-83009e558348 could not be found on network afc6b0dc-9f15-41f4-8270-7cb32a3cc12b', u'type': u'PortNotFoundOnNetwork', u'detail': u''}}

  Here is the neutron log:

  2014-03-17 17:36:21,622 (neutron.plugins.vmware.api_client.eventlet_request): DEBUG eventlet_request _handle_request [0] Completed

[Yahoo-eng-team] [Bug 1332353] Re: NSX: wrong src IP address in VM connection via floating IP

2015-09-30 Thread Armando Migliaccio
** Also affects: vmware-nsx
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332353

Title:
  NSX: wrong src IP address in VM connection via floating IP

Status in vmware-nsx:
  New

Bug description:
  Scenario:

  Two VMs on the same network (VM_1 and VM_2) with internal addresses
  INT_1 and INT_2 both associated with floating IPs FIP_1 and FIP_2.

  VM_1 connects to VM_2 (e.g.: ssh) through VM_2 floating IP
  e.g.: VM_1> ssh user@FIP_2

  on VM_2 the ssh connection has:
  - INT_2 as local address
  - FIP_2 as remote address

  This is not entirely correct.
  It would be advisable to have FIP_1 as the remote address instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1332353/+subscriptions



[Yahoo-eng-team] [Bug 1501587] [NEW] test

2015-09-30 Thread Armando Migliaccio
Public bug reported:

test

** Affects: neutron
 Importance: Undecided
 Status: Invalid

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501587

Title:
  test

Status in neutron:
  Invalid

Bug description:
  test

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501587/+subscriptions



[Yahoo-eng-team] [Bug 1501440] [NEW] Ironic driver uses node's UUID instead of name

2015-09-30 Thread Kenneth Koski
Public bug reported:

When nova creates a hypervisor from an Ironic node, the hypervisor is
created with hypervisor_hostname set to the UUID of the Ironic node.
This is inconvenient, as it's not very human-friendly. It would be nice
if the hypervisor_hostname attribute could be set to the node's name, or
at least some combination, such as `node.name + '-' + node.uuid`. The
relevant line is here:

https://github.com/openstack/nova/blob/stable/kilo/nova/virt/ironic/driver.py#L290

This is on CentOS 7, and yum shows me as running version 2015.1.1.dev18
for all nova packages.

I tried just changing the line above to read `'hypervisor_hostname':
str(node.name),`, but this caused no hypervisors to get created,
although nothing crashed, which makes it seem like there's more that
needs to be done than just changing that line.
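
A hedged sketch of what a safer change might look like. Ironic node
names are optional, which would explain why unconditionally using
`str(node.name)` breaks; the fallback logic below is an assumption
about a fix, not the driver's actual behavior:

```python
def choose_hypervisor_hostname(node_name, node_uuid):
    """Prefer a human-friendly hostname but keep it unique.

    Fall back to the UUID when no name is set; combining name and
    UUID keeps the value both readable and collision-free.
    """
    if node_name:
        return "%s-%s" % (node_name, node_uuid)
    return node_uuid
```

The driver line would then read something like
`'hypervisor_hostname': choose_hypervisor_hostname(node.name, node.uuid),`
though, as noted above, more than that one line may need to change.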

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ironic

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501440

Title:
  Ironic driver uses node's UUID instead of name

Status in OpenStack Compute (nova):
  New

Bug description:
  When nova creates a hypervisor from an Ironic node, the hypervisor is
  created with hypervisor_hostname set to the UUID of the Ironic node.
  This is inconvenient, as it's not very human-friendly. It would be
  nice if the hypervisor_hostname attribute could be set to the node's
  name, or at least some combination, such as `node.name + '-' +
  node.uuid`. The relevant line is here:

  
https://github.com/openstack/nova/blob/stable/kilo/nova/virt/ironic/driver.py#L290

  This is on CentOS 7, and yum shows me as running version
  2015.1.1.dev18 for all nova packages.

  I tried just changing the line above to read `'hypervisor_hostname':
  str(node.name),`, but this caused no hypervisors to get created,
  although nothing crashed, which makes it seem like there's more that
  needs to be done than just changing that line.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1501440/+subscriptions



[Yahoo-eng-team] [Bug 1501590] [NEW] weather applet crashes on logout

2015-09-30 Thread Kevin Benton
Public bug reported:

Weather applet crashes on logout

** Affects: neutron
 Importance: Undecided
 Status: Invalid

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501590

Title:
  weather applet crashes on logout

Status in neutron:
  Invalid

Bug description:
  Weather applet crashes on logout

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501590/+subscriptions



[Yahoo-eng-team] [Bug 1501597] [NEW] Adding Brocade Vyatta 5600 support in Neutron-Fwass

2015-09-30 Thread bharath
Public bug reported:

Currently Brocade supports only the Vyatta 5400. Changes are required
in Neutron-FWaaS to support the Vyatta 5600 image.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501597

Title:
  Adding Brocade Vyatta 5600 support in Neutron-Fwass

Status in neutron:
  New

Bug description:
  Currently Brocade supports only the Vyatta 5400. Changes are required
  in Neutron-FWaaS to support the Vyatta 5600 image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501597/+subscriptions



[Yahoo-eng-team] [Bug 1236711] Re: Create volume snapshot action should check quotas

2015-09-30 Thread Rob Cresswell
** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1236711

Title:
  Create volume snapshot action should check quotas

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Currently there's no quota check when creating a snapshot from a
  volume. We should do something similar to the CreateVolume action,
  which disables the button if the action is bound to fail due to quota.

  The relevant cinder quotas are:

  snapshots
  gigabytes (if cinder is configured to include snapshot size in the overall gigabytes quota)
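
  A minimal sketch of the proposed check, mirroring the CreateVolume
  behavior of disabling the button when the action is bound to fail;
  the usages dict layout is an assumption, not Horizon's actual quota
  API:

```python
def snapshot_allowed(usages, volume_size_gb, count_gigabytes=True):
    """Return False when creating one more snapshot would exceed
    the relevant cinder quotas."""
    snaps = usages["snapshots"]
    if snaps["used"] + 1 > snaps["quota"]:
        return False
    if count_gigabytes:
        # Only relevant when cinder counts snapshot size against the
        # overall gigabytes quota.
        gigs = usages["gigabytes"]
        if gigs["used"] + volume_size_gb > gigs["quota"]:
            return False
    return True
```

  The table action's allowed()/enabled state could then be driven by
  this result, greying out the button instead of letting the request
  fail server-side.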

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1236711/+subscriptions


