[Yahoo-eng-team] [Bug 1497694] [NEW] race conditions in security-group additions to port

2015-09-20 Thread Ran Ziv
Public bug reported:

When attempting to connect multiple security groups to a single port
concurrently, some of the security groups may not actually get connected,
even though each call returns successfully with no error.

See a similar bug, where the security groups are connected to a server
rather than to a port: https://bugs.launchpad.net/nova/+bug/1417975
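
A minimal reproduction sketch of the kind of read-modify-write pattern that
can lose updates; the client wiring, port_id and security group IDs below are
placeholders, not taken from the bug report, and this assumes
python-neutronclient:

    import threading

    from neutronclient.v2_0 import client as neutron_client

    # neutron = neutron_client.Client(username='admin', password='secret',
    #                                 tenant_name='admin',
    #                                 auth_url='http://controller:5000/v2.0')

    def add_security_group(neutron, port_id, sg_id):
        # Classic read-modify-write: two threads doing this concurrently can
        # each read the same old list and overwrite the other's addition.
        port = neutron.show_port(port_id)['port']
        sgs = port['security_groups'] + [sg_id]
        neutron.update_port(port_id, {'port': {'security_groups': sgs}})

    def reproduce(neutron, port_id, sg_ids):
        threads = [threading.Thread(target=add_security_group,
                                    args=(neutron, port_id, sg))
                   for sg in sg_ids]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        # Some of sg_ids may be missing here even though every call succeeded.
        return neutron.show_port(port_id)['port']['security_groups']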

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497694

Title:
  race conditions in security-group additions to port

Status in neutron:
  New

Bug description:
  When attempting to connect multiple security groups to a single port
  concurrently, some of the security groups may not actually get connected,
  even though each call returns successfully with no error.

  See a similar bug, where the security groups are connected to a server
  rather than to a port: https://bugs.launchpad.net/nova/+bug/1417975

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488233] Re: FC with LUN ID >255 not recognized

2015-09-20 Thread Sean McGinnis
Post-Kilo Cinder uses os-brick. Since this fix went into os-brick and
was released with version 0.4.0, Cinder should now work as of the
requirements update to that library version in commit
e50576c2d495e6828f335029e713655c27a6fc5b.

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488233

Title:
  FC with LUN ID >255 not recognized

Status in Cinder:
  Invalid
Status in Cinder kilo series:
  Fix Committed
Status in OpenStack Compute (nova):
  New
Status in OpenStack Compute (nova) kilo series:
  New
Status in os-brick:
  Fix Released

Bug description:
  (s390 architecture/System z Series only) FC LUNs with LUN ID >255 are
  recognized by neither Cinder nor Nova when trying to attach the volume.
  The issue is that Fibre Channel volumes need to be added using the
  unit_add command with a properly formatted LUN string. The string is set
  correctly for LUN IDs <= 0xff, but not for LUN IDs in the range between
  0xff and 0x. Because of this, the volumes do not get properly added to
  the hypervisor configuration and the hypervisor does not find them.
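
  For context, a minimal sketch of how a numeric SCSI LUN ID can be rendered
  as the 64-bit FCP LUN string that zfcp's unit_add expects, using SAM-style
  peripheral vs. flat-space addressing. This is only an illustration of the
  formatting problem, not the actual os-brick patch:

    def format_fcp_lun(lun_id):
        """Render a numeric LUN ID as a 64-bit FCP LUN string for unit_add.

        Illustrative sketch only: LUN IDs <= 0xff use peripheral addressing,
        larger IDs (up to 0x3fff) use flat-space addressing with the 0x4000
        bit set in the first two bytes.
        """
        if lun_id <= 0xff:
            prefix = lun_id              # peripheral device addressing
        elif lun_id <= 0x3fff:
            prefix = 0x4000 | lun_id     # flat space addressing
        else:
            raise ValueError("LUN ID %d out of range for this sketch" % lun_id)
        return "0x%04x000000000000" % prefix

    # e.g. format_fcp_lun(5)   -> '0x0005000000000000'
    #      format_fcp_lun(300) -> '0x412c000000000000'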

  Note: The change for Liberty os-brick is ready. I would also like to
  backport it to Kilo. Since os-brick is integrated in Liberty but was a
  separate component before, I need to release a patch for Nova, Cinder,
  and os-brick. Unfortunately there is no option on this page to nominate
  the patch for Kilo. Can somebody help? Thank you!

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1488233/+subscriptions



[Yahoo-eng-team] [Bug 1497689] [NEW] Flavor Resource Limit Disk IO does not create a limit entry in libvirt.xml

2015-09-20 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

OpenStack Kilo, Cinder with a Ceph backend.
When I add IOPS and read/write limit metadata to a flavor, the instances
created from that flavor do not get a limit entry in libvirt.xml. The
network and CPU limits still work, so the instances run with unrestricted
disk I/O performance. The flavor details are shown below (an illustrative
sketch of the expected libvirt mapping follows the table); I have attached
the full libvirt.xml of the instance.
+------------------------------+----------------------------------------------+
| Property                     | Value                                        |
+------------------------------+----------------------------------------------+
| OS-FLV-DISABLED:disabled     | False                                        |
| OS-FLV-EXT-DATA:ephemeral    | 0                                            |
| disk                         | 20                                           |
| extra_specs                  | {"quota:disk_write_bytes_sec": "10485760",   |
|                              |  "quota:vif_outbound_average": "1",          |
|                              |  "quota:vif_inbound_peak": "1",              |
|                              |  "quota:vif_inbound_average": "1",           |
|                              |  "quota:disk_read_bytes_sec": "10485760",    |
|                              |  "quota:vif_outbound_peak": "1"}             |
| id                           | 2229b175-9b72-4258-a115-ba2e66dcff28         |
| name                         | m1.tiny                                      |
| os-flavor-access:is_public   | True                                         |
| ram                          | 512                                          |
| rxtx_factor                  | 1.0                                          |
| swap                         |                                              |
| vcpus                        | 1                                            |
+------------------------------+----------------------------------------------+
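
For reference, a minimal sketch of the mapping the report expects: the
quota:disk_*_bytes_sec (and *_iops_sec) extra_specs should surface as
<iotune> sub-elements on the instance's disk device in libvirt.xml. The
helper below is illustrative only, not an excerpt from the nova libvirt
driver:

    import xml.etree.ElementTree as ET

    # Illustrative mapping from flavor extra_specs keys to libvirt <iotune>
    # sub-element names (the real nova libvirt driver builds this internally).
    QUOTA_TO_IOTUNE = [
        ('quota:disk_read_bytes_sec', 'read_bytes_sec'),
        ('quota:disk_write_bytes_sec', 'write_bytes_sec'),
        ('quota:disk_read_iops_sec', 'read_iops_sec'),
        ('quota:disk_write_iops_sec', 'write_iops_sec'),
    ]

    def build_iotune(extra_specs):
        """Return the <iotune> element expected under the guest's <disk>."""
        iotune = ET.Element('iotune')
        for spec_key, tag in QUOTA_TO_IOTUNE:
            if spec_key in extra_specs:
                ET.SubElement(iotune, tag).text = extra_specs[spec_key]
        return iotune

    specs = {'quota:disk_read_bytes_sec': '10485760',
             'quota:disk_write_bytes_sec': '10485760'}
    print(ET.tostring(build_iotune(specs)))
    # prints (wrapped here for readability):
    # <iotune><read_bytes_sec>10485760</read_bytes_sec>
    #         <write_bytes_sec>10485760</write_bytes_sec></iotune>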

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: flavor in-stable-kilo
-- 
Flavor Resource Limit Disk IO does not create a limit entry in libvirt.xml
https://bugs.launchpad.net/bugs/1497689
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1497689] Re: Flavor Resource Limit Disk IO does not create a limit entry in libvirt.xml

2015-09-20 Thread prescolt
** Project changed: puppet-nova => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497689

Title:
  Flavor Resource Limit Disk IO does not create a limit entry in
  libvirt.xml

Status in OpenStack Compute (nova):
  New

Bug description:
  OpenStack Kilo, Cinder with a Ceph backend.
  When I add IOPS and read/write limit metadata to a flavor, the instances
  created from that flavor do not get a limit entry in libvirt.xml. The
  network and CPU limits still work, so the instances run with unrestricted
  disk I/O performance. The flavor details are shown below; I have attached
  the full libvirt.xml of the instance.
  +------------------------------+----------------------------------------------+
  | Property                     | Value                                        |
  +------------------------------+----------------------------------------------+
  | OS-FLV-DISABLED:disabled     | False                                        |
  | OS-FLV-EXT-DATA:ephemeral    | 0                                            |
  | disk                         | 20                                           |
  | extra_specs                  | {"quota:disk_write_bytes_sec": "10485760",   |
  |                              |  "quota:vif_outbound_average": "1",          |
  |                              |  "quota:vif_inbound_peak": "1",              |
  |                              |  "quota:vif_inbound_average": "1",           |
  |                              |  "quota:disk_read_bytes_sec": "10485760",    |
  |                              |  "quota:vif_outbound_peak": "1"}             |
  | id                           | 2229b175-9b72-4258-a115-ba2e66dcff28         |
  | name                         | m1.tiny                                      |
  | os-flavor-access:is_public   | True                                         |
  | ram                          | 512                                          |
  | rxtx_factor                  | 1.0                                          |
  | swap                         |                                              |
  | vcpus                        | 1                                            |
  +------------------------------+----------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1497689/+subscriptions



[Yahoo-eng-team] [Bug 1497740] [NEW] nova API proxy to neutron should avoid race-ful behavior

2015-09-20 Thread Ryan Moats
Public bug reported:

Version of Nova/OpenStack: liberty

Calls to associate a floating IP with an instance currently return a
202 status.  When proxying these calls to neutron, just returning 202
without first validating the status of the floating IP leads to raceful
failures in tempest scenario tests (for example, see the
test_volume_boot_pattern failure in
http://logs.openstack.org/20/225420/1/check/gate-tempest-dsvm-neutron-dvr/a78a052/logs/testr_results.html.gz).

Is it possible, when proxying to neutron, to first verify the status of
the neutron floating IP before returning the 202 status?
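
A minimal sketch of what such a pre-check might look like, assuming
python-neutronclient; the helper name, retry parameters and client
credentials are illustrative, not part of any existing nova code:

    import time

    from neutronclient.v2_0 import client as neutron_client

    def wait_for_fip_active(neutron, floatingip_id, timeout=30, interval=2):
        """Poll neutron until the floating IP reports ACTIVE (sketch only)."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            fip = neutron.show_floatingip(floatingip_id)['floatingip']
            if fip.get('status') == 'ACTIVE':
                return fip
            time.sleep(interval)
        raise RuntimeError('floating IP %s not ACTIVE within %ss'
                           % (floatingip_id, timeout))

    # neutron = neutron_client.Client(username='admin', password='secret',
    #                                 tenant_name='admin',
    #                                 auth_url='http://controller:5000/v2.0')
    # wait_for_fip_active(neutron, floatingip_id)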

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497740

Title:
  nova API proxy to neutron should avoid race-ful behavior

Status in OpenStack Compute (nova):
  New

Bug description:
  Version of Nova/OpenStack: liberty

  Calls to associate a floating IP with an instance currently return a
  202 status.  When proxying these calls to neutron, just returning 202
  without first validating the status of the floating IP leads to raceful
  failures in tempest scenario tests (for example, see the
  test_volume_boot_pattern failure in
  http://logs.openstack.org/20/225420/1/check/gate-tempest-dsvm-neutron-dvr/a78a052/logs/testr_results.html.gz).

  Is it possible, when proxying to neutron, to first verify the status
  of the neutron floating IP before returning the 202 status?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1497740/+subscriptions



[Yahoo-eng-team] [Bug 1491126] Re: Django 1.8 alters URL handling

2015-09-20 Thread Rob Cresswell
Rolled into blueprint:
https://blueprints.launchpad.net/horizon/+spec/drop-dj17

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1491126

Title:
  Django 1.8 alters URL handling

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Django 1.8 offers a cleaner way to define URL patterns:

  https://docs.djangoproject.com/en/1.8/releases/1.8/#django-conf-urls-patterns
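
  For illustration, the style the linked release notes describe looks like
  this (the module layout and view names are placeholders):

    # urls.py -- Django >= 1.8 style: urlpatterns is a plain list of url()
    # instances, with no patterns('') wrapper around it.
    from django.conf.urls import url

    from . import views

    urlpatterns = [
        url(r'^$', views.index, name='index'),
        url(r'^(?P<pk>\d+)/$', views.detail, name='detail'),
    ]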

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1491126/+subscriptions



[Yahoo-eng-team] [Bug 1497830] [NEW] Neutron subprojects cannot depend on alembic revisions from other neutron subprojects

2015-09-20 Thread Henry Gessau
Public bug reported:

If networking-foo depends on networking-bar, then a requirement may
arise for an alembic revision in networking-foo to depend on an alembic
revision in networking-bar. Currently this cannot be accommodated
because each subproject has its own alembic environment.

To solve this issue we need to switch to one alembic environment
(neutron's) for all neutron subprojects.
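
For illustration, the kind of cross-project dependency that cannot currently
be expressed: alembic's depends_on attribute only resolves when both
revisions live in the same migration environment. The revision IDs and table
below are placeholders:

    """networking-foo: add foo_bar_mapping table (illustrative sketch)

    Revision ID: foo000000001
    Revises: foo000000000
    """

    # revision identifiers, used by Alembic.
    revision = 'foo000000001'
    down_revision = 'foo000000000'
    # What networking-foo would need: depend on a networking-bar revision.
    # Alembic can only honour this if both revisions are part of the same
    # environment, hence the proposal to share neutron's.
    depends_on = ('bar000000042',)

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        op.create_table(
            'foo_bar_mapping',
            sa.Column('foo_id', sa.String(36), primary_key=True),
            sa.Column('bar_id', sa.String(36), nullable=False),
        )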

** Affects: neutron
 Importance: High
 Assignee: Henry Gessau (gessau)
 Status: New


** Tags: db

** Tags added: db

** Changed in: neutron
 Assignee: (unassigned) => Henry Gessau (gessau)

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497830

Title:
  Neutron subprojects cannot depend on alembic revisions from other
  neutron subprojects

Status in neutron:
  New

Bug description:
  If networking-foo depends on networking-bar, then a requirement may
  arise for an alembic revision in networking-foo to depend on an
  alembic revision in networking-bar. Currently this cannot be
  accommodated because each subproject has its own alembic environment.

  To solve this issue we need to switch to one alembic environment
  (neutron's) for all neutron subprojects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497830/+subscriptions



[Yahoo-eng-team] [Bug 1497823] [NEW] LBaaS V2 Octavia driver should create its own context/session

2015-09-20 Thread Brandon Logan
Public bug reported:

The LBaaS V2 Octavia driver currently uses the context that is passed
from the plugin.  Since the driver spins up a thread to poll Octavia,
each thread should create its own context.
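
A minimal sketch of the direction the fix suggests, assuming neutron's
context module is importable as below; the function and thread wiring are
illustrative, not the actual driver code:

    import threading

    from neutron import context as n_context

    def _poll_octavia(loadbalancer_id, check_completion):
        # Build a fresh admin context in the polling thread instead of
        # reusing the request context handed over by the plugin
        # (check_completion stands in for the driver's real status check).
        ctx = n_context.get_admin_context()
        check_completion(ctx, loadbalancer_id)

    # thread = threading.Thread(target=_poll_octavia,
    #                           args=(lb_id, driver.check_completion))
    # thread.start()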

** Affects: neutron
 Importance: Undecided
 Assignee: Brandon Logan (brandon-logan)
 Status: In Progress


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497823

Title:
  LBaaS V2 Octavia driver should create its own context/session

Status in neutron:
  In Progress

Bug description:
  The LBaaS V2 Octavia driver currently uses the context that is passed
  from the plugin.  Since the driver spins up a thread to poll Octavia,
  each thread should create its own context.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497823/+subscriptions
