[Yahoo-eng-team] [Bug 1497125] [NEW] Fails to live-migrate a boot-from-volume VM which has a local disk.

2015-09-17 Thread Hiroyuki Eguchi
Public bug reported:

Live migration fails for a boot-from-volume VM which has a local disk.


message: "hostA is not on shared storage: Live migration can not be used 
without shared storage except a booted from volume VM which does not have a 
local disk."

Make this possible by copying the local files (swap, ephemeral disk,
config drive) from the source host to the destination host in
pre_live_migration, as sketched below.
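
A minimal sketch of the proposed copy step, assuming a hypothetical
`copy_file_from_host` transfer helper; the real change would hook into the
libvirt driver's pre_live_migration on the destination host, and the file
names below are only the typical local (non-volume) files in a libvirt
instance directory:

    import os

    LOCAL_FILES = ('disk.swap', 'disk.local', 'disk.config')

    def copy_local_files(instance_dir, source_host, copy_file_from_host):
        """Fetch the instance's local files from the source host."""
        for name in LOCAL_FILES:
            path = os.path.join(instance_dir, name)
            # copy_file_from_host is assumed to skip files the
            # instance does not actually have.
            copy_file_from_host(source_host, path, path)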

** Affects: nova
 Importance: Undecided
 Assignee: Hiroyuki Eguchi (h-eguchi)
 Status: New

** Description changed:

  fail to live-migrate a booted from volume vm which have a local disk.
  
  
- 
- message: "hostA is not on shared storage: Live migration can not be used
- without shared storage except a booted from volume VM which does not
- have a local disk."
+ message: "hostA is not on shared storage: Live migration can not be used 
without shared storage except a booted from volume VM which does not have a 
local disk."
  
  make it enable by copying local files (swap, ephemeral disk, config-
  drive) from the source host to the destination host in
  pre_live_migration.

** Changed in: nova
 Assignee: (unassigned) => Hiroyuki Eguchi (h-eguchi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497125

Title:
  Fails to live-migrate a boot-from-volume VM which has a local disk.

Status in OpenStack Compute (nova):
  New

Bug description:
  Live migration fails for a boot-from-volume VM which has a local disk.

  
  message: "hostA is not on shared storage: Live migration can not be used 
without shared storage except a booted from volume VM which does not have a 
local disk."

  Make this possible by copying the local files (swap, ephemeral disk,
  config drive) from the source host to the destination host in
  pre_live_migration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1497125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472899] Re: bug in ipam driver code

2015-09-17 Thread Armando Migliaccio
** Changed in: neutron
   Status: Expired => Confirmed

** Changed in: neutron
   Status: Confirmed => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472899

Title:
  bug in ipam driver code

Status in neutron:
  Incomplete

Bug description:
  http://logs.openstack.org/26/195326/21/check/check-tempest-dsvm-
  networking-
  ovn/c071fed/logs/screen-q-svc.txt.gz?level=TRACE#_2015-07-09_04_45_12_248

  Fails on existing tempest test.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493692] Re: Introducing a generic networking-controller project

2015-09-17 Thread vikram.choudhary
** Changed in: networking-onos
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493692

Title:
  Introducing a generic networking-controller project

Status in networking-odl:
  Won't Fix
Status in networking-onos:
  Won't Fix
Status in neutron:
  Won't Fix

Bug description:
  Currently, Neutron has separate big-tent projects like networking-odl
  _[1], networking-onos _[2] and so on for making communication between
  OpenStack and open-source controllers like ODL _[3] and ONOS _[4]
  possible. After studying the interfaces and code from both
  repositories, I found that most of the functionality is identical and
  hence can be unified into a single project.

  IMHO, it would be better to have a common project called 'networking-
  controller' which can make the communication between OpenStack and
  standard open-source controllers possible.

  Such an approach will:
  -> Save time and effort by not duplicating the same work across
     different projects.
  -> Maintain all the code in a single repository.
  -> Avoid introducing a new project whenever a new open-source
     controller is added to Neutron in the future.

  By following this approach the community will get more time to develop
  other things rather than repeating the same work across different
  repositories, and will avoid the extra overhead of maintaining
  different projects.

  _[1] networking-odl project details
       https://github.com/openstack/networking-odl

  _[2] networking-onos project details
       https://github.com/openstack/networking-onos

  _[3] ODL
   http://www.opendaylight.org/

  _[4] ONOS
   http://onosproject.org/

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1493692/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484113] Re: network topology: Status untranslated

2015-09-17 Thread Zhao Zhe
** Also affects: openstack-i18n
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1484113

Title:
  network topology: Status untranslated

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in openstack i18n:
  New

Bug description:
  on network_topology, the status of an instance or a router is not
  being translated.

  The green/red light to the left of the status is hard-wired to the word
  'ACTIVE' (if status == 'ACTIVE', then green light).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1484113/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497099] [NEW] Write a test to catch javascript compression error

2015-09-17 Thread Lin Hua Cheng
Public bug reported:

Investigate writing a test to catch javascript compression errors earlier,
rather than at deploy time.

An issue that such a test could have prevented:
https://bugs.launchpad.net/horizon/+bug/1497029
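
A hedged sketch of one possible test, assuming django-compressor's
`compress` management command is available under the test settings:

    from django.core.management import call_command
    from django.test import TestCase

    class OfflineCompressionTest(TestCase):
        def test_offline_compression(self):
            # Fails at test time if any {% compress %} block cannot be
            # compiled, instead of failing on deploy.
            call_command('compress', force=True)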

** Affects: horizon
 Importance: Wishlist
 Status: New


** Tags: low-hanging-fruit

** Changed in: horizon
   Importance: Undecided => Wishlist

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1497099

Title:
  Write a test to catch javascript compression error

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Investigate writing a test to catch javascript compression errors
  earlier, rather than at deploy time.

  An issue that such a test could have prevented:
  https://bugs.launchpad.net/horizon/+bug/1497029

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1497099/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457359] Re: race condition in quick detach/attach to the same volume and vm

2015-09-17 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1457359

Title:
  race condition in quick detach/attach to the same volume and vm

Status in OpenStack Compute (nova):
  Expired

Bug description:
  tested on Juno with Cell enabled.

  The race condition happens as follows:
  1. send a detach request to an existing VM with a volume; 
  2. send an attach request to attach the same volume to the same VM 
immediately after #1 in another process.

  Expected result:
  a. #2 gets refused because #1 is in progress, or
  b. #2 finishes after #1 has finished.

  However, the race may happen with the following sequence:

   Req #1 finished the physical detach action >>
   Req #1 finished the cinder call (setting the volume to available) >>
   Req #2 came into the Nova API and got through the call flow, since the
volume is available now >>
   Req #2 ran faster than Req #1 and updated the Nova DB BDMs with the
volume info >>
   Req #1 finished last and removed the volume info that Req #2 had just
written to the BDMs >>
   now the cinder volume status and nova BDM state are mismatched. The
volume became inoperable: both attach and detach operations will be
refused.

  Also, in our test case, the child-cell nova DB and parent-cell nova DB
  became mismatched, since Req #2 passed Req #1 while Req #1 was still
  propagating its update from the child cell to the parent cell.

  This issue is caused by the lack of a guard check against the nova BDM
  table in the attach process. The suggested fix is to add a volume-id
  check against the nova BDM table at the beginning of the request, so
  that for a single volume/instance pair no parallel modification can
  happen; a sketch of such a guard follows.
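
  A minimal sketch of the suggested guard, assuming a hypothetical
  `bdm_get_by_volume_and_instance` DB helper; the real fix would live at
  the attach entry point in nova/compute/api.py:

      class VolumeBDMBusy(Exception):
          """A BDM row for this volume/instance pair still exists."""

      def check_attach_allowed(context, volume_id, instance_uuid,
                               bdm_get_by_volume_and_instance):
          # If a BDM row already exists, a detach (or another attach)
          # is still in flight; refuse instead of racing it.
          if bdm_get_by_volume_and_instance(
                  context, volume_id, instance_uuid) is not None:
              raise VolumeBDMBusy("volume %s is busy on instance %s"
                                  % (volume_id, instance_uuid))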

  The attachment is a slice of logs showing the message disorder
  triggered in the test case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1457359/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472899] Re: bug in ipam driver code

2015-09-17 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472899

Title:
  bug in ipam driver code

Status in neutron:
  Expired

Bug description:
  http://logs.openstack.org/26/195326/21/check/check-tempest-dsvm-
  networking-
  ovn/c071fed/logs/screen-q-svc.txt.gz?level=TRACE#_2015-07-09_04_45_12_248

  Fails on existing tempest test.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497076] [NEW] Unable to delete an instance due to foreign key violation

2015-09-17 Thread Matthew S
Public bug reported:

I am trying to delete an instance (UUID 78d874ef-a9ec-4ede-9489-ff00903eb7fc), 
but this is failing due to corrupt data in nova's database tables.
I have no idea how this corruption arose, and I don't think it's possible to 
find out in order to reproduce this issue. Therefore, this bug report is not 
about preventing the DB corruption. Instead, it's about making Nova degrade 
more gracefully in the presence of such corruption.

1. We are running Juno partially upgraded to Kilo (we are mid-way
through an in-place upgrade of Juno to Kilo).

2. Relevant log files and other data will be attached shortly

3. Steps to reproduce

3.1. Install a federated multi-cell OpenStack cloud (NeCTAR)
3.2. Use it for years, progressively upgrading it to successive OpenStack 
releases and meanwhile performing many operations which modify the database, 
one of which was (I suppose) a partially-failed creation of an instance with 
the given UUID
3.3. Attempt to delete the half-existing instance by executing:
nova --debug force-delete 78d874ef-a9ec-4ede-9489-ff00903eb7fc

Expected results:

- All traces of the existence of the given instance shall be removed, apart 
perhaps from historical data, such as records with column 'deleted' non-zero, 
or records in shadow tables, or log files
- The client shall receive a 200 (success) HTTP status from the server

Actual results:

- The remains of the instance (e.g. DB records) are not removed. I will upload 
partial table dumps.
- The client receives a 404. I will upload console output.
- The nova-cells service attempts to violate a DB foreign key constraint and 
outputs a stack backtrace to its log file, which again I will upload shortly.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497076

Title:
  Unable to delete an instance due to foreign key violation

Status in OpenStack Compute (nova):
  New

Bug description:
  I am trying to delete an instance (UUID 
78d874ef-a9ec-4ede-9489-ff00903eb7fc), but this is failing due to corrupt data 
in nova's database tables.
  I have no idea how this corruption arose, and I don't think it's possible to 
find out in order to reproduce this issue. Therefore, this bug report is not 
about preventing the DB corruption. Instead, it's about making Nova degrade 
more gracefully in the presence of such corruption.

  1. We are running Juno partially upgraded to Kilo (we are mid-way
  through an in-place upgrade of Juno to Kilo).

  2. Relevant log files and other data will be attached shortly

  3. Steps to reproduce

  3.1. Install a federated multi-cell OpenStack cloud (NeCTAR)
  3.2. Use it for years, progressively upgrading it to successive OpenStack 
releases and meanwhile performing many operations which modify the database, 
one of which was (I suppose) a partially-failed creation of an instance with 
the given UUID
  3.3. Attempt to delete the half-existing instance by executing:
  nova --debug force-delete 78d874ef-a9ec-4ede-9489-ff00903eb7fc

  Expected results:

  - All traces of the existence of the given instance shall be removed, apart 
perhaps from historical data, such as records with column 'deleted' non-zero, 
or records in shadow tables, or log files
  - The client shall receive a 200 (success) HTTP status from the server

  Actual results:

  - The remains of the instance (e.g. DB records) are not removed. I will upload 
partial table dumps.
  - The client receives a 404. I will upload console output.
  - The nova-cells service attempts to violate a DB foreign key constraint and 
outputs a stack backtrace to its log file, which again I will upload shortly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1497076/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497074] [NEW] Ignore the ERROR when deleting an ipset member

2015-09-17 Thread shihanzhang
Public bug reported:

Now, when the OVS/LB agent creates an ipset set, it already uses the
'-exist' option. I think deleting an ipset member also needs this option.
The '-exist' option is described at http://ipset.netfilter.org/ipset.man.html
as below:

-!, -exist
Ignore errors when exactly the same set is to be created or already added entry 
is added or missing entry is deleted.
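
A minimal sketch of the proposed change, using a direct subprocess call for
clarity; the agent actually goes through neutron's IpsetManager and
rootwrap:

    import subprocess

    def ipset_del(set_name, member):
        # '-exist' turns deleting an already-missing entry into a no-op
        # instead of an error, mirroring how the agent creates sets.
        subprocess.check_call(['ipset', 'del', '-exist', set_name, member])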

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497074

Title:
  Ignore the ERROR when deleting an ipset member

Status in neutron:
  New

Bug description:
  Now, when the OVS/LB agent creates an ipset set, it already uses the
  '-exist' option. I think deleting an ipset member also needs this option.
  The '-exist' option is described at
  http://ipset.netfilter.org/ipset.man.html as below:

  -!, -exist
  Ignore errors when exactly the same set is to be created or already added 
entry is added or missing entry is deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497066] [NEW] If IP version is not specified while creating Firewall Rule, then it should populate it based on the Source and Destination IP

2015-09-17 Thread Reedip
Public bug reported:

Example:
reedip@reedip-VirtualBox:/opt/stack/python-neutronclient/neutronclient$ neutron 
firewall-rule-create --protocol tcp --action deny --source-ip-address 1::1
Created a new firewall_rule:
+------------------------+--------------------------------------+
| Field                  | Value                                |
+------------------------+--------------------------------------+
| action                 | deny                                 |
| description            |                                      |
| destination_ip_address |                                      |
| destination_port       |                                      |
| enabled                | True                                 |
| firewall_policy_id     |                                      |
| id                     | dca8cb81-f65b-4eef-afbe-60d0abb5eecf |
| ip_version             | 4                                    |
| name                   |                                      |
| position               |                                      |
| protocol               | tcp                                  |
| shared                 | False                                |
| source_ip_address      | 1::1                                 |
| source_port            |                                      |
| tenant_id              | 83bb2407a0fb484581bde56dc1fae293     |
+------------------------+--------------------------------------+
reedip@reedip-VirtualBox:/opt/stack/python-neutronclient/neutronclient$

On specifying an IPv6 source address, the ip_version is populated as 4
(IPv4), which is not right. If the IP version is not specified, it should
be derived from the source/destination IP.


Need to confirm an additional test case:
- If the IP version is specified and it does not match the IP version of
the source/destination address, then a failure should be reported
(e.g. if --ip-version is given as 6 and the source address is given as
192.168.101.1).
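
A minimal sketch of the proposed inference using netaddr (already a neutron
dependency); the function and argument names are illustrative only:

    import netaddr

    def infer_ip_version(source_ip=None, dest_ip=None, ip_version=None):
        for addr in (source_ip, dest_ip):
            if addr is None:
                continue
            version = netaddr.IPNetwork(addr).version
            if ip_version is not None and version != ip_version:
                raise ValueError('ip_version %s does not match address %s'
                                 % (ip_version, addr))
            ip_version = version
        # Fall back to the current default when nothing is given.
        return ip_version or 4

    # infer_ip_version(source_ip='1::1') -> 6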

** Affects: neutron
 Importance: Undecided
 Assignee: Reedip (reedip-banerjee)
 Status: New

** Project changed: python-neutronclient => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497066

Title:
  If IP version is not specified while creating Firewall Rule, then it
  should populate it based on the Source and Destination IP

Status in neutron:
  New

Bug description:
  Example:
  reedip@reedip-VirtualBox:/opt/stack/python-neutronclient/neutronclient$ 
neutron firewall-rule-create --protocol tcp --action deny --source-ip-address 
1::1
  Created a new firewall_rule:
  +------------------------+--------------------------------------+
  | Field                  | Value                                |
  +------------------------+--------------------------------------+
  | action                 | deny                                 |
  | description            |                                      |
  | destination_ip_address |                                      |
  | destination_port       |                                      |
  | enabled                | True                                 |
  | firewall_policy_id     |                                      |
  | id                     | dca8cb81-f65b-4eef-afbe-60d0abb5eecf |
  | ip_version             | 4                                    |
  | name                   |                                      |
  | position               |                                      |
  | protocol               | tcp                                  |
  | shared                 | False                                |
  | source_ip_address      | 1::1                                 |
  | source_port            |                                      |
  | tenant_id              | 83bb2407a0fb484581bde56dc1fae293     |
  +------------------------+--------------------------------------+
  reedip@reedip-VirtualBox:/opt/stack/python-neutronclient/neutronclient$

  On specifying an IPv6 source address, the ip_version is populated as 4
  (IPv4), which is not right. If the IP version is not specified, it
  should be derived from the source/destination IP.

  
  Need to confirm an additional test case:
  - If the IP version is specified and it does not match the IP version of
  the source/destination address, then a failure should be reported
  (e.g. if --ip-version is given as 6 and the source address is given as
  192.168.101.1).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497066/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497066] [NEW] If IP version is not specified while creating Firewall Rule, then it should populate it based on the Source and Destination IP

2015-09-17 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Example:
reedip@reedip-VirtualBox:/opt/stack/python-neutronclient/neutronclient$ neutron 
firewall-rule-create --protocol tcp --action deny --source-ip-address 1::1
Created a new firewall_rule:
+------------------------+--------------------------------------+
| Field                  | Value                                |
+------------------------+--------------------------------------+
| action                 | deny                                 |
| description            |                                      |
| destination_ip_address |                                      |
| destination_port       |                                      |
| enabled                | True                                 |
| firewall_policy_id     |                                      |
| id                     | dca8cb81-f65b-4eef-afbe-60d0abb5eecf |
| ip_version             | 4                                    |
| name                   |                                      |
| position               |                                      |
| protocol               | tcp                                  |
| shared                 | False                                |
| source_ip_address      | 1::1                                 |
| source_port            |                                      |
| tenant_id              | 83bb2407a0fb484581bde56dc1fae293     |
+------------------------+--------------------------------------+
reedip@reedip-VirtualBox:/opt/stack/python-neutronclient/neutronclient$ 

On specifying an IPv6 source address, the ip_version is populated as 4
(IPv4), which is not right. If the IP version is not specified, it should
be derived from the source/destination IP.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
If IP version is not specified while creating Firewall Rule, then it should 
populate it based on the Source and Destination IP
https://bugs.launchpad.net/bugs/1497066
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497057] [NEW] DB filtering applies REGEX to non-string columns

2015-09-17 Thread Richard Jones
Public bug reported:

While writing the patch https://review.openstack.org/#/c/224431/ I
noticed that a separate test failed if I did not str() the filter
values. The failing test is
nova.tests.unit.compute.test_compute.ComputeAPITestCase.test_get_all_by_state
and I'll attach the output, but it is also at this test output URL:

http://logs.openstack.org/31/224431/1/check/gate-nova-python27/41a4c00/console.html#_2015-09-17_05_47_20_844
(also at http://paste.openstack.org/show/467079/)

The code is in nova/db/sqlalchemy/api.py in the _regex_instance_filter
function.  It would make sense to move non-string columns off to a
separate filter, rather than applying a REGEX to them.
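
A minimal sketch of the suggested split, not nova's actual code: partition
the filters by column type so that only string columns go through the REGEX
path and everything else gets exact matching (assumes SQLAlchemy models):

    from sqlalchemy import String

    def split_filters(model, filters):
        regex_filters, exact_filters = {}, {}
        for key, value in filters.items():
            column_type = getattr(model, key).type
            if isinstance(column_type, String):
                regex_filters[key] = value   # safe to REGEX
            else:
                exact_filters[key] = value   # e.g. power_state (Integer)
        return regex_filters, exact_filters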

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  While writing the patch https://review.openstack.org/#/c/224431/ I
  noticed that a separate test failed if I did not str() the filter
  values. The failing test is
  nova.tests.unit.compute.test_compute.ComputeAPITestCase.test_get_all_by_state
- and the output of it is below, but also at this test output URL:
+ and I'll attach the output, but it is also at this test output URL:
  
  
http://logs.openstack.org/31/224431/1/check/gate-nova-python27/41a4c00/console.html#_2015-09-17_05_47_20_844
  (also at http://paste.openstack.org/show/467079/)
  
  The code is in nova/db/sqlalchemy/api.py in the _regex_instance_filter
  function.  It would make sense to move non-string columns off to a
  separate filter, rather than applying a REGEX to them.
- 
- 
- 2015-09-17 05:47:20.844 | 
nova.tests.unit.compute.test_compute.ComputeAPITestCase.test_get_all_by_state
- 2015-09-17 05:47:20.844 | 
-
- 2015-09-17 05:47:20.844 | 
- 2015-09-17 05:47:20.844 | Captured pythonlogging:
- 2015-09-17 05:47:20.844 | ~~~
- 2015-09-17 05:47:20.844 | 2015-09-17 05:45:10,858 INFO [nova.virt.driver] 
Loading compute driver 'nova.virt.fake.SmallFakeDriver'
- 2015-09-17 05:47:20.844 | 2015-09-17 05:45:10,858 WARNING 
[nova.compute.monitors] Excluding nova.compute.monitors.cpu monitor 
virt_driver. Not in the list of enabled monitors (CONF.compute_monitors).
- 2015-09-17 05:47:20.844 | 2015-09-17 05:45:10,859 INFO 
[nova.compute.resource_tracker] Auditing locally available compute resources 
for node fakenode1
- 2015-09-17 05:47:20.844 | 2015-09-17 05:45:10,866 WARNING 
[nova.compute.resource_tracker] No compute node record for fake-mini:fakenode1
- 2015-09-17 05:47:20.844 | 2015-09-17 05:45:10,869 INFO 
[nova.compute.resource_tracker] Compute_service record created for 
fake-mini:fakenode1
- 2015-09-17 05:47:20.845 | 2015-09-17 05:45:10,906 INFO 
[nova.compute.resource_tracker] Total usable vcpus: 1, total allocated vcpus: 0
- 2015-09-17 05:47:20.845 | 2015-09-17 05:45:10,906 INFO 
[nova.compute.resource_tracker] Final resource view: name=fakenode1 
phys_ram=8192MB used_ram=512MB phys_disk=1028GB used_disk=0GB total_vcpus=1 
used_vcpus=0 pci_stats=PciDevicePoolList(objects=[])
- 2015-09-17 05:47:20.845 | 2015-09-17 05:45:10,907 INFO 
[nova.compute.resource_tracker] Compute_service record updated for 
fake-mini:fakenode1
- 2015-09-17 05:47:20.845 | 2015-09-17 05:45:10,907 INFO 
[nova.compute.manager] Deleting orphan compute node 2
- 2015-09-17 05:47:20.845 | 
- 2015-09-17 05:47:20.845 | 
- 2015-09-17 05:47:20.845 | Captured traceback:
- 2015-09-17 05:47:20.845 | ~~~
- 2015-09-17 05:47:20.845 | Traceback (most recent call last):
- 2015-09-17 05:47:20.845 |   File 
"nova/tests/unit/compute/test_compute.py", line 8248, in test_get_all_by_state
- 2015-09-17 05:47:20.845 | search_opts={'power_state': 
power_state.SUSPENDED})
- 2015-09-17 05:47:20.846 |   File "nova/compute/api.py", line 2115, in 
get_all
- 2015-09-17 05:47:20.846 | sort_keys=sort_keys, sort_dirs=sort_dirs)
- 2015-09-17 05:47:20.846 |   File "nova/compute/api.py", line 2165, in 
_get_instances_by_filters
- 2015-09-17 05:47:20.846 | expected_attrs=fields, sort_keys=sort_keys, 
sort_dirs=sort_dirs)
- 2015-09-17 05:47:20.846 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
 line 171, in wrapper
- 2015-09-17 05:47:20.846 | result = fn(cls, context, *args, **kwargs)
- 2015-09-17 05:47:20.846 |   File "nova/objects/instance.py", line 1117, 
in get_by_filters
- 2015-09-17 05:47:20.846 | use_slave=use_slave)
- 2015-09-17 05:47:20.846 |   File "nova/db/api.py", line 671, in 
instance_get_all_by_filters
- 2015-09-17 05:47:20.846 | use_slave=use_slave)
- 2015-09-17 05:47:20.846 |   File "nova/db/sqlalchemy/api.py", line 216, 
in wrapper
- 2015-09-17 05:47:20.846 | return f(*args, **kwargs)
- 2015-09-17 05:47:20.847 |   File "nova/db/sqlalchemy/api.py", line 1875, 
in instance_get_all_by_filters
- 2015-09-17 

[Yahoo-eng-team] [Bug 1497054] [NEW] Use `discover_extensions` for novaclient

2015-09-17 Thread Lin Hua Cheng
Public bug reported:

Use `discover_extensions` instead of list_extensions

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1497054

Title:
  Use `discover_extensions` for novaclient

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Use `discover_extensions` instead of list_extensions

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1497054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497056] [NEW] Allow configuration of nova micro-version in the settings

2015-09-17 Thread Lin Hua Cheng
Public bug reported:


Right now the nova API version is hardcoded to version 2 [1]; we need to
make it configurable through the OPENSTACK_API_VERSIONS setting.


[1] 
https://github.com/openstack/horizon/commit/27ceba6035c22b7f11e529ddbc287f27568bb8e8#diff-5d48f239612e8fefeec744c1a45887a9
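
A hedged sketch of what this could look like in local_settings.py; the
"compute" entry is the proposed addition, mirroring the existing keys:

    OPENSTACK_API_VERSIONS = {
        "identity": 3,
        "compute": 2,  # hypothetical knob for the nova (micro-)version
    }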

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1497056

Title:
  Allow configuration of nova micro-version in the settings

Status in OpenStack Dashboard (Horizon):
  New

Bug description:

  Right now the nova API version is hardcoded to version 2 [1]; we need
  to make it configurable through the OPENSTACK_API_VERSIONS setting.

  
  [1] 
https://github.com/openstack/horizon/commit/27ceba6035c22b7f11e529ddbc287f27568bb8e8#diff-5d48f239612e8fefeec744c1a45887a9

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1497056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463698] Re: XSS

2015-09-17 Thread Lin Hua Cheng
Amit did not respond to an important question: whether horizon and swift
are running on the same domain.

From the screenshot, the image is opened using the Swift Public URL
endpoint.

And it seems like Swift is running on the same domain as horizon,
allowing the script to access the horizon cookie.

The reported bug is invalid for Horizon.

This is more of a deployment issue.

Horizon has already documented configuration to avoid this XSS attack in:
https://github.com/openstack/horizon/blob/master/doc/source/topics/deployment.rst

By setting:
CSRF_COOKIE_HTTPONLY = True
SESSION_COOKIE_HTTPONLY = True


** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1463698

Title:
  XSS

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Object Storage (swift):
  Invalid

Bug description:
  2.14.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1463698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests

2015-09-17 Thread Angus Salkeld
** Also affects: heat/kilo
   Importance: Undecided
   Status: New

** Changed in: heat/kilo
   Importance: Undecided => Medium

** Tags removed: in-stable-kilo kilo-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361360

Title:
  Eventlet green threads not released back to the pool leading to
  choking of new requests

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in Cinder juno series:
  Fix Released
Status in Glance:
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in heat:
  Fix Released
Status in heat kilo series:
  New
Status in Keystone:
  Fix Released
Status in Keystone icehouse series:
  Confirmed
Status in Keystone juno series:
  Fix Committed
Status in Keystone kilo series:
  Fix Released
Status in Manila:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  Won't Fix
Status in Sahara:
  Confirmed

Bug description:
  Currently reproduced on Juno milestone 2, but this issue should be
  reproducible in all releases since its inception.

  It is possible to choke OpenStack API controller services using
  wsgi+eventlet library by simply not closing the client socket
  connection. Whenever a request is received by any OpenStack API
  service for example nova api service, eventlet library creates a green
  thread from the pool and starts processing the request. Even after the
  response is sent to the caller, the green thread is not returned back
  to the pool until the client socket connection is closed. This way,
  any malicious user can send many API requests to the API controller
  node and determine the wsgi pool size configured for the given service
  and then send those many requests to the service and after receiving
  the response, wait there infinitely doing nothing leading to
  disrupting services for other tenants. Even when service providers
  have enabled rate limiting feature, it is possible to choke the API
  services with a group (many tenants) attack.

  The following program illustrates choking of nova-api services (but this
  problem is omnipresent in all other OpenStack API Services using
  wsgi+eventlet)

  Note: I have explicitly set the wsgi_default_pool_size default value to 10
  in nova/wsgi.py in order to reproduce this problem.
  After you run the program below, you should try to invoke the API.
  

  import time
  import requests
  from multiprocessing import Process

  def request(number):
      # Port is important here
      path = 'http://127.0.0.1:8774/servers'
      try:
          response = requests.get(path)
          print "RESPONSE %s-%d" % (response.status_code, number)
          # During this sleep time, check whether the client socket
          # connection is released on the API controller node.
          time.sleep(1000)
          print "Thread %d complete" % number
      except requests.exceptions.RequestException as ex:
          print "Exception occurred %d-%s" % (number, str(ex))

  if __name__ == '__main__':
      processes = []
      for number in range(40):
          p = Process(target=request, args=(number,))
          p.start()
          processes.append(p)
      for p in processes:
          p.join()

  


  Presently, the wsgi server allows persistent connections if you configure
  keepalive to True, which is the default.
  In order to close the client socket connection explicitly after the response 
is sent and read successfully by the client, you simply have to set keepalive 
to False when you create a wsgi server.
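
  A minimal illustration of that mitigation using eventlet's wsgi server
  directly (OpenStack services wrap this in their own Server classes):

      import eventlet
      import eventlet.wsgi

      def serve(app, host='0.0.0.0', port=8774):
          sock = eventlet.listen((host, port))
          # keepalive=False closes the client socket once the response
          # is sent, returning the green thread to the pool immediately.
          eventlet.wsgi.server(sock, app, keepalive=False)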

  Additional information: By default eventlet passes “Connection: keepalive” if 
keepalive is set to True when a response is sent to the client. But it
doesn't have the capability to set the timeout and max parameters.
  For example.
  Keep-Alive: timeout=10, max=5

  Note: After we have disabled keepalive in all the OpenStack API
  service using wsgi library, then it might impact all existing
  applications built with the assumptions that OpenStack API services
  uses persistent connections. They might need to modify their
  applications if reconnection logic is not in place and also they might
  experience the performance has slowed down as it will need to
  reestablish the http connection for every request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1484086] Re: ec2tokens authentication is failing during Heat tests

2015-09-17 Thread Angus Salkeld
** Also affects: heat/kilo
   Importance: Undecided
   Status: New

** Changed in: heat/kilo
   Importance: Undecided => Medium

** Tags removed: in-stable-kilo kilo-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1484086

Title:
  ec2tokens authentication is failing during Heat tests

Status in heat:
  Fix Released
Status in heat kilo series:
  New
Status in Keystone:
  Incomplete

Bug description:
  As seen here for example: http://logs.openstack.org/54/194054/37/check
  /gate-heat-dsvm-functional-orig-mysql/a812f55/

  We're getting the error: "Non-default domain is not supported" which
  seems to have been introduced here:
  https://review.openstack.org/#/c/208069/

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1484086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497029] [NEW] newest horizon broken in compressed state

2015-09-17 Thread Kevin Fox
Public bug reported:

In the javascript console, I see the following if I have
COMPRESS_ENABLED=True but not if its COMPRESS_ENABLED=False:

TypeError: ({argument_template:"Name:Description:Mapping Type:http://docs.openstack.org/developer/sahara/userdoc/edp.html for 
definitions.\">Positional 
ArgumentNamed ParameterConfiguration ValueLocation:Value Type:StringNumberData SourceRequired?:Default Value:", job_interface:null, 
argument_ids:null, value_type:null, add_argument_button:null, 
value_type_default:null, current_value_type:(function (){return 
this.value_type.find("option:selected").html();}), 
mark_argument_element_as_wrong:(function (id){$("#"+id).addClass("error");}), 
get_next_argument_id:(function 
 (){var 
max=-1;$(".argument-form").each(function(){max=Math.max(max,parseInt($(this).attr("id_attr")));});return
 max+1;}), set_argument_ids:(function (){var 
ids=[];$(".argument-form").each(function(){var 
id=parseInt($(this).attr("id_attr"));if(!isNaN(id)){ids.push(id);}});this.argument_ids.val(JSON.stringify(ids));}),
 add_argument_node:(function 
(id,name,description,mapping_type,location,value_type,required,default_value){var
 
tmp=this.argument_template.replace(/\$id/g,id).replace(/\$name/g,name).replace(/\$description/g,description).replace(/\$mapping_type/g,mapping_type).replace(/\$location/g,location).replace(/\$value_type/g,value_type).replace(/\$required/g,required).replace(/\$default_value/g,default_value);this.job_interface.find("div:last").after(tmp);this.job_interface.show();this.set_argument_ids();}),
 add_interface_argument:(function (){var 
value_type=this.current_value_type();if(value_type===this.value_type_default){return;}
this.add_argument_node(this.get_next_argument_id(),"","","args","",value_type,true,"");$(".count-field").change();}),
 delete_interface_argument:(function (el){$(el).closest("div").remove();var 
id=this.get_next_argument_id();if(id===0){this.job_interface.hide();}
this.set_argument_ids();}), init_arguments:(function 
(){$("body").tooltip({selector:".help-icon"});this.job_interface=$("#job_interface_arguments");this.argument_ids=$("#argument_ids");this.value_type=$("#value_type");this.add_argument_button=$("#add_argument_button");this.value_type_default=this.current_value_type();this.value_type.change(function(){if(horizon.job_interface_arguments.current_value_type()===this.value_type_default){horizon.job_interface_arguments.add_argument_button.addClass("disabled");}else{horizon.job_interface_arguments.add_argument_button.removeClass("disabled");}});this.job_interface.hide();})})
 is not a function


[remainder of the compressed JavaScript truncated]

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1497029

Title:
  newest horizon broken in compressed state

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the javascript console, I see the following if I have
  COMPRESS_ENABLED=True but not if its COMPRESS_ENABLED=False:

  TypeError: ({argument_template:"Name:Description:Mapping Type:http://docs.openstack.org/developer/sahara/userdoc/edp.html for 
definitions.\">Positional 
ArgumentNamed ParameterConfiguration ValueLocation:Value Type:StringNumberData SourceRequired?:Default Value:", job_interface:null, 
argument_ids:null, value_type:null, add_argument_button:null, 
value_type_default:null, current_value_type:(function (){return 
this.value_type.find("option:selected").html();}), 
mark_argument_element_as_wrong:(function (id){$("#"+id).addClass("error");}), 
get_next_argument_id:(functio
 n (){var 
max=-1;$(".argument-form").each(function(){max=Math.max(max,parseInt($(this).attr("id_attr")));});return
 max+1;}), set_argument_ids:(function (){var 
ids=[];$(".argument-form").each(function(){var 
id=parseInt($(this).attr("id_attr"));if(!isNaN(id)){ids.push(id);}});this.argument_ids.val(JSON.stringify(ids));}),
 add_argument_node:(function 
(id,name,description,mapping_type,location,value_type,required,default_value){var
 
tmp=this.argument_template.replace(/\$id/g,id).replace(/\$name/g,name).replace(/\$description/g,description).replace(/\$mapping_type/g,mapping_type).replace(/\$location/g,location).replace(/\$value_type/g,value_type).replace(/\$required/g,required).replace(/\$default_value/g,default_value);this.job_interface.find("div:last").after(tmp);this.job_interface.show();this.set_argument_ids();}),
 add_interface_argument:(function (){var 
value_type=this.current_value_type();if(value_type===this.value_type_default){return;}
  
this.add_argument_node(this.get_next_argument_id(),"","","args","",value_type,true,"");$(".count-field").change();}),
 delete_interface_argument:(function (el){$(el).closest("div").remove();var 
id=this.get_next_argument_id();if(id===0){this.job_interface.hide();}
  this.set_argument_ids();}), init_arguments:(function 
(){$("body").tooltip({selector:".help-icon"});this.job_interface=$("#job_interface_arguments");this.argument_ids=$("#argument_ids");this.value_type=$("#value_type");this.add_argument_button=

[Yahoo-eng-team] [Bug 1497027] [NEW] Empty bridges are not removed

2015-09-17 Thread Mathieu Gagné
Public bug reported:

Removal of empty bridges has been disabled in bug #1328546 to fix a
race condition between Nova and Neutron where a bridge would be removed
if the only instance using it is rebooted. This means empty bridges will
pile up over time.

There should be a way to clean them up.

** Affects: neutron
 Importance: Undecided
 Assignee: Mathieu Gagné (mgagne)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497027

Title:
  Empty bridges are not removed

Status in neutron:
  In Progress

Bug description:
  Removal of empty bridges has been disabled in bug #1328546 to fix a
  race condition between Nova and Neutron where a bridge would be
  removed if the only instance using it is rebooted. This means empty
  bridges will pile up over time.

  There should be a way to clean them up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497027/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457551] Re: Another Horizon login page vulnerability to a DoS attack

2015-09-17 Thread Nathan Kinder
This has been published as OSSN-0054:

  https://wiki.openstack.org/wiki/OSSN/OSSN-0054

** Changed in: ossn
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1457551

Title:
  Another Horizon login page vulnerability to a DoS attack

Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  This bug is very similar to: https://bugs.launchpad.net/bugs/1394370

  Steps to reproduce:
  1) Setup Horizon to use db as session engine (using this doc: 
http://docs.openstack.org/admin-guide-cloud/content/dashboard-session-database.html).
 I've used MySQL.
  2)  Run 'for i in {1..100}; do  curl -b "sessionid=a;" 
http://HORIZON__IP/auth/login/ &> /dev/null; done' from your terminal.
  I've got 100 rows in django_session after this.

  I've used a devstack installation, just with an updated master branch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1457551/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464750] Re: Service accounts can be used to login horizon

2015-09-17 Thread Nathan Kinder
This has been published as OSSN-0055:

  https://wiki.openstack.org/wiki/OSSN/OSSN-0055

** Changed in: ossn
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1464750

Title:
  Service accounts can be used to login horizon

Status in OpenStack Dashboard (Horizon):
  Incomplete
Status in OpenStack Compute (nova):
  Incomplete
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  This is not a bug and may / may not be a security issue ... but it
  appears that the service accounts created in keystone are of the same
  privilege level as any other admin accounts created through keystone,
  and I don't like that.

  Would it be possible to implement something that would distinguish
  user accounts from service accounts?  Is there a way to isolate some
  service accounts from the rest of the OpenStack APIs?

  One quick example of this is that any service account has admin
  privileges on all the other services. At this point, I'm trying to
  figure out why we are creating a distinct service account for each
  service if nothing isolates them.

  IE:

  glance account can spawn a VM
  cinder account can delete an image
  heat account can delete a volume
  nova account can create an image

  
  All of these service accounts have access to the horizon dashboard. One
  small hack could be to prevent those accounts from logging in to Horizon.

  Thanks,

  Dave

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1464750/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496960] Re: webhook reporter posts url encoded data not json data

2015-09-17 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.7~bzr1146-0ubuntu1

---
cloud-init (0.7.7~bzr1146-0ubuntu1) wily; urgency=medium

  * New upstream snapshot.
* make the webhook reporter post json data rather than
  urlencoded data (LP: #1496960)

 -- Scott Moser   Thu, 17 Sep 2015 15:59:35 -0400

** Changed in: cloud-init (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1496960

Title:
  webhook reporter posts url encoded data not json data

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  Fix Released

Bug description:
  Ricardo was trying to tie reporting into maas and found
  that it is posting things like:
   
timestamp=1442509672.620882&result=SUCCESS&origin=cloudinit&description=running+modules+for+final&event_type=finish&name=modules-final

  rather than posting json data representing that same stuff, as curtin is 
doing.
  Need to change cloud-init to be in line with what curtin is doing and maas 
expects.
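
  A minimal sketch of the direction of the fix, using requests for brevity;
  cloud-init's actual reporter does not necessarily use this library:

      import json
      import requests

      def post_event(url, event):
          # Post a JSON body (what MAAS/curtin expect) instead of a
          # urlencoded form.
          return requests.post(
              url, data=json.dumps(event),
              headers={'Content-Type': 'application/json'})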

  ProblemType: Bug
  DistroRelease: Ubuntu 15.10
  Package: cloud-init 0.7.7~bzr1144-0ubuntu1
  ProcVersionSignature: User Name 4.2.0-7.7-generic 4.2.0
  Uname: Linux 4.2.0-7-generic x86_64
  ApportVersion: 2.18.1-0ubuntu1
  Architecture: amd64
  Date: Thu Sep 17 17:57:03 2015
  Ec2AMI: ami-0589
  Ec2AMIManifest: FIXME
  Ec2AvailabilityZone: nova
  Ec2InstanceType: m1.small
  Ec2Kernel: None
  Ec2Ramdisk: None
  PackageArchitecture: all
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: cloud-init
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1496960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496998] [NEW] fernet token provider is experimental

2015-09-17 Thread Brant Knudson
Public bug reported:


The fernet token provider is experimental. It's not passing the tempest tests 
that all other providers work with.

- The documentation should say that the fernet provider is experimental
- If the deployer starts keystone with the fernet provider configured, a
warning message should be logged

** Affects: keystone
 Importance: Undecided
 Assignee: Brant Knudson (blk-u)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1496998

Title:
  fernet token provider is experimental

Status in Keystone:
  In Progress

Bug description:
  
  The fernet token provider is experimental. It's not passing the tempest tests 
that all other providers work with.

  - The documentation should say that the fernet provider is experimental
  - If the deployer starts keystone with the fernet provider configured, a
  warning message should be logged

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1496998/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496960] Re: webhook reporter posts url encoded data not json data

2015-09-17 Thread Scott Moser
** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1496960

Title:
  webhook reporter posts url encoded data not json data

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  Ricardo was trying to tie reporting into maas and found
  that it is posting things like:
   
timestamp=1442509672.620882&result=SUCCESS&origin=cloudinit&description=running+modules+for+final&event_type=finish&name=modules-final

  rather than posting json data representing that same stuff, as curtin is 
doing.
  Need to change cloud-init to be in line with what curtin is doing and maas 
expects.

  ProblemType: Bug
  DistroRelease: Ubuntu 15.10
  Package: cloud-init 0.7.7~bzr1144-0ubuntu1
  ProcVersionSignature: User Name 4.2.0-7.7-generic 4.2.0
  Uname: Linux 4.2.0-7-generic x86_64
  ApportVersion: 2.18.1-0ubuntu1
  Architecture: amd64
  Date: Thu Sep 17 17:57:03 2015
  Ec2AMI: ami-0589
  Ec2AMIManifest: FIXME
  Ec2AvailabilityZone: nova
  Ec2InstanceType: m1.small
  Ec2Kernel: None
  Ec2Ramdisk: None
  PackageArchitecture: all
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: cloud-init
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1496960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496974] [NEW] Improve performance of _get_dvr_sync_data

2015-09-17 Thread Ryan Moats
Public bug reported:

Today, when scheduling a router to a host, _get_dvr_sync_data makes a
call to get all ports on that host. This causes the time to schedule a
new router to increase as the number of routers on the host increases.

What can we do to improve performance by limiting the number of ports
that we need to return to the agent?

Marked high and kilo-backport-potential because the source problem is
in an existing operator cloud running stable/kilo

** Affects: neutron
 Importance: High
 Status: New


** Tags: kilo-backport-potential l3-dvr-backlog performance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496974

Title:
  Improve performance of _get_dvr_sync_data

Status in neutron:
  New

Bug description:
  Today, when scheduling a router to a host, _get_dvr_sync_data makes a
  call to get all ports on that host. This causes the time to schedule
  a new router to increase as the number of routers on the host
  increases.

  What can we do to improve performance by limiting the number of ports
  that we need to return to the agent?

  Marked high and kilo-backport-potential because the source problem is
  in an existing operator cloud running stable/kilo

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496974/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496953] [NEW] Instance status values in CSV summary are not translated

2015-09-17 Thread Tony Dunbar
Public bug reported:

I prepared my environment using the pseudo translation tool to use
German.

When I navigate to the Admin->System->Overview and click on the
"Download CSV Summary" button, the instance status/state values
("Active", "Stopped") in the generated csv file are not translated into
the locale I'm using.  However when I navigate to the instance the
Status values are translated.

You can see in the attached screen shot the translated values from the
instance page and the non-translated values from the csv file.

** Affects: horizon
 Importance: Undecided
 Assignee: Tony Dunbar (adunbar)
 Status: New


** Tags: i18n

** Attachment added: "status.jpg"
   https://bugs.launchpad.net/bugs/1496953/+attachment/4467023/+files/status.jpg

** Changed in: horizon
 Assignee: (unassigned) => Tony Dunbar (adunbar)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1496953

Title:
  Instance status values in CSV summary are not translated

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I prepared my environment using the pseudo translation tool to use
  German.

  When I navigate to the Admin->System->Overview and click on the
  "Download CSV Summary" button, the instance status/state values
  ("Active", "Stopped") in the generated csv file are not translated
  into the locale I'm using.  However when I navigate to the instance
  the Status values are translated.

  You can see in the attached screen shot the translated values from the
  instance page and the non-translated values from the csv file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1496953/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496946] [NEW] keystone v2 allows creation of is_domain=True projects

2015-09-17 Thread Henrique Truta
Public bug reported:

In the keystone v2 controller layer, there is no check whether the project
has the is_domain field set to True:
https://github.com/openstack/keystone/blob/master/keystone/resource/controllers.py#L95

Keystone v2 must not allow the creation of such projects.
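
A minimal sketch of the missing guard, illustrative only; the real check
would live next to the linked controllers.py code and raise keystone's own
validation error:

    def assert_not_domain_project(project_ref):
        if project_ref.get('is_domain', False):
            # The v2.0 API has no notion of projects acting as domains.
            raise ValueError(
                "v2.0 API cannot create projects with is_domain=True")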

** Affects: keystone
 Importance: Undecided
 Assignee: Henrique Truta (henriquetruta)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Henrique Truta (henriquetruta)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1496946

Title:
  keystone v2 allows creation of is_domain=True projects

Status in Keystone:
  New

Bug description:
  In the keystone v2 controller layer, there is no check whether the project
  has the is_domain field set to True:
  https://github.com/openstack/keystone/blob/master/keystone/resource/controllers.py#L95

  keystone v2 must not allow the creation of such projects.
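
  A minimal sketch of the missing guard, assuming it would live in the v2
  project-create path (the exception class below is a stand-in for
  whatever keystone would map to a 400-level response):

      # hypothetical guard for the v2 create path
      class DomainProjectNotAllowed(Exception):
          """Stand-in for the appropriate keystone validation exception."""

      def assert_not_domain_project(project_ref):
          # v2 has no concept of is_domain, so such projects must be rejected
          if project_ref.get('is_domain', False):
              raise DomainProjectNotAllowed(
                  'Projects with is_domain=True cannot be created via v2')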

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1496946/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496943] [NEW] v2 does not filter is_domain=True projects in get_project_by_name

2015-09-17 Thread Henrique Truta
Public bug reported:

keystone v2 must not return any project that has the is_domain field set to
True. This is not done in get_project_by_name; see:
https://github.com/openstack/keystone/blob/master/keystone/resource/controllers.py#L77

** Affects: keystone
 Importance: Undecided
 Assignee: Henrique Truta (henriquetruta)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Henrique Truta (henriquetruta)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1496943

Title:
  v2 does not filter is_domain=True projects in get_project_by_name

Status in Keystone:
  New

Bug description:
  keystone v2 must not return any project that has the is_domain field set to
  True. This is not done in get_project_by_name; see:
  https://github.com/openstack/keystone/blob/master/keystone/resource/controllers.py#L77
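
  A minimal sketch of the missing filter, assuming the v2 controller
  handles project refs as dicts carrying an is_domain key (names are
  illustrative):

      # hypothetical filter for the v2 get_project_by_name path
      def filter_domain_projects(project_refs):
          # v2 clients must never see projects acting as domains
          return [ref for ref in project_refs
                  if not ref.get('is_domain', False)]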

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1496943/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496930] [NEW] instance launch failed

2015-09-17 Thread El Mehdi
Public bug reported:

Hello, I followed the documentation " http://docs.openstack.org/kilo
/config-reference/content/vmware.html " to connect ESXi with OpenStack
Juno. I put the following configuration in the nova.conf file on the
compute node:

[DEFAULT]
compute_driver=vmwareapi.VMwareVCDriver
 
[vmware]
host_ip=
host_username=
host_password=
cluster_name=
datastore_regex=

And in the nova-compute.conf :

[DEFAULT]
compute_driver=vmwareapi.VMwareVCDriver


But in vain: on the Juno OpenStack dashboard, when I want to launch an
instance, I get the error " Error: Failed to launch instance "Test": Please try
again later [Error: No valid host was found. ]. ". Is there any idea how to
launch an instance on my ESXi?

attached the logs on the controller and compute node:

==> nova-conductor

ERROR nova.scheduler.utils [req-618d4ee3-c936-4249-9f8c-7c266d5f9264 None] 
[instance: 0c1ee287-edfe-4258-bb43-db23338bbe90] Error from last host: 
ComputeNode (node domain-c65(Compute)): [u'Traceback (most recent call 
last):\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", 
line 2054, in _do_build_and_run_instance\nfilter_properties)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2185, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', u'RescheduledException: Build of instance 
0c1ee287-edfe-4258-bb43-db23338bbe90 was re-scheduled: Network could not be 
found for bridge br-int\n']
2015-09-17 15:31:34.921 2432 WARNING nova.scheduler.driver 
[req-618d4ee3-c936-4249-9f8c-7c266d5f9264 None] [instance: 
0c1ee287-edfe-4258-bb43-db23338bbe90] NoValidHost exception with message: 'No 
valid host was found.'


=> neutron 
2015-09-17 12:36:09.398 1840 ERROR oslo.messaging._drivers.common 
[req-775407a3-d756-4677-bdb9-0ddfe2fac50c ] Returning exception More than one 
external network exists to caller
2015-09-17 12:36:09.398 1840 ERROR oslo.messaging._drivers.common 
[req-775407a3-d756-4677-bdb9-0ddfe2fac50c ] ['Traceback (most recent call 
last):\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, 
in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, 
in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', '  
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/api/rpc/handlers/l3_rpc.py", line 
149, in get_external_network_id\nnet_id = 
self.plugin.get_external_network_id(context)\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/db/external_net_db.py", line 161, in 
get_external_network_id\nraise n_exc.TooManyExternalNetworks()\n', 
'TooManyExternalNetworks: More than one ext
 ernal network exists\n']


=>  compute Node / nova-compute

2015-09-17 15:28:22.323 5944 ERROR oslo.vmware.common.loopingcall [-] in fixed 
duration looping call
2015-09-17 15:31:33.550 5944 ERROR nova.compute.manager [-] [instance: 
0c1ee287-edfe-4258-bb43-db23338bbe90] Instance failed to spawn


=> nova-network / nova-compute

2015-09-17 11:21:10.840 1363 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP 
server on ControllerNode01:5672 is unreachable: [Errno 111] ECONNREFUSED. 
Trying again in 3 seconds.
2015-09-17 11:23:02.874 1363 ERROR nova.openstack.common.periodic_task [-] 
Error during VlanManager._disassociate_stale_fixed_ips: Timed out waiting for a 
reply to message ID b6d62061352e4590a37cbc0438ea3ef0

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496930

Title:
  instance launch failed

Status in neutron:
  New

Bug description:
  Hello, I followed the documentation " http://docs.openstack.org/kilo
  /config-reference/content/vmware.html " to connect ESXi with OpenStack
  Juno. I put the following configuration in the nova.conf file on the
  compute node:

  [DEFAULT]
  compute_driver=vmwareapi.VMwareVCDriver
   
  [vmware]
  host_ip=
  host_username=
  host_password=
  cluster_name=
  datastore_regex=

  And in the nova-compute.conf :

  [DEFAULT]
  compute_driver=vmwareapi.VMwareVCDriver

  
  But in vain: on the Juno OpenStack dashboard, when I want to launch an
  instance, I get the error " Error: Failed to launch instance "Test": Please
  try again later [Error: No valid host was found. ]. ". Is there any idea how
  to launch an instance on my ESXi?

  attached the logs on the controller and compute node:

  ==> nova-conductor

  ERROR nova.scheduler.utils [req-618d4ee3-c936-4249-9f8c-7c266d5f9264 None] 
[instance: 0c1ee287-edfe-4258-bb43-db23338bbe90] Error from last host: 
ComputeNode (node domain-c65(Compute)): [u'Traceback (most recent call 
last):\n',

[Yahoo-eng-team] [Bug 1496929] [NEW] instance launch failed

2015-09-17 Thread El Mehdi
Public bug reported:

Hello, I followed the documentation " http://docs.openstack.org/kilo
/config-reference/content/vmware.html " to connect ESXi with OpenStack
Juno. I put the following configuration in the nova.conf file on the
compute node:

[DEFAULT]
compute_driver=vmwareapi.VMwareVCDriver
 
[vmware]
host_ip=
host_username=
host_password=
cluster_name=
datastore_regex=

And in the nova-compute.conf :

[DEFAULT]
compute_driver=vmwareapi.VMwareVCDriver


But in vain: on the Juno OpenStack dashboard, when I want to launch an
instance, I get the error " Error: Failed to launch instance "Test": Please try
again later [Error: No valid host was found. ]. ". Is there any idea how to
launch an instance on my ESXi?

attached the logs on the controller and compute node:

==> nova-conductor

ERROR nova.scheduler.utils [req-618d4ee3-c936-4249-9f8c-7c266d5f9264 None] 
[instance: 0c1ee287-edfe-4258-bb43-db23338bbe90] Error from last host: 
ComputeNode (node domain-c65(Compute)): [u'Traceback (most recent call 
last):\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", 
line 2054, in _do_build_and_run_instance\nfilter_properties)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2185, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', u'RescheduledException: Build of instance 
0c1ee287-edfe-4258-bb43-db23338bbe90 was re-scheduled: Network could not be 
found for bridge br-int\n']
2015-09-17 15:31:34.921 2432 WARNING nova.scheduler.driver 
[req-618d4ee3-c936-4249-9f8c-7c266d5f9264 None] [instance: 
0c1ee287-edfe-4258-bb43-db23338bbe90] NoValidHost exception with message: 'No 
valid host was found.'


=> neutron 
2015-09-17 12:36:09.398 1840 ERROR oslo.messaging._drivers.common 
[req-775407a3-d756-4677-bdb9-0ddfe2fac50c ] Returning exception More than one 
external network exists to caller
2015-09-17 12:36:09.398 1840 ERROR oslo.messaging._drivers.common 
[req-775407a3-d756-4677-bdb9-0ddfe2fac50c ] ['Traceback (most recent call 
last):\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, 
in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, 
in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', '  
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/api/rpc/handlers/l3_rpc.py", line 
149, in get_external_network_id\nnet_id = 
self.plugin.get_external_network_id(context)\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/db/external_net_db.py", line 161, in 
get_external_network_id\nraise n_exc.TooManyExternalNetworks()\n', 
'TooManyExternalNetworks: More than one ext
 ernal network exists\n']


=>  compute Node / nova-compute

2015-09-17 15:28:22.323 5944 ERROR oslo.vmware.common.loopingcall [-] in fixed 
duration looping call
2015-09-17 15:31:33.550 5944 ERROR nova.compute.manager [-] [instance: 
0c1ee287-edfe-4258-bb43-db23338bbe90] Instance failed to spawn


=> nova-network / nova-compute

2015-09-17 11:21:10.840 1363 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP 
server on ControllerNode01:5672 is unreachable: [Errno 111] ECONNREFUSED. 
Trying again in 3 seconds.
2015-09-17 11:23:02.874 1363 ERROR nova.openstack.common.periodic_task [-] 
Error during VlanManager._disassociate_stale_fixed_ips: Timed out waiting for a 
reply to message ID b6d62061352e4590a37cbc0438ea3ef0

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496929

Title:
  instance launch failed

Status in OpenStack Compute (nova):
  New

Bug description:
  Hello, I followed the documentation " http://docs.openstack.org/kilo
  /config-reference/content/vmware.html " to connect ESXi with OpenStack
  Juno. I put the following configuration in the nova.conf file on the
  compute node:

  [DEFAULT]
  compute_driver=vmwareapi.VMwareVCDriver
   
  [vmware]
  host_ip=
  host_username=
  host_password=
  cluster_name=
  datastore_regex=

  And in the nova-compute.conf :

  [DEFAULT]
  compute_driver=vmwareapi.VMwareVCDriver

  
  But in vain: on the Juno OpenStack dashboard, when I want to launch an
  instance, I get the error " Error: Failed to launch instance "Test": Please
  try again later [Error: No valid host was found. ]. ". Is there any idea how
  to launch an instance on my ESXi?

  attached the logs on the controller and compute node:

  ==> nova-conductor

  ERROR nova.scheduler.utils [req-618d4ee3-c936-4249-9f8c-7c266d5f9264 None] 
[instance: 0c1ee287-edfe-4258-bb43-db23338bbe90] Error from last host: 
ComputeNode (node domain-c65(Compute)): [u'Tracebac

[Yahoo-eng-team] [Bug 1496932] [NEW] nova.console.websocketproxy fails if there is a cookie with invalid name

2015-09-17 Thread Ivan Mironov
Public bug reported:

If a cookie with an invalid name ('?' for example) is passed in the
request, websocketproxy will fail to handle it. The easiest way to
reproduce:

$ curl 'https://$NOVNCPROXY_HOST:$NOVNCPROXY_PORT/websockify' -H 
'Sec-WebSocket-Version: 13' -H 'Sec-WebSocket-Key: dGVzdAo=' -H 'Upgrade: 
websocket' -H 'Cookie: ?=!' -H 'Connection: Upgrade' -H 
'Sec-WebSocket-Protocol: binary, base64' --compressed
curl: (52) Empty reply from server

This request leads to following message in nova-novncproxy.log:

2015-09-17 18:45:45.443 14494 INFO nova.console.websocketproxy [-]
handler exception: Illegal key value: ?

In the real world this may happen when horizon is running on a subdomain
(e.g. sub.example.com), while some other "broken" application on the parent
domain (e.g. example.com) sets a cookie with an invalid name.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496932

Title:
  nova.console.websocketproxy fails if there is a cookie with invalid
  name

Status in OpenStack Compute (nova):
  New

Bug description:
  If a cookie with an invalid name ('?' for example) is passed in the
  request, websocketproxy will fail to handle it. The easiest way to
  reproduce:

  $ curl 'https://$NOVNCPROXY_HOST:$NOVNCPROXY_PORT/websockify' -H 
'Sec-WebSocket-Version: 13' -H 'Sec-WebSocket-Key: dGVzdAo=' -H 'Upgrade: 
websocket' -H 'Cookie: ?=!' -H 'Connection: Upgrade' -H 
'Sec-WebSocket-Protocol: binary, base64' --compressed
  curl: (52) Empty reply from server

  This request leads to following message in nova-novncproxy.log:

  2015-09-17 18:45:45.443 14494 INFO nova.console.websocketproxy [-]
  handler exception: Illegal key value: ?

  In the real world this may happen when horizon is running on a
  subdomain (e.g. sub.example.com), while some other "broken" application
  on the parent domain (e.g. example.com) sets a cookie with an invalid
  name.
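
  A minimal sketch of a tolerant parse, assuming the Python 2 Cookie
  module used by the Kilo-era proxy (the handler wiring around it is
  omitted):

      # hypothetical: skip cookies with illegal names instead of failing
      import Cookie

      def parse_cookies_leniently(header_value):
          cookies = {}
          for part in header_value.split(';'):
              name, _, value = part.strip().partition('=')
              try:
                  jar = Cookie.SimpleCookie()
                  jar.load('%s=%s' % (name, value))
                  cookies[name] = jar[name].value
              except Cookie.CookieError:
                  # e.g. 'Illegal key value: ?' -- drop just this cookie
                  continue
          return cookies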

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1496932/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494574] Re: Logging missing value types

2015-09-17 Thread Sergey Vilgelm
** No longer affects: python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494574

Title:
  Logging missing value types

Status in Cinder:
  Fix Committed
Status in heat:
  Fix Committed
Status in Ironic:
  In Progress
Status in Magnum:
  Fix Committed
Status in Manila:
  Fix Committed
Status in networking-midonet:
  In Progress
Status in neutron:
  Fix Committed
Status in os-brick:
  In Progress
Status in oslo.versionedobjects:
  In Progress
Status in tempest:
  Fix Released
Status in Trove:
  In Progress

Bug description:
  There are a few locations in the code where the log string is missing
  the formatting type, causing log messages to fail.

  
  FILE: ../OpenStack/cinder/cinder/volume/drivers/emc/emc_vnx_cli.py
  
  LOG.debug('EMC: Command Exception: %(rc) %(result)s. 
'  
  FILE: ../OpenStack/cinder/cinder/consistencygroup/api.py  
  
  LOG.error(_LE("CG snapshot %(cgsnap) not found 
when "
  LOG.error(_LE("Source CG %(source_cg) not found 
when "
  FILE: ../OpenStack/cinder/cinder/volume/drivers/emc/emc_vmax_masking.py   
  
  "Storage group %(sgGroupName) "   
  
  FILE: ../OpenStack/cinder/cinder/volume/manager.py
  
  '%(image_id) will not create cache 
entry.'),

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1494574/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496911] [NEW] Relieve user from configuring Live Migration type

2015-09-17 Thread Ying Zuo
Public bug reported:

Currently the Live Migration modal requires the user to decide the live
migration type (Live or Block) via the "Block Migration" checkbox.
However, only one of these will succeed and the other will fail, and the
user does not always know or care which type of live migration it should be.

For better UX, the user should not have to decide the live migration
type with Horizon.

** Affects: horizon
 Importance: Undecided
 Assignee: Ying Zuo (yinzuo)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Ying Zuo (yinzuo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1496911

Title:
  Relieve user from configuring Live Migration type

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently the Live Migration modal requires the user to decide the live
  migration type (Live or Block) via the "Block Migration" checkbox.
  However, only one of these will succeed and the other will fail, and
  the user does not always know or care which type of live migration it
  should be.

  For better UX, the user should not have to decide the live migration
  type with Horizon.
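
  One possible direction, sketched below, is to let Nova pick the type
  itself; newer Nova APIs accept block_migration='auto' (compute API
  microversion 2.25), though whether Horizon can rely on that depends on
  the deployment (the session object below is assumed to exist):

      # hypothetical: delegate the live/block decision to Nova
      from novaclient import client as nova_client

      nova = nova_client.Client('2.25', session=keystone_session)
      server = nova.servers.get(instance_id)
      # host=None lets the scheduler choose; 'auto' lets Nova pick the type
      server.live_migrate(host=None, block_migration='auto')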

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1496911/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496898] [NEW] prevent extraneous log messages and stdout prints

2015-09-17 Thread Venkatesh Sampath
Public bug reported:

When we run unit tests using './run_tests.sh glance.tests.unit', it
prints quite a lot of unwanted log messages and stdout output. This
clutters the whole unit test result output quite badly, making it
difficult to see what's happening.

Attaching a sample output of the unit tests.

** Affects: glance
 Importance: Undecided
 Assignee: Venkatesh Sampath (venkateshsampath)
 Status: In Progress


** Tags: glance test unit unittest

** Attachment added: "run_tests_sh_unit_tests_full_console_output.txt"
   
https://bugs.launchpad.net/bugs/1496898/+attachment/4466886/+files/run_tests_sh_unit_tests_full_console_output.txt

** Changed in: glance
 Assignee: (unassigned) => Venkatesh Sampath (venkateshsampath)

** Changed in: glance
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1496898

Title:
  prevent extraneous log messages and stdout prints

Status in Glance:
  In Progress

Bug description:
  When we run unit tests using './run_tests.sh glance.tests.unit', it
  prints quite a lot of unwanted log messages and stdout output. This
  clutters the whole unit test result output quite badly, making it
  difficult to see what's happening.

  Attaching a sample output of the unit tests.
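
  A minimal sketch of one common way to keep this quiet, assuming the
  suite can use the fixtures library (as most OpenStack test bases
  already do):

      # hypothetical: capture log output per-test instead of printing it
      import fixtures
      import testtools

      class QuietTestCase(testtools.TestCase):
          def setUp(self):
              super(QuietTestCase, self).setUp()
              # swallow log records emitted during the test; they remain
              # available on the fixture for assertions if needed
              self.log_fixture = self.useFixture(fixtures.FakeLogger())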

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1496898/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496746] Re: When hugepages is enabled shmmax is not changed

2015-09-17 Thread Andreas Hasenack
** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: landscape
   Importance: Undecided
   Status: New

** No longer affects: nova

** Also affects: landscape/cisco-odl
   Importance: Undecided
   Status: New

** Changed in: landscape/cisco-odl
Milestone: None => falkor-0.9

** No longer affects: landscape

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496746

Title:
  When hugepages is enabled shmmax is not changed

Status in Landscape Server cisco-odl series:
  New
Status in nova-compute package in Juju Charms Collection:
  Confirmed

Bug description:
  When enabling hugepages, Shared Memory Max (shmmax) must be greater than
  or equal to the total size of the hugepages.

  For 2MB pages, TotalHugepageSize = vm.nr_hugepages * 2 * 1024 * 1024
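
  A worked example of that arithmetic as a small sketch (the page count is
  illustrative):

      # minimum kernel.shmmax for 2MB hugepages
      nr_hugepages = 1024                    # e.g. vm.nr_hugepages = 1024
      total_hugepage_size = nr_hugepages * 2 * 1024 * 1024
      print(total_hugepage_size)             # 2147483648 bytes (2 GiB)
      # kernel.shmmax must then be >= 2147483648, e.g. via
      # 'sysctl -w kernel.shmmax=2147483648'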

To manage notifications about this bug go to:
https://bugs.launchpad.net/landscape/cisco-odl/+bug/1496746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496370] Re: deleted vm data should not be stored in database for ever

2015-09-17 Thread Markus Zoeller (markus_z)
AFAIK this is the way OpenStack does it. I can imagine that this
behavior is useful for audits, for example. Nova provides a way to
archive deleted rows:

nova-manage db archive_deleted_rows --max_rows 1

This has to be applied manually though.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496370

Title:
  deleted vm data should not be stored in database for ever

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The deleted VM data is not useful once the VM is truly deleted, whether
  immediately or some time later; it should not be stored in the database
  forever and should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1496370/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496893] [NEW] Unable to modify "IP Allocation pool" for public subnet from Horizon but can do it from CLI

2015-09-17 Thread Deepika
Public bug reported:

Running RedHat OSP7 with Kilo installed on my nodes.

The topology consists of:
1. Controller + Network
2. Compute1
3. Compute2

The CLI shows this for the public subnet :

[root@g07-controller-1 ~(keystone_admin)]# neutron subnet-list
+--------------------------------------+------+----------------+--------------------------------------------------+
| id                                   | name | cidr           | allocation_pools                                 |
+--------------------------------------+------+----------------+--------------------------------------------------+
| fefa2447-f5db-4ff6-afa9-a669385cfda4 |      | 10.30.118.0/27 | {"start": "10.30.118.10", "end": "10.30.118.20"} |
+--------------------------------------+------+----------------+--------------------------------------------------+
[root@g07-controller-1 ~(keystone_admin)]# 


Now from the CLI I am able to change the pool :

[root@g07-controller-1 ~(keystone_admin)]# neutron subnet-update 
--allocation-pool start=10.30.118.10,end=10.30.118.25 
fefa2447-f5db-4ff6-afa9-a669385cfda4 
Updated subnet: fefa2447-f5db-4ff6-afa9-a669385cfda4
[root@g07-controller-1 ~(keystone_admin)]# neutron subnet-list
+--------------------------------------+------+----------------+--------------------------------------------------+
| id                                   | name | cidr           | allocation_pools                                 |
+--------------------------------------+------+----------------+--------------------------------------------------+
| fefa2447-f5db-4ff6-afa9-a669385cfda4 |      | 10.30.118.0/27 | {"start": "10.30.118.10", "end": "10.30.118.25"} |
+--------------------------------------+------+----------------+--------------------------------------------------+


When I tried to do the same thing from Horizon, I was able to make the change
and also got a success message, but when I checked again the pool size had not
changed.


Some logs are :

==> /var/log/httpd/horizon_access.log <==
10.131.78.38 - - [17/Sep/2015:07:48:12 -0700] "GET 
/dashboard/admin/networks/d0047805-804b-4a03-942d-b27590d6aef4/detail HTTP/1.1" 
200 4415 
"http://10.30.118.4/dashboard/admin/networks/d0047805-804b-4a03-942d-b27590d6aef4/detail";
 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/600.5.17 (KHTML, 
like Gecko) Version/8.0.5 Safari/600.5.17"
10.131.78.38 - - [17/Sep/2015:07:48:12 -0700] "GET /dashboard/i18n/js/horizon/ 
HTTP/1.1" 200 2372 
"http://10.30.118.4/dashboard/admin/networks/d0047805-804b-4a03-942d-b27590d6aef4/detail";
 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/600.5.17 (KHTML, 
like Gecko) Version/8.0.5 Safari/600.5.17"

==> /var/log/neutron/server.log <==
2015-09-17 07:48:12.409 26650 INFO neutron.wsgi [-] (26650) accepted 
('10.30.118.4', 60599)
2015-09-17 07:48:12.411 26650 INFO neutron.wsgi 
[req-66292bc0-ac2c-48eb-972f-fe44673775fc ] 10.30.118.4 - - [17/Sep/2015 
07:48:12] "GET /v2.0/extensions.json HTTP/1.1" 200 4806 0.002139
2015-09-17 07:48:12.414 26650 INFO neutron.wsgi [-] (26650) accepted 
('10.30.118.4', 60600)
2015-09-17 07:48:12.421 26650 INFO neutron.wsgi 
[req-d1473f09-183d-4fd0-8739-0d5a87b7a935 ] 10.30.118.4 - - [17/Sep/2015 
07:48:12] "GET 
/v2.0/subnets.json?network_id=d0047805-804b-4a03-942d-b27590d6aef4 HTTP/1.1" 
200 670 0.007126
2015-09-17 07:48:12.423 26650 INFO neutron.wsgi [-] (26650) accepted 
('10.30.118.4', 60601)
2015-09-17 07:48:12.435 26650 INFO neutron.wsgi 
[req-2fe1133f-d998-41fd-88e4-f90cf97d13f7 ] 10.30.118.4 - - [17/Sep/2015 
07:48:12] "GET /v2.0/ports.json?network_id=d0047805-804b-4a03-942d-b27590d6aef4 
HTTP/1.1" 200 1444 0.011954
2015-09-17 07:48:12.437 26650 INFO neutron.wsgi [-] (26650) accepted 
('10.30.118.4', 60602)
2015-09-17 07:48:12.442 26650 INFO neutron.wsgi 
[req-4e76244a-35b6-46b5-84e4-f845a322f7d8 ] 10.30.118.4 - - [17/Sep/2015 
07:48:12] "GET 
/v2.0/networks/d0047805-804b-4a03-942d-b27590d6aef4/dhcp-agents.json HTTP/1.1" 
200 731 0.005297
2015-09-17 07:48:12.444 26650 INFO neutron.wsgi [-] (26650) accepted 
('10.30.118.4', 60603)
2015-09-17 07:48:12.455 26650 INFO neutron.wsgi 
[req-c89b4555-df61-4c98-9d4f-00a26b4db7fd ] 10.30.118.4 - - [17/Sep/2015 
07:48:12] "GET /v2.0/networks/d0047805-804b-4a03-942d-b27590d6aef4.json 
HTTP/1.1" 200 598 0.011052
2015-09-17 07:48:12.456 26650 INFO neutron.wsgi [-] (26650) accepted 
('10.30.118.4', 60604)
2015-09-17 07:48:12.465 26650 INFO neutron.wsgi 
[req-a4a36cb9-fb80-47de-a033-c9f7c25b6835 ] 10.30.118.4 - - [17/Sep/2015 
07:48:12] "GET /v2.0/subnets/fefa2447-f5db-4ff6-afa9-a669385cfda4.json 
HTTP/1.1" 200 667 0.008141
2015-09-17 07:48:12.506 26650 INFO neutron.wsgi [-] (26650) accepted 
('10.30.118.4', 60606)
2015-09-17 07:48:12.516 26650 INFO neutron.wsgi 
[req-4470a72a-142b-4a2a-a76a-1dff9b7657da ] 10.30.118.4 - - [17/Sep/2015 
07:48:12] "GET /v2.0/networks/d0047805-804b-4a03-942d-b27590d6aef4.json 
HTTP/1.1" 200 598 0.010351
2015-09-17 07:48:12.518 26650 INFO neutron.wsgi [-] (26650) accepted 
(

[Yahoo-eng-team] [Bug 1496854] [NEW] libvirt: CPU tune bw policy not available in some linux kernels

2015-09-17 Thread sahid
Public bug reported:

In some circumstances, mostly related to latency, the Linux kernel can have
been built with the cgroup configuration option CONFIG_CGROUP_SCHED not
defined, which makes it impossible to boot virtual machines.

We should verify that the cgroup is properly mounted on the host;

  by default, if nothing has been requested, we can just skip the "cpu
shares" default configuration; if a request was explicitly made, we
should raise an exception to let the scheduler try another host.

** Affects: nova
 Importance: Low
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress


** Tags: libvirt

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496854

Title:
  libvirt: CPU tune bw policy not available in some linux kernels

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  In some circumstances, mostly related to latency, the Linux kernel can
  have been built with the cgroup configuration option CONFIG_CGROUP_SCHED
  not defined, which makes it impossible to boot virtual machines.

  We should verify that the cgroup is properly mounted on the host;

  by default, if nothing has been requested, we can just skip the "cpu
  shares" default configuration; if a request was explicitly made, we
  should raise an exception to let the scheduler try another host.
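
  A minimal sketch of the kind of host check implied, assuming a Linux
  host where controllers are listed in /proc/cgroups (the policy applied
  to the result is the proposal above, not merged Nova code):

      # hypothetical: detect whether the CPU cgroup controller is available
      def host_supports_cpu_cgroup():
          try:
              with open('/proc/cgroups') as f:
                  for line in f:
                      if line.startswith('#'):
                          continue
                      # columns: subsys_name hierarchy num_cgroups enabled
                      name, _hier, _num, enabled = line.split()[:4]
                      if name == 'cpu':
                          return enabled == '1'
          except IOError:
              pass
          return False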

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1496854/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496873] [NEW] nova-compute leaves an open file descriptor after failed check for direct IO support

2015-09-17 Thread Roman Podoliaka
Public bug reported:

On my Kilo environment I noticed that nova-compute has an open file
descriptor of a deleted file:

nova-compute 14204 nova   21w   REG  252,00
117440706 /var/lib/nova/instances/.directio.test (deleted)

According to logs the check if FS supports direct IO failed:

2015-09-15 22:11:33.171 14204 DEBUG nova.virt.libvirt.driver [req-
f11861ed-bcd8-46cb-8d0b-b7736cce7f80 59d099e0cc1c44e991a02a68dbbb1815
5e6f6da2b2d74a108ccdead3b30f0bcf - - -] Path '/var/lib/nova/instances'
does not support direct I/O: '[Errno 22] Invalid argument'
_supports_direct_io /usr/lib/python2.7/dist-
packages/nova/virt/libvirt/driver.py:2588

Looks like nova-compute doesn't clean up the file descriptors properly,
which means the file will persist until nova-compute is stopped.

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: In Progress


** Tags: libvirt

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Tags added: libvirt

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496873

Title:
  nova-compute leaves an open file descriptor after failed check for
  direct IO support

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  On my Kilo environment I noticed that nova-compute has an open file
  descriptor of a deleted file:

  nova-compute 14204 nova   21w   REG  252,00
  117440706 /var/lib/nova/instances/.directio.test (deleted)

  According to logs the check if FS supports direct IO failed:

  2015-09-15 22:11:33.171 14204 DEBUG nova.virt.libvirt.driver [req-
  f11861ed-bcd8-46cb-8d0b-b7736cce7f80 59d099e0cc1c44e991a02a68dbbb1815
  5e6f6da2b2d74a108ccdead3b30f0bcf - - -] Path '/var/lib/nova/instances'
  does not support direct I/O: '[Errno 22] Invalid argument'
  _supports_direct_io /usr/lib/python2.7/dist-
  packages/nova/virt/libvirt/driver.py:2588

  Looks like nova-compute doesn't clean up the file descriptors
  properly, which means the file will persist until nova-compute is
  stopped.
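
  A minimal sketch of a leak-free version of the probe, using the same
  O_DIRECT test-file idea the driver uses (this illustrates the cleanup
  pattern, not the exact upstream patch):

      import os

      def supports_direct_io(dirpath):
          testfile = os.path.join(dirpath, '.directio.test')
          fd = None
          try:
              # on filesystems without O_DIRECT support (e.g. tmpfs) this
              # open typically fails with EINVAL, as in the log above
              fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
              return True
          except OSError:
              return False
          finally:
              # always release the descriptor and remove the probe file
              if fd is not None:
                  os.close(fd)
              try:
                  os.unlink(testfile)
              except OSError:
                  pass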

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1496873/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488619] Re: Neutron API reports both routers in active state for L3 HA

2015-09-17 Thread Abhishek Chanda
This turned out to be a split-brain problem due to intermittent connectivity
in the underlying physical network. Rebooting the boxes fixed it.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488619

Title:
  Neutron API reports both routers in active state for L3 HA

Status in neutron:
  Invalid

Bug description:
  I am running Kilo with L3 HA. Here is what I see:

  # neutron --insecure --os-project-domain-name default --os-user-domain-name 
default l3-agent-list-hosting-router test-router
  
  +--------------------------------------+------+----------------+-------+----------+
  | id                                   | host | admin_state_up | alive | ha_state |
  +--------------------------------------+------+----------------+-------+----------+
  | 7dc44513-256a-4d51-b77d-8da6125928ca | one  | True           | :-)   | active   |
  | c91b437a-e300-4b08-8118-b226ae68cc04 | two  | True           | :-)   | active   |
  +--------------------------------------+------+----------------+-------+----------+

  My relevant neutron config on both nodes is
  l3_ha = True
  max_l3_agents_per_router = 2
  min_l3_agents_per_router = 2

  We checked the following:
  1. IP monitor is running on both nodes
  2. Keepalived can talk between the nodes, we see packets on the HA interface

  What are we missing?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488619/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496844] [NEW] restarting dhcp-agent shouldn't rebuild all dhcp drivers even if nothing changed

2015-09-17 Thread ZongKai LI
Public bug reported:

When the dhcp-agent is restarted, it will restart all dhcp drivers even if no
configurations or networks changed.
That is not a big deal at a small scale.
But at a big scale, e.g. when a dhcp-agent handles hundreds of networks, it is
quite a big cost to rebuild all these dhcp drivers.

In our environment, a dhcp-agent with more than 300 networks bound to it
takes more than 2 minutes to fully recover and resume work, even though
nothing changed before we restarted that dhcp-agent.

It would be better to work in a "lazy" mode, i.e. only restart dhcp drivers
when their configuration files need to be changed.

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496844

Title:
  restarting dhcp-agent shouldn't rebuild all dhcp drivers even if nothing
  changed

Status in neutron:
  New

Bug description:
  When the dhcp-agent is restarted, it will restart all dhcp drivers even
  if no configurations or networks changed.
  That is not a big deal at a small scale.
  But at a big scale, e.g. when a dhcp-agent handles hundreds of networks,
  it is quite a big cost to rebuild all these dhcp drivers.

  In our environment, a dhcp-agent with more than 300 networks bound to it
  takes more than 2 minutes to fully recover and resume work, even though
  nothing changed before we restarted that dhcp-agent.

  It would be better to work in a "lazy" mode, i.e. only restart dhcp
  drivers when their configuration files need to be changed.
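
  A minimal sketch of the proposed "lazy" behaviour, assuming the agent
  can render a network's dnsmasq configuration to a string and compare it
  with what is already on disk (the helper and driver API are
  hypothetical):

      # hypothetical: only restart a dhcp driver whose config actually changed
      import os

      def restart_if_changed(driver, conf_path, new_conf_text):
          old_conf_text = None
          if os.path.exists(conf_path):
              with open(conf_path) as f:
                  old_conf_text = f.read()
          if old_conf_text == new_conf_text:
              return False            # identical config: leave dnsmasq running
          with open(conf_path, 'w') as f:
              f.write(new_conf_text)
          driver.restart()            # driver interface assumed
          return True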

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496844/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496650] Re: requirement conflict on Babel

2015-09-17 Thread Tony Breeds
Fixed with: https://review.openstack.org/224429 which has now merged.

** Changed in: oslo.utils
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1496650

Title:
  requirement conflict on Babel

Status in Keystone:
  Invalid
Status in oslo.utils:
  Fix Released

Bug description:
  message:"pkg_resources.ContextualVersionConflict: (Babel 2.0
  (/usr/local/lib/python2.7/dist-packages),
  Requirement.parse('Babel<=1.3,>=1.3'), set(['oslo.utils']))"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwicGtnX3Jlc291cmNlcy5Db250ZXh0dWFsVmVyc2lvbkNvbmZsaWN0OiAoQmFiZWwgMi4wICgvdXNyL2xvY2FsL2xpYi9weXRob24yLjcvZGlzdC1wYWNrYWdlcyksIFJlcXVpcmVtZW50LnBhcnNlKCdCYWJlbDw9MS4zLD49MS4zJyksIHNldChbJ29zbG8udXRpbHMnXSkpXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDI0NTE2OTc3ODl9

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1496650/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496813] [NEW] l3 ha network is not steady

2015-09-17 Thread deng kailang
Public bug reported:

The HA network often becomes 'unpingable', which leads to the loss of VRRP
advertisements from the other l3 agent. Therefore multiple master l3
agents may exist.

Reproducing steps:
In an HA network, ping between the ha-xxx interfaces in two qrouter
namespaces. The ping will fail every once in a while.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496813

Title:
  l3 ha network is not steady

Status in neutron:
  New

Bug description:
  The HA network often becomes 'unpingable', which leads to the loss of
  VRRP advertisements from the other l3 agent. Therefore multiple master
  l3 agents may exist.

  Reproducing steps:
  In an HA network, ping between the ha-xxx interfaces in two qrouter
  namespaces. The ping will fail every once in a while.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496813/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496802] [NEW] stop duplicating schema for common attributes

2015-09-17 Thread Kevin Benton
Public bug reported:

Several features have come up that can apply to multiple types of
objects (qos, port security enabled, rbac, timestamps, tags) and each
time we implement them we either duplicate schema across a bunch of
tables or we have a single table with no referential integrity.

We should add a new table that all of the Neutron resources relate to
and then have new features that apply to multiple object types relate to
the new neutron resources table. This prevents duplication of schema
while maintaining referential integrity.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496802

Title:
  stop duplicating schema for common attributes

Status in neutron:
  New

Bug description:
  Several features have come up that can apply to multiple types of
  objects (qos, port security enabled, rbac, timestamps, tags) and each
  time we implement them we either duplicate schema across a bunch of
  tables or we have a single table with no referential integrity.

  We should add a new table that all of the Neutron resources relate to
  and then have new features that apply to multiple object types relate
  to the new neutron resources table. This prevents duplication of
  schema while maintaining referential integrity.
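
  A minimal sketch of the proposed shape, using SQLAlchemy declarative
  models (table and column names are illustrative, not the eventual
  Neutron implementation):

      from sqlalchemy import Column, ForeignKey, Integer, String
      from sqlalchemy.ext.declarative import declarative_base

      Base = declarative_base()

      class NeutronResource(Base):
          """One row per Neutron object; ports, networks, etc. point here."""
          __tablename__ = 'neutron_resources'
          id = Column(Integer, primary_key=True)
          resource_type = Column(String(32), nullable=False)  # 'port', ...

      class Tag(Base):
          """A cross-resource feature relates to the shared table once."""
          __tablename__ = 'tags'
          id = Column(Integer, primary_key=True)
          resource_id = Column(Integer, ForeignKey('neutron_resources.id'),
                               nullable=False)
          tag = Column(String(60), nullable=False)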

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496802/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489442] Re: Invalid order of volumes with adding a volume in boot operation

2015-09-17 Thread Nikola Đipanov
Moving this to Invalid - but please feel free to move back if you
disagree.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1489442

Title:
  Invalid order of volumes with adding a volume in boot operation

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  If an image has several volumes in its bdm, and a user adds one more volume
  for the boot operation, then the new volume is not just appended to the
  volume list, but becomes the second device. This can lead to problems if
  the image root device has software whose settings point to other
  volumes.

  For example:
  1 the image is a snapshot of a volume backed instance which had vda and vdb 
volumes
  2 the instance had an sql server, which used both vda and vdb for its database
  3 if a user runs a new instance from the image, either device names are
restored (with xen), or they're reassigned (libvirt) to the same names, because
the order of devices passed to libvirt is the same as it was for the original
instance
  4 if a user runs a new instance, adding a new volume, the volume list becomes 
vda, new, vdb
  5 in this case libvirt reassigns device names to vda=vda, new=vdb, vdb=vdc
  6 as a result the sql server will not find its data on vdb

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1489442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496787] [NEW] If qos service_plugin is enabled, but ml2 extension driver is not, api requests attaching policies to ports or nets will fail with an ugly exception

2015-09-17 Thread Miguel Angel Ajo
Public bug reported:

$ neutron port-update b0885ae1-487b-40bc-8fc0-32432a21e39d --qos-policy 
bw-limiter
Request Failed: internal server error while processing your request.

Neutron Exception:

DEBUG neutron.api.v2.base [req-218cddfd-2b7d-4050-91db-251c139029b2 admin 
85b859134de2428d94f6ee910dc545d8] Request body: {u'port': {u'qos_policy_id': 
u'0ee1c673-5671-40ca-b55f-4cd4bbd999c7'}} from (pid=18237) prepare_request_body 
/opt/stack/neutron/neutron/api/v2/base.py:645
2015-09-15 01:05:26.022 ERROR neutron.api.v2.resource 
[req-218cddfd-2b7d-4050-91db-251c139029b2 admin 
85b859134de2428d94f6ee910dc545d8] update failed
2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in wrapper
2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in 
__exit__
2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in wrapper
2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 613, in update
2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource obj = 
obj_updater(request.context, id, **kwargs)
2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 1158, in update_port
2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource 
original_port[qos_consts.QOS_POLICY_ID] !=
2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource KeyError: 'qos_policy_id'
2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource
2015-09-15 01:05:26.026 INFO neutron.wsgi 
[req-218cddfd-2b7d-4050-91db-251c139029b2 admin 
85b859134de2428d94f6ee910dc545d8] 172.16.175.128 - - [15/Sep/2015 01:05:26] 
"PUT /v2.0/ports/b0885ae1-487b-40bc-8fc0-32432a21e39d.json HTTP/1.1" 500 383 
0.084317
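
A minimal sketch of the defensive check that would avoid the KeyError,
treating a missing qos_policy_id as "unchanged" (illustrative; the real fix
may instead be to require the ml2 qos extension driver):

    # hypothetical guard for the ml2 update_port path
    QOS_POLICY_ID = 'qos_policy_id'

    def qos_policy_changed(original_port, updated_port):
        # .get() avoids the KeyError seen above when the qos extension
        # driver is not loaded and the key was never populated
        return (original_port.get(QOS_POLICY_ID) !=
                updated_port.get(QOS_POLICY_ID))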

** Affects: neutron
 Importance: Undecided
 Assignee: Miguel Angel Ajo (mangelajo)
 Status: Confirmed


** Tags: qos

** Changed in: neutron
 Assignee: (unassigned) => Miguel Angel Ajo (mangelajo)

** Changed in: neutron
   Status: New => Confirmed

** Tags added: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496787

Title:
  If qos service_plugin is enabled, but ml2 extension driver is not, api
  requests attaching policies to ports or nets will fail with an ugly
  exception

Status in neutron:
  Confirmed

Bug description:
  $ neutron port-update b0885ae1-487b-40bc-8fc0-32432a21e39d --qos-policy 
bw-limiter
  Request Failed: internal server error while processing your request.

  Neutron Exception:

  DEBUG neutron.api.v2.base [req-218cddfd-2b7d-4050-91db-251c139029b2 admin 
85b859134de2428d94f6ee910dc545d8] Request body: {u'port': {u'qos_policy_id': 
u'0ee1c673-5671-40ca-b55f-4cd4bbd999c7'}} from (pid=18237) prepare_request_body 
/opt/stack/neutron/neutron/api/v2/base.py:645
  2015-09-15 01:05:26.022 ERROR neutron.api.v2.resource 
[req-218cddfd-2b7d-4050-91db-251c139029b2 admin 
85b859134de2428d94f6ee910dc545d8] update failed
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in wrapper
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in 
__exit__
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in wrapper
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
  2015-09-15 01:05:26.022 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 613, in update
  2015-09-15 01:0

[Yahoo-eng-team] [Bug 1496774] [NEW] metadata-agent fills up log-files with " TypeError: 'NoneType' object has no attribute '__getitem__'"

2015-09-17 Thread L00nix
Public bug reported:

Since Monday, the log files of the metadata-agent have been filling up with the
following error / trace:

2015-09-17 11:06:00.187 8277 ERROR oslo_messaging._drivers.impl_rabbit [-] 
Failed to consume message from queue: 'NoneType' object has no attribute 
'__getitem__'
2015-09-17 11:06:00.187 8276 ERROR oslo_messaging._drivers.impl_rabbit [-] 
Failed to consume message from queue: 'NoneType' object has no attribute 
'__getitem__'
2015-09-17 11:06:00.186 8267 ERROR oslo_messaging._drivers.amqpdriver [-] 
Failed to process incoming message, retrying...
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver Traceback 
(most recent call last):
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
197, in poll
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver 
self.conn.consume(limit=1)
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 
1172, in consume
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver 
six.next(it)
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 
1083, in iterconsume
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver 
error_callback=_error_callback)
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 
870, in ensure
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver ret, 
channel = autoretry_method()
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver   File 
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 453, in _ensured
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver 
return fun(*args, **kwargs)
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver   File 
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 520, in __call__^C
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver 
self.revive(create_channel())
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver   File 
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 251, in channel
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver chan 
= self.transport.create_channel(self.connection)^C
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver   File 
"/usr/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 91, in 
create_channel
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver 
return connection.channel()
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver   File 
"/usr/lib/python2.7/site-packages/amqp/connection.py", line 279, in channel
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver 
return self.channels[channel_id]
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver 
TypeError: 'NoneType' object has no attribute '__getitem__'
2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver

There was no change in the configuration beforehand; I'm not exactly sure
why it does that, as the configuration looks fine.

I also have a 100% load on the server.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: metadata typeerror

** Description changed:

  Since Monday, the log files of the metadata-agent have been filling up
  with the following error / trace:
  
  2015-09-17 11:06:00.187 8277 ERROR oslo_messaging._drivers.impl_rabbit [-] 
Failed to consume message from queue: 'NoneType' object has no attribute 
'__getitem__'
  2015-09-17 11:06:00.187 8276 ERROR oslo_messaging._drivers.impl_rabbit [-] 
Failed to consume message from queue: 'NoneType' object has no attribute 
'__getitem__'
  2015-09-17 11:06:00.186 8267 ERROR oslo_messaging._drivers.amqpdriver [-] 
Failed to process incoming message, retrying...
  2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver 
Traceback (most recent call last):
  2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
197, in poll
  2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver 
self.conn.consume(limit=1)
  2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 
1172, in consume
  2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver 
six.next(it)
  2015-09-17 11:06:00.186 8267 TRACE oslo_messaging._drivers.amqpdriver   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 
1

[Yahoo-eng-team] [Bug 1493026] Re: location-add return error when add new location to 'queued' image

2015-09-17 Thread wangxiyuan
We don't have any good way to solve it now; it needs discussion.

** Changed in: glance
   Status: In Progress => Opinion

** Changed in: glance
 Assignee: wangxiyuan (wangxiyuan) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1493026

Title:
  location-add return error when add new location to 'queued' image

Status in Glance:
  Opinion

Bug description:
  Reproduce:

  1. create a new image:
  glance image-create --disk-format qcow2 --container-format bare --name test

  suppose the image's id is 1

  2.add location to the image:

  glance location-add 1 --url 

  Result: the client raises an error: 'The administrator has disabled
  API access to image locations'.

  But when using the REST API to reproduce step 2, it runs fine and the
  image's status is changed to 'active'.
  According to the code:
  https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L735-L750
  I think we should add a check in glance like the client does.
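
  A minimal sketch of the server-side check this suggests, mirroring the
  client's behaviour (the hook point and config access are illustrative):

      # hypothetical: enforce the same restriction the client applies
      import webob.exc

      def check_location_api_enabled(conf):
          if not conf.show_multiple_locations:
              raise webob.exc.HTTPForbidden(
                  'The administrator has disabled API access to image '
                  'locations')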

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1493026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495884] Re: image's backend file was deleted while it was still being used.

2015-09-17 Thread wangxiyuan
https://blueprints.launchpad.net/glance/+spec/add-location-manage-
mechanism

** Changed in: glance
   Status: New => Invalid

** Changed in: glance
 Assignee: wangxiyuan (wangxiyuan) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1495884

Title:
  image's backend file was deleted while it was still being used.

Status in Glance:
  Invalid

Bug description:
  Reproduce:

  1. create an image A, add backend 'X' to its location.

  2. create another image B, add the same backend 'X' to its location.

  3. show the two images; their statuses are both 'active'.

  4. delete image A. After this step, the backend X will be deleted as
  well.

  5. show the image B. Its status is still 'active'. Obviously, image
  B's backend file 'X' has been deleted, so B can't be used anymore.

  So IMHO, before we delete the backend file, we should check
  whether the file is being used. If it is, we should not delete it
  directly.
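
  A minimal sketch of that reference check, assuming some way to count how
  many image locations still point at a given store URL (the repository
  helper and store API below are hypothetical):

      # hypothetical: delete the backing data only when nothing references it
      def safe_delete_backing_data(store, url, location_repo):
          remaining = location_repo.count_locations_with_url(url)
          if remaining > 0:
              # another image still references this data; keep the file
              return False
          store.delete(url)
          return True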

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1495884/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486178] Re: Boot from image (creates a new volume) Doesn't allow specification of volume-type

2015-09-17 Thread Soumitra Karmakar
** Changed in: horizon
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1486178

Title:
  Boot from image (creates a new volume) Doesn't allow specification of
  volume-type

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Horizon has a cool feature that wrap cinder create-volume from image
  and novas boot from volume all up into a single command under launch
  instance.  The only missing thing here is the ability to specify
  volume-type when doing this.  There should probably be a follow up
  that let's a user specify the cinder volume-type when using this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1486178/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495876] Re: nova.conf - configuration options in OpenStack Configuration Reference  - kilo

2015-09-17 Thread Markus Zoeller (markus_z)
As Shuquan Huang said in comment #2, this has to be fixed in nova =>
Invalid for "openstack-manuals".

** Tags added: config-options documentation

** Changed in: openstack-manuals
   Status: In Progress => Invalid

** Changed in: openstack-manuals
 Assignee: Shuquan Huang (shuquan) => (unassigned)

** Tags added: low-hanging-fruit

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1495876

Title:
  nova.conf - configuration options in OpenStack Configuration Reference
  - kilo

Status in OpenStack Compute (nova):
  In Progress
Status in openstack-manuals:
  Invalid

Bug description:
  
  ---
  Built: 2015-08-27T08:45:20+00:00
  git SHA: f062eb42bbc512386ac572b5b830fb4e21c72a41
  URL: http://docs.openstack.org/kilo/config-reference/content/list-of-compute-config-options.html
  source File: file:/home/jenkins/workspace/openstack-manuals-tox-doc-publishdocs/doc/config-reference/compute/section_compute-options-reference.xml
  xml:id: list-of-compute-config-options

  iscsi_use_multipath = False   (BoolOpt) Use multipath connection of the iSCSI volume

  The above description is incorrect and very misleading.

  Actually, this option applies to both FC and iSCSI volumes.
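
  A minimal nova.conf excerpt for the option in question (the [libvirt]
  section placement matches the kilo-era option group as far as I know;
  verify against your release, and whether FC multipath actually engages
  depends on the volume driver):

  [libvirt]
  # Despite the iSCSI-specific name, multipath applies to FC volumes too.
  iscsi_use_multipath = True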

  
  Thanks
  Peter

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1495876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496725] [NEW] Trace when displaying event tabs in heat section

2015-09-17 Thread Kevin Tibi
Public bug reported:

python-heatclient-0.6.0-1.el7.noarch
python-django-appconf-0.6-1.el7.noarch
python-django-1.8.3-1.el7.noarch
python-django-compressor-1.4-3.el7.noarch
python-django-horizon-2015.1.0-7.el7.noarch
python-django-bash-completion-1.8.3-1.el7.noarch
python-django-pyscss-1.0.5-2.el7.noarch
python-django-openstack-auth-1.2.0-4.el7.noarch


2015-09-17 07:31:13,120 10568 ERROR django.request Internal Server Error: /dashboard/project/stacks/stack/ff039c79-26cf-4a74-b34a-b059c678e795/
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 132, in get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in dec
    return view_func(request, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 52, in dec
    return view_func(request, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in dec
    return view_func(request, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 84, in dec
    return view_func(request, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 71, in view
    return self.dispatch(request, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 89, in dispatch
    return handler(request, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/horizon/tabs/views.py", line 72, in get
    return self.handle_tabbed_response(context["tab_group"], context)
  File "/usr/lib/python2.7/site-packages/horizon/tabs/views.py", line 65, in handle_tabbed_response
    return http.HttpResponse(tab_group.selected.render())
  File "/usr/lib/python2.7/site-packages/horizon/tabs/base.py", line 323, in render
    return render_to_string(self.get_template_name(self.request), context)
  File "/usr/lib/python2.7/site-packages/django/template/loader.py", line 99, in render_to_string
    return template.render(context, request)
  File "/usr/lib/python2.7/site-packages/django/template/backends/django.py", line 74, in render
    return self.template.render(context)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 209, in render
    return self._render(context)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 201, in _render
    return self.nodelist.render(context)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 903, in render
    bit = self.render_node(node, context)
  File "/usr/lib/python2.7/site-packages/django/template/debug.py", line 79, in render_node
    return node.render(context)
  File "/usr/lib/python2.7/site-packages/django/template/debug.py", line 89, in render
    output = self.filter_expression.resolve(context)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 647, in resolve
    obj = self.var.resolve(context)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 787, in resolve
    value = self._resolve_lookup(context)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 847, in _resolve_lookup
    current = current()
  File "/usr/lib/python2.7/site-packages/horizon/tables/base.py", line 1276, in render
    return table_template.render(context)
  File "/usr/lib/python2.7/site-packages/django/template/backends/django.py", line 74, in render
    return self.template.render(context)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 209, in render
    return self._render(context)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 201, in _render
    return self.nodelist.render(context)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 903, in render
    bit = self.render_node(node, context)
  File "/usr/lib/python2.7/site-packages/django/template/debug.py", line 79, in render_node
    return node.render(context)
  File "/usr/lib/python2.7/site-packages/django/template/defaulttags.py", line 576, in render
    return self.nodelist.render(context)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 903, in render
    bit = self.render_node(node, context)
  File "/usr/lib/python2.7/site-packages/django/template/debug.py", line 79, in render_node
    return node.render(context)
  File "/usr/lib/python2.7/site-packages/django/template/defaulttags.py", line 576, in render
    return self.nodelist.render(context)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 903, in render
    bit = self.render_node(node, context)
  File "/usr/lib/python2.7/site-packages/django/template/debug.py", line 79, in render_node
    return node.render(context)
  File "/usr/lib/python2.7/site-packages/django/template/loader_tags.py", line 56, in render
    result = self.nodelist.render(context)
  File "/usr/lib/python2.

[Yahoo-eng-team] [Bug 1496705] [NEW] RFE: a common description field for Neutron resources

2015-09-17 Thread Li Ma
Public bug reported:

The user story is: Users can see human-readable security group
descriptions in Horizon because the security group model contains a
description field, but no such field exists for security group rules.
This makes it very confusing for users who have to manage complex
security groups.

I agree that one could encode descriptions as tags, but the problem I
see is that API consumers (Horizon, users) would have to agree on some
common encoding. For example, to expose a security group rule
description in Horizon, Horizon would have to apply and read tags like
'description:SSH Access for Mallory'.

With a tags-based implementation, if a user wants the description for a
security group rule via the API, they have to get the security group,
then filter the tags according to whatever format Horizon chose to
encode the description as.

This is in contrast to getting the description of a security group: Get
the security group and access the description attribute.
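
A rough sketch of what the two access patterns look like to an API
consumer (python-neutronclient calls; the 'tags' attribute on rules and
the 'description:' prefix are the hypothetical workaround convention,
not an existing API):

from neutronclient.v2_0 import client

neutron = client.Client(username='demo', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')
sg_id = 'SECURITY-GROUP-UUID'
rule_id = 'SECURITY-GROUP-RULE-UUID'

# First-class field: one attribute access on the resource.
sg = neutron.show_security_group(sg_id)['security_group']
sg_description = sg['description']

# Tags workaround: fetch the rule, then parse a private convention.
rule = neutron.show_security_group_rule(rule_id)['security_group_rule']
rule_description = next(
    (tag.split(':', 1)[1] for tag in rule.get('tags', [])
     if tag.startswith('description:')),
    None)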

I think that resource tags are great, but this seems like a
non-intuitive workaround for a specific data model problem: Security
Groups have descriptions, but Security Group Rules do not.

A discussion is under way in the mailing list:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074046.html

** Affects: neutron
 Importance: Undecided
 Assignee: Li Ma (nick-ma-z)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) => Li Ma (nick-ma-z)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496705

Title:
  RFE: a common description field for Neutron resources

Status in neutron:
  New

Bug description:
  The user story is: Users can see human-readable security group
  descriptions in Horizon because the security group model contains a
  description field, but no such field exists for security group rules.
  This makes it very confusing for users who have to manage complex
  security groups.

  I agree that one could encode descriptions as tags, but the problem I
  see is that API consumers (Horizon, users) would have to agree on some
  common encoding. For example, to expose a security group rule
  description in Horizon, Horizon would have to apply and read tags like
  'description:SSH Access for Mallory'.

  With a tags-based implementation, if a user wants the description for a
  security group rule via the API, they have to get the security group,
  then filter the tags according to whatever format Horizon chose to
  encode the description as.

  This is in contrast to getting the description of a security group: Get
  the security group and access the description attribute.

  I think that resource tags are great, but this seems like a
  non-intuitive workaround for a specific data model problem: Security
  Groups have descriptions, but Security Group Rules do not.

  A discussion is under way in the mailing list:
  http://lists.openstack.org/pipermail/openstack-dev/2015-September/074046.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496705/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp