[Yahoo-eng-team] [Bug 1332428] [NEW] ovs agent references bridge mac before creation

2014-06-20 Thread Kevin Benton
Public bug reported:

The OVS agent references the MAC address of the integration bridge
before the code that verifies the bridge exists has run.
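
A minimal sketch of the safer ordering, assuming hypothetical bridge helpers
(this is not the actual ovs_lib API):

    # sketch: verify/create br-int before reading its MAC address
    def setup_integration_br(int_br):
        if not int_br.bridge_exists():        # hypothetical helper
            int_br.create()                   # make sure br-int exists first
        return int_br.get_local_port_mac()    # read the MAC only after creation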

** Affects: neutron
 Importance: Undecided
 Status: Invalid

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332428

Title:
  ovs agent references bridge mac before creation

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  The OVS agent references the mac address of the integration bridge
  before the code is run to verify that it exists.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332428/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332450] [NEW] br-tun lost ports/flows if openvswitch restart

2014-06-20 Thread Chengli Xu
Public bug reported:

When openvswitch restarts, the ovs agent resets br-tun, losing all tunnel
network related ports/flows and breaking all tunnel networks.
If l2 population is used, we could maintain all l2 population fdb entries
locally and recreate the ports/flows; if not, setting tunnel_sync = True works.
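
A sketch of one way to detect the restart, assuming a missing default flow on
br-tun signals that openvswitch was restarted (flow_exists() is a hypothetical
helper, not the actual agent API):

    # sketch: if the table-0 default flow is gone, openvswitch restarted,
    # so rebuild br-tun and force a tunnel re-sync with the plugin
    def check_ovs_restart(agent):
        if not agent.tun_br.flow_exists(table=0):
            agent.setup_tunnel_br()       # recreate patch ports and default flows
            agent.tunnel_sync = True      # re-learn tunnel endpoints from the plugin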

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332450

Title:
  br-tun lost ports/flows if openvswitch restart

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When openvswitch restarts, the ovs agent resets br-tun, losing all tunnel
network related ports/flows and breaking all tunnel networks.
  If l2 population is used, we could maintain all l2 population fdb entries
locally and recreate the ports/flows; if not, setting tunnel_sync = True works.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332450/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1331271] Re: NoReverseMatch: u"'horizon" is not a registered namespace

2014-06-20 Thread Julie Pichon
Eep! I'm sorry Hu, I read Matt's comments too quickly and thought he was
also the person who filed the bug. Thank you for the heads-up (and for
filing the bug in the first place!), I'll remove the duplicate
relationship.

** This bug is no longer a duplicate of bug 131
   Horizon error on accessing pseudo-folder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1331271

Title:
  NoReverseMatch: u"'horizon" is not a registered namespace

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
   CentOS 6.5 with an Icehouse OpenStack controller running the latest stable
yum version of Horizon. When I click on the pseudo-folder of a container, the
following error is displayed:
  NoReverseMatch: u"'horizon" is not a registered namespace

  similar problem to
   https://bugs.launchpad.net/horizon/+bug/131
  https://bugs.launchpad.net/horizon/+bug/1296273

  but the affected source file is:
  
/usr/share/openstack-dashboard/openstack_dashboard/dashboards/project/containers/templates/containers/index.html

  Update index.html lines 16 and 21 as follows; the pseudo-folder is then
displayed correctly:
  line 16, change it from
  {{ container_name }} : /
  to
  {{ container_name }} : /

  line 21, change it from
  {{ folder.0 }} /
  to
  {{ folder.0 }} /

  
  BTW: the pseudo-folder can't be deleted, which is related to another
existing bug:
 https://bugs.launchpad.net/horizon/+bug/1317016
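
  An illustration only (not Horizon's actual template code) of the namespaced
lookup involved; in a configured Django environment, reverse() raises the
NoReverseMatch above whenever the 'horizon' namespace is not registered:

    # Django URL reversal with a namespaced name; raises NoReverseMatch
    # ("'horizon' is not a registered namespace") if the namespace is missing
    from django.core.urlresolvers import reverse

    url = reverse('horizon:project:containers:index')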

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1331271/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308918] Re: If both keystone token and session have timed out, a user is invited to login twice in a row

2014-06-20 Thread Yves-Gwenael Bourhis
** Also affects: django-openstack-auth
   Importance: Undecided
   Status: New

** Changed in: django-openstack-auth
 Assignee: (unassigned) => Yves-Gwenael Bourhis (yves-gwenael-bourhis)

** Changed in: django-openstack-auth
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1308918

Title:
  If both keystone token and session have timed out, a user is invited
  to login twice in a row

Status in Django OpenStack Auth:
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  If both the keystone token and the session have expired, the user is asked
to log in twice.
  This is because the session timestamp is written only if a user is logged
in and authenticated.
  When a user has timed out both in session and keystone token validity, the
user is asked to log in, then the timestamp is checked, and the user is logged
out again and asked to log in a second time.

  Steps to reproduce:
  
  - set in /etc/keystone/keystone.conf under the [token] section::

  expiration=10

  - set in openstack_dashboard/local/local_settings::

  SESSION_TIMEOUT = 10

  - wait for both the session and the token to time out (> 10 seconds :-) )

  You are asked to login twice in a row.
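
  A simplified sketch of the timeout check described above (names are
illustrative, not the actual django-openstack-auth code):

    # sketch: the timestamp is refreshed only for authenticated users, so
    # after the first re-login the stale timestamp logs the user out once more
    import time

    SESSION_TIMEOUT = 10  # seconds, matching the reproduction settings

    def timed_out(request):
        last = request.session.get('last_activity')
        if last is not None and time.time() - last > SESSION_TIMEOUT:
            return True  # log the user out and redirect to the login page
        if request.user.is_authenticated():
            request.session['last_activity'] = int(time.time())
        return False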

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1308918/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308918] [NEW] If both keystone token and session have timed out, a user is invited to login twice in a row

2014-06-20 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

If both the keystone token and the session have expired, the user is asked to
log in twice.
This is because the session timestamp is written only if a user is logged in
and authenticated.
When a user has timed out both in session and keystone token validity, the
user is asked to log in, then the timestamp is checked, and the user is logged
out again and asked to log in a second time.

Steps to reproduce:

- set in /etc/keystone/keystone.conf under the [token] section::

expiration=10

- set in openstack_dashboard/local/local_settings::

SESSION_TIMEOUT = 10

- wait for both the session and the token to time out (> 10 seconds :-) )

You are asked to login twice in a row.

** Affects: horizon
 Importance: Undecided
 Assignee: Yves-Gwenael Bourhis (yves-gwenael-bourhis)
 Status: In Progress

-- 
If both keystone token and session have timed out, a user is invited to login
twice in a row
https://bugs.launchpad.net/bugs/1308918
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332475] [NEW] neutron should give an error if we give Segmentation_id beyond specified range

2014-06-20 Thread sandeep mane
Public bug reported:


Problem with the segment ID range: the segmentation ID should belong to the
configured range. Command to reproduce:

neutron net-create demo_net --provider:network_type gre
--provider:Segmentation_id 2000

Right now it allows the network to be created.

expected:
neutron should give an error if we give a Segmentation_id beyond the
specified range.
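
A sketch of the expected validation, assuming configured tunnel ID ranges like
those of the GRE type driver (names are illustrative):

    # sketch: reject segmentation IDs outside the configured tunnel ranges
    TUNNEL_ID_RANGES = [(1, 1000)]  # example value from plugin configuration

    def validate_segmentation_id(segmentation_id):
        if not any(lo <= segmentation_id <= hi
                   for lo, hi in TUNNEL_ID_RANGES):
            raise ValueError("segmentation_id %s is outside the configured "
                             "ranges" % segmentation_id)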

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332475

Title:
  neutron should give an error if we give Segmentation_id beyond
  specified range

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
  Problem with the segment ID range: the segmentation ID should belong to
the configured range. Command to reproduce:

  neutron net-create demo_net --provider:network_type gre
  --provider:Segmentation_id 2000

  Right now it allows the network to be created.

  expected:
  neutron should give an error if we give a Segmentation_id beyond the
specified range.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332475/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332478] [NEW] ofagent is broken after oslo.messaging changes

2014-06-20 Thread YAMAMOTO Takashi
Public bug reported:

It needs to call rpc.init().

ERROR ryu.lib.hub [-] hub: uncaught exception: Traceback (most recent call last):
  File "/opt/stack/ryu/ryu/lib/hub.py", line 52, in _launch
    func(*args, **kwargs)
  File "/opt/stack/neutron/neutron/plugins/ofagent/agent/ofa_neutron_agent.py", line 155, in _agent_main
    agent = OFANeutronAgent(ryuapp, **agent_config)
  File "/opt/stack/neutron/neutron/plugins/ofagent/agent/ofa_neutron_agent.py", line 228, in __init__
    self.setup_rpc()
  File "/opt/stack/neutron/neutron/plugins/ofagent/agent/ofa_neutron_agent.py", line 283, in setup_rpc
    self.plugin_rpc = OFAPluginApi(topics.PLUGIN)
  File "/opt/stack/neutron/neutron/agent/rpc.py", line 88, in __init__
    topic=topic, default_version=self.BASE_RPC_API_VERSION)
  File "/opt/stack/neutron/neutron/common/rpc_compat.py", line 39, in __init__
    self._client = n_rpc.get_client(target, version_cap=version_cap)
  File "/opt/stack/neutron/neutron/common/rpc.py", line 81, in get_client
    assert TRANSPORT is not None
AssertionError
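
A minimal sketch of the kind of fix, assuming the agent entry point can
initialize the RPC layer before any RPC clients are constructed (config
handling elided):

    # sketch: set up neutron.common.rpc.TRANSPORT before OFANeutronAgent
    # builds its plugin RPC client, so get_client()'s assert holds
    from oslo.config import cfg
    from neutron.common import rpc as n_rpc

    def _agent_main(ryuapp):
        n_rpc.init(cfg.CONF)  # initializes the oslo.messaging transport
        # ... then instantiate OFANeutronAgent as before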

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress


** Tags: openflowagent

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332478

Title:
  ofagent is broken after oslo.messaging changes

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  It needs to call rpc.init().

  ERROR ryu.lib.hub [-] hub: uncaught exception: Traceback (most recent call last):
    File "/opt/stack/ryu/ryu/lib/hub.py", line 52, in _launch
      func(*args, **kwargs)
    File "/opt/stack/neutron/neutron/plugins/ofagent/agent/ofa_neutron_agent.py", line 155, in _agent_main
      agent = OFANeutronAgent(ryuapp, **agent_config)
    File "/opt/stack/neutron/neutron/plugins/ofagent/agent/ofa_neutron_agent.py", line 228, in __init__
      self.setup_rpc()
    File "/opt/stack/neutron/neutron/plugins/ofagent/agent/ofa_neutron_agent.py", line 283, in setup_rpc
      self.plugin_rpc = OFAPluginApi(topics.PLUGIN)
    File "/opt/stack/neutron/neutron/agent/rpc.py", line 88, in __init__
      topic=topic, default_version=self.BASE_RPC_API_VERSION)
    File "/opt/stack/neutron/neutron/common/rpc_compat.py", line 39, in __init__
      self._client = n_rpc.get_client(target, version_cap=version_cap)
    File "/opt/stack/neutron/neutron/common/rpc.py", line 81, in get_client
      assert TRANSPORT is not None
  AssertionError

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332478/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332481] [NEW] Creating and deleting the vm port multiple times the allocated ips are not used automatically

2014-06-20 Thread Ashish Kumar Gupta
Public bug reported:

DESCRIPTION: Creating and deleting the vm port multiple times, the
allocated IPs are not reused while spawning the new vm port.

Steps to Reproduce:
1. Create a network.
2. Create a subnet 10.10.1.0/24.
3. Spawn a vm and make sure the vm got an IP (say 10.10.1.2).
4. Delete the vm.
5. Spawn another vm and make sure it gets an IP.
6. Make sure the previously allocated IP is not listed in the port list.

Actual Results: The VM gets the next IP (10.10.1.3), not the IP that was
released.

Expected Results: The VM should get the released IPs.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332481

Title:
  Creating and deleting the vm port multiple times the allocated ips are
  not used automatically

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  DESCRIPTION: Creating and deleting the vm port multiple times, the
  allocated IPs are not reused while spawning the new vm port.

  Steps to Reproduce:
  1. Create a network.
  2. Create a subnet 10.10.1.0/24.
  3. Spawn a vm and make sure the vm got an IP (say 10.10.1.2).
  4. Delete the vm.
  5. Spawn another vm and make sure it gets an IP.
  6. Make sure the previously allocated IP is not listed in the port list.

  Actual Results: The VM gets the next IP (10.10.1.3), not the IP that was
  released.

  Expected Results: The VM should get the released IPs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332481/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332481] Re: Creating and deleting the vm port multiple times the allocated ips are not used automatically

2014-06-20 Thread Rossella Sblendido
This is the expected behavior. This change https://review.openstack.org/58017
modified the way IPs are recycled.
Instead of being recycled immediately after the release of a port, the complex
operation of rebuilding the availability table is performed only when the
table is exhausted.
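
A toy illustration of that recycling strategy (not the actual IPAM code): IPs
are handed out from an availability list, and released IPs are folded back in
only when the list is exhausted:

    # sketch: allocate from the availability "table"; rebuild it from
    # released IPs only once it is empty
    available = ['10.10.1.2', '10.10.1.3']
    released = []

    def allocate():
        if not available:                  # table exhausted: rebuild it
            available.extend(sorted(released))
            del released[:]
        return available.pop(0)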

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332481

Title:
  Creating and deleting the vm port multiple times the allocated ips are
  not used automatically

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  DESCRIPTION: Creating and deleting the vm port multiple times, the
  allocated IPs are not reused while spawning the new vm port.

  Steps to Reproduce:
  1. Create a network.
  2. Create a subnet 10.10.1.0/24.
  3. Spawn a vm and make sure the vm got an IP (say 10.10.1.2).
  4. Delete the vm.
  5. Spawn another vm and make sure it gets an IP.
  6. Make sure the previously allocated IP is not listed in the port list.

  Actual Results: The VM gets the next IP (10.10.1.3), not the IP that was
  released.

  Expected Results: The VM should get the released IPs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332481/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332500] [NEW] Lock wait timeout inserting row into routers table

2014-06-20 Thread Eugene Nikanorov
Public bug reported:

Another instance of the famous bug:

http://logs.openstack.org/87/99187/1/check/check-tempest-dsvm-neutron-
full/978727a/logs/screen-q-svc.txt.gz?level=TRACE#_2014-06-20_03_10_37_447

OperationalError: (OperationalError) (1205, 'Lock wait timeout exceeded;
try restarting transaction') 'INSERT INTO routers (tenant_id, id, name,
status, admin_state_up, gw_port_id, enable_snat) VALUES (%s, %s, %s, %s,
%s, %s, %s)' ('91f1077240284b0a85e9bb8a02712926',
'04003809-f94c-42ef-ad5a-d84dd0b0a086',
'AttachVolumeV3Test-488061603-router', 'ACTIVE', 1, None, 1)

** Affects: neutron
 Importance: High
 Assignee: Eugene Nikanorov (enikanorov)
 Status: Confirmed


** Tags: db gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332500

Title:
  Lock wait timeout inserting row into routers table

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  Another instance of the famous bug:

  http://logs.openstack.org/87/99187/1/check/check-tempest-dsvm-neutron-
  full/978727a/logs/screen-q-svc.txt.gz?level=TRACE#_2014-06-20_03_10_37_447

  OperationalError: (OperationalError) (1205, 'Lock wait timeout exceeded;
  try restarting transaction') 'INSERT INTO routers
  (tenant_id, id, name, status, admin_state_up, gw_port_id, enable_snat)
  VALUES (%s, %s, %s, %s, %s, %s, %s)'
  ('91f1077240284b0a85e9bb8a02712926',
  '04003809-f94c-42ef-ad5a-d84dd0b0a086',
  'AttachVolumeV3Test-488061603-router', 'ACTIVE', 1, None, 1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332500/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332502] [NEW] Intermittent UT failure for VMware 'adv' plugin

2014-06-20 Thread Salvatore Orlando
Public bug reported:

Failure occurs in
neutron.tests.unit.vmware.vshield.test_vpnaas_plugin.TestVpnPlugin.test_create_vpnservice_with_invalid_route

logstash query:
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTFwiIEFORCBtZXNzYWdlOlwibmV1dHJvbi50ZXN0cy51bml0LnZtd2FyZS52c2hpZWxkLnRlc3RfdnBuYWFzX3BsdWdpbi5UZXN0VnBuUGx1Z2luLnRlc3RfY3JlYXRlX3ZwbnNlcnZpY2Vfd2l0aF9pbnZhbGlkX3JvdXRlclwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAzMjYzNzAzNzI4fQ==

Error log:
http://logs.openstack.org/47/101447/3/check/gate-neutron-python26/69da3af/console.html

Introduced by: offending patch not yet known - could be a latent problem
accidentally uncovered by other patches.

Hits in past 7 days: 9 (1 in gate queue)


Setting priority to High, as for anything affecting gate stability.

** Affects: neutron
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: gate-failure vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332502

Title:
  Intermittent UT failure for VMware 'adv' plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Failure occurs in
  
neutron.tests.unit.vmware.vshield.test_vpnaas_plugin.TestVpnPlugin.test_create_vpnservice_with_invalid_route

  logstash query:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTFwiIEFORCBtZXNzYWdlOlwibmV1dHJvbi50ZXN0cy51bml0LnZtd2FyZS52c2hpZWxkLnRlc3RfdnBuYWFzX3BsdWdpbi5UZXN0VnBuUGx1Z2luLnRlc3RfY3JlYXRlX3ZwbnNlcnZpY2Vfd2l0aF9pbnZhbGlkX3JvdXRlclwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAzMjYzNzAzNzI4fQ==

  Error log:
  
http://logs.openstack.org/47/101447/3/check/gate-neutron-python26/69da3af/console.html

  Introduced by: offending patch not yet known - could be a latent
  problem accidentally uncovered by other patches.

  Hits in past 7 days: 9 (1 in gate queue)

  
  Setting priority to High, as for anything affecting gate stability.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332502/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1313746] Re: Non-admins can create public images

2014-06-20 Thread Thierry Carrez
** Changed in: glance
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1313746

Title:
  Non-admins can create public images

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  Glance documentation (
  http://docs.openstack.org/developer/glance/glanceapi.html ) states:

  > Note Use of the is_public parameter is restricted to admin users.
  For all other users it will be ignored.

  However, this is not true on Havana, i.e. with Horizon:

  - user a uploads an image with is_public checkbox **checked**,
  - user b logs in and can see that image in /project/images_and_snapshots/

  It is reproducible with the command line of course:

  vagrant@precise64:/opt/stack/horizon$ glance --os-username aa --os-password aa --os-tenant-name aa --os-auth-url http://127.0.0.1:5000/v2.0 image-create --is-public True --name hacked --disk-format qcow2 --container-format bare --file cirros-0.3.2-x86_64-disk.img
  +------------------+--------------------------------------+
  | Property         | Value                                |
  +------------------+--------------------------------------+
  | checksum         | 64d7c1cd2b6f60c92c14662941cb7913     |
  | container_format | bare                                 |
  | created_at       | 2014-04-28T14:10:07                  |
  | deleted          | False                                |
  | deleted_at       | None                                 |
  | disk_format      | qcow2                                |
  | id               | 8f843998-d69f-42ee-90a2-24031aa8fe5b |
  | is_public        | True                                 |
  | min_disk         | 0                                    |
  | min_ram          | 0                                    |
  | name             | hacked                               |
  | owner            | c8df7a80acd44967a757ad1e346f3340     |
  | protected        | False                                |
  | size             | 13167616                             |
  | status           | active                               |
  | updated_at       | 2014-04-28T14:10:07                  |
  +------------------+--------------------------------------+
  vagrant@precise64:/opt/stack/horizon$ glance --os-username bb --os-password bb --os-tenant-name bb --os-auth-url http://127.0.0.1:5000/v2.0 image-list
  +--------------------------------------+---------------------------------+-------------+------------------+----------+--------+
  | ID                                   | Name                            | Disk Format | Container Format | Size     | Status |
  +--------------------------------------+---------------------------------+-------------+------------------+----------+--------+
  | d6b482f7-7922-46f2-b501-11d18fb20f41 | cirros-0.3.1-x86_64-uec         | ami         | ami              | 25165824 | active |
  | 5579dc39-06ba-4fa8-a9d9-b26d66e8a0b0 | cirros-0.3.1-x86_64-uec-kernel  | aki         | aki              | 4955792  | active |
  | bdfc240a-2c6b-4511-bf72-0b5a9453a24a | cirros-0.3.1-x86_64-uec-ramdisk | ari         | ari              | 3714968  | active |
  | 8f843998-d69f-42ee-90a2-24031aa8fe5b | hacked                          | qcow2       | bare             | 13167616 | active |
  +--------------------------------------+---------------------------------+-------------+------------------+----------+--------+

  Potentially, a malicious user could upload an image with a backdoor
  and make it available to the public.
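
  As an operator-side mitigation sketch: Glance supports a publicize_image
rule in policy.json that controls who may set is_public=True (shown here
admin-only; verify the knob exists in your release):

    {
        "publicize_image": "role:admin"
    }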

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1313746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332536] [NEW] ImageBusy: error removing image. when evacuate on ceph backed volume

2014-06-20 Thread WingWu
Public bug reported:

icehouse
Ceph as a backend for glance and cinder

When evacuating an instance from a failed host to another, the command
fails.

2014-06-20 20:21:36.430 12362 ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: error removing image
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 88, in wrapped
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher     payload)
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 71, in wrapped
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 327, in decorated_function
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher     function(self, context, *args, **kwargs)
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 303, in decorated_function
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher     e, sys.exc_info())
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 290, in decorated_function
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2251, in terminate_instance
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher     do_terminate_instance(instance, bdms)
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py", line 249, in inner
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher     return f(*args, **kwargs)
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2249, in do_terminate_instance
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher     self._set_instance_error_state(context, instance['uuid'])
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2239, in do_terminate_instance
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher     reservations=reservations)
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/hooks.py", line 103, in inner
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher     rv = f(*args, **kwargs)
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2209, in _delete_instance
2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher     user_id=user_id)
2014-06-20 20:

[Yahoo-eng-team] [Bug 1290468] Re: AttributeError: 'NoneType' object has no attribute '_sa_instance_state'

2014-06-20 Thread Roman Podoliaka
** Also affects: mos
   Importance: Undecided
   Status: New

** Tags added: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290468

Title:
  AttributeError: 'NoneType' object has no attribute
  '_sa_instance_state'

Status in Cinder:
  New
Status in Fuel: OpenStack installer that works:
  In Progress
Status in Fuel for OpenStack 5.0.x series:
  Triaged
Status in Fuel for OpenStack 5.1.x series:
  In Progress
Status in Mirantis OpenStack:
  New
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  Dan Smith was seeing this in some nova testing:

  http://paste.openstack.org/show/73043/

  Looking at logstash, this is showing up a lot since 3/7 which is when
  lazy translation was enabled in Cinder:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQXR0cmlidXRlRXJyb3I6IFxcJ05vbmVUeXBlXFwnIG9iamVjdCBoYXMgbm8gYXR0cmlidXRlIFxcJ19zYV9pbnN0YW5jZV9zdGF0ZVxcJ1wiIEFORCBmaWxlbmFtZTpsb2dzKnNjcmVlbi1jLWFwaS50eHQiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTQ0NzI5Nzg4MDV9

  
https://review.openstack.org/#/q/status:merged+project:openstack/cinder+branch:master+topic:bug/1280826,n,z

  Logstash shows a 99% success rate when this shows up; it can't stay like
  this, but right now it looks to be more cosmetic than functional.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1290468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332542] [NEW] boto connection request timeout in InstanceRunTest.test_compute_with_volumes

2014-06-20 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/42/100342/6/check/check-tempest-dsvm-
full/b115354/console.html

2014-06-18 14:45:55.825 | tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_compute_with_volumes[gate,smoke]
2014-06-18 14:45:55.825 | ---
2014-06-18 14:45:55.825 | 
2014-06-18 14:45:55.825 | Captured traceback:
2014-06-18 14:45:55.825 | ~~~
2014-06-18 14:45:55.825 | Traceback (most recent call last):
2014-06-18 14:45:55.825 |   File "tempest/thirdparty/boto/test_ec2_instance_run.py", line 264, in test_compute_with_volumes
2014-06-18 14:45:55.825 |     address = self.ec2_client.allocate_address()
2014-06-18 14:45:55.825 |   File "tempest/services/botoclients.py", line 79, in func
2014-06-18 14:45:55.826 |     return getattr(conn, name)(*args, **kwargs)
2014-06-18 14:45:55.826 |   File "/usr/local/lib/python2.7/dist-packages/boto/ec2/connection.py", line 1794, in allocate_address
2014-06-18 14:45:55.826 |     return self.get_object('AllocateAddress', params, Address, verb='POST')
2014-06-18 14:45:55.826 |   File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1163, in get_object
2014-06-18 14:45:55.826 |     response = self.make_request(action, params, path, verb)
2014-06-18 14:45:55.826 |   File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1089, in make_request
2014-06-18 14:45:55.826 |     return self._mexe(http_request)
2014-06-18 14:45:55.826 |   File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 923, in _mexe
2014-06-18 14:45:55.826 |     response = connection.getresponse()
2014-06-18 14:45:55.826 |   File "/usr/lib/python2.7/httplib.py", line 1030, in getresponse
2014-06-18 14:45:55.826 |     response.begin()
2014-06-18 14:45:55.827 |   File "/usr/lib/python2.7/httplib.py", line 407, in begin
2014-06-18 14:45:55.827 |     version, status, reason = self._read_status()
2014-06-18 14:45:55.827 |   File "/usr/lib/python2.7/httplib.py", line 365, in _read_status
2014-06-18 14:45:55.827 |     line = self.fp.readline()
2014-06-18 14:45:55.827 |   File "/usr/lib/python2.7/socket.py", line 430, in readline
2014-06-18 14:45:55.827 |     data = recv(1)
2014-06-18 14:45:55.827 | timeout: timed out
2014-06-18 14:45:55.827 | 
2014-06-18 14:45:55.827 | 
2014-06-18 14:45:55.827 | Captured pythonlogging:
2014-06-18 14:45:55.828 | ~~~
2014-06-18 14:45:55.828 | 2014-06-18 14:31:17,639 Instance booted - state: pending
2014-06-18 14:45:55.828 | 2014-06-18 14:31:18,999 Volume created - status: creating
2014-06-18 14:45:55.828 | 2014-06-18 14:31:30,154 State transition "pending" ==> "running" 11 second
2014-06-18 14:45:55.828 | 2014-06-18 14:31:30,154 Instance now running - state: running

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ec2 testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1332542

Title:
  boto connection request timeout in
  InstanceRunTest.test_compute_with_volumes

Status in OpenStack Compute (Nova):
  New

Bug description:
  http://logs.openstack.org/42/100342/6/check/check-tempest-dsvm-
  full/b115354/console.html

  2014-06-18 14:45:55.825 | tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_compute_with_volumes[gate,smoke]
  2014-06-18 14:45:55.825 | ---
  2014-06-18 14:45:55.825 | 
  2014-06-18 14:45:55.825 | Captured traceback:
  2014-06-18 14:45:55.825 | ~~~
  2014-06-18 14:45:55.825 | Traceback (most recent call last):
  2014-06-18 14:45:55.825 |   File "tempest/thirdparty/boto/test_ec2_instance_run.py", line 264, in test_compute_with_volumes
  2014-06-18 14:45:55.825 |     address = self.ec2_client.allocate_address()
  2014-06-18 14:45:55.825 |   File "tempest/services/botoclients.py", line 79, in func
  2014-06-18 14:45:55.826 |     return getattr(conn, name)(*args, **kwargs)
  2014-06-18 14:45:55.826 |   File "/usr/local/lib/python2.7/dist-packages/boto/ec2/connection.py", line 1794, in allocate_address
  2014-06-18 14:45:55.826 |     return self.get_object('AllocateAddress', params, Address, verb='POST')
  2014-06-18 14:45:55.826 |   File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1163, in get_object
  2014-06-18 14:45:55.826 |     response = self.make_request(action, params, path, verb)
  2014-06-18 14:45:55.826 |   File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1089, in make_request
  2014-06-18 14:45:55.826 |     return self._mexe(http_request)
  2014-06-18 14:4

[Yahoo-eng-team] [Bug 1332558] [NEW] Instance snapshot is created with wrong image format

2014-06-20 Thread Ankit Agrawal
Public bug reported:

When you boot an instance from an image in RAW format and then take an
instance snapshot, the snapshot is created in QCOW2 format.

Steps to reproduce using Horizon:

1. Go to Project --> Compute --> Images and click on 'Create Image'.
2. Create an image with 'Raw' format.
3. Go to Project --> Compute --> Instances and click on 'Launch Instance'.
4. Boot an instance by selecting the 'Boot from image' as source and newly 
created raw image from the Image Name drop-down.
5. Click on 'Create Snapshot' to create a snapshot of this instance.
6. You will be redirected to image list page where you will see the format of 
snapshot as 'QCOW2'.

Ideally the snapshot format should be the same as that of its source image.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ntt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1332558

Title:
  Instance snapshot is created with wrong image format

Status in OpenStack Compute (Nova):
  New

Bug description:
  When you boot an instance from an image in RAW format and then take an
  instance snapshot, the snapshot is created in QCOW2 format.

  Steps to reproduce using Horizon:

  1. Go to Project --> Compute --> Images and click on 'Create Image'.
  2. Create an image with 'Raw' format.
  3. Go to Project --> Compute --> Instances and click on 'Launch Instance'.
  4. Boot an instance by selecting the 'Boot from image' as source and newly 
created raw image from the Image Name drop-down.
  5. Click on 'Create Snapshot' to create a snapshot of this instance.
  6. You will be redirected to image list page where you will see the format of 
snapshot as 'QCOW2'.

  Ideally the snapshot format should be the same as that of its source image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1332558/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332475] Re: neutron should give an error if we give Segmentation_id beyond specified range

2014-06-20 Thread Eugene Nikanorov
I think that was done specifically to allow admins to allocate networks
with specific segmentation_ids, while regular tenants work with IDs from
the allocated ranges.

I believe that is intentional, and such out-of-range allocation should stay
restricted to admins.

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332475

Title:
  neutron should give an error if we give Segmentation_id beyond
  specified range

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  
  Problem with the segment ID range: the segmentation ID should belong to
the configured range. Command to reproduce:

  neutron net-create demo_net --provider:network_type gre
  --provider:Segmentation_id 2000

  Right now it allows the network to be created.

  expected:
  neutron should give an error if we give a Segmentation_id beyond the
specified range.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332475/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332560] [NEW] Invalid image is created without any format when instance snapshot is created.

2014-06-20 Thread Ankit Agrawal
Public bug reported:

When an instance is created using boot from volume, boot from
image (creates a new volume), or boot from volume snapshot (creates a new
volume), an image of 0 bytes is created which is neither public nor
protected and has no format defined.

Steps to reproduce using Horizon:

Case 1: Boot instance from volume

1. Go to Project --> Compute --> Volumes and click on 'Create Volume'.
2. Provide volume name, Size and select 'Image' from 'Volume Source' drop-down 
and an image from 'Use image as a source' drop-down.
3. Click on 'Create Volume' button and a bootable volume will be created.
4. Go to Project --> Compute --> Instances and click on 'Launch Instance' 
button from the top right.
5. Provide instance name, select 'Boot from volume' as 'Instance Boot Source' 
and a bootable volume as 'volume'.
6. Click on 'Launch' button.
7. You will be redirected to instance list page.
8. Click on 'Create Snapshot' to create a snapshot of this instance.
9. In this case a new image of 0 bytes and a volume snapshot will be created.
10. You will be redirected to image list where you will see a new image created 
with 0 bytes.
11. Neither this image is public nor protected.
12. No format is defined for this image.

Case 2: Boot instance from image(creates a new volume)

1. Go to Project --> Compute --> Instances and click on 'Launch Instance' 
button from the top right.
2. Provide instance name, select 'Boot from image(creates a new volume)' as 
'Instance Boot Source' and an image from the drop-down.
3. Click on 'Launch' button.
4. A new volume will be created and will be attached to this instance.
5. You will be redirected to instance list page.
6. Click on 'Create Snapshot' to create a snapshot of this instance.
7. In this case a new image of 0 bytes and a volume snapshot will be created.
8. You will be redirected to image list where you will see a new image created 
with 0 bytes.
9. Neither this image is public nor protected.
10. No format is defined for this image.

Case 3: Boot instance from volume snapshot(creates a new volume)

1. Go to Project --> Compute --> volumes and click on 'Create Volume' button.
2. Create a bootable volume same as case 1.
3. From volumes list page you will see 'more' button with respect to each 
volume.
4. Click on 'more', a drop-down will open, select 'Create Snapshot' from this 
drop-down.
5. A snapshot will be created for this volume.
6. Go to Project --> Compute --> Instances and click on 'Launch Instance' 
button from the top right.
7. Provide instance name, select 'Boot from image(creates a new volume)' as 
'Instance Boot Source' and an image from the drop-down.
8. Click on 'Launch' button.
9. A new volume will be created and will be attached to this instance.
10. You will be redirected to instance list page.
11. Click on 'Create Snapshot' to create a snapshot of this instance.
12. In this case a new image of 0 bytes and a volume snapshot will be created.
13. You will be redirected to image list where you will see a new image created 
with 0 bytes.
14. Neither this image is public nor protected.
15. No format is defined for this image.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ntt

** Description changed:

- When an instance is created using boot from volume, boot from image(creates a 
new volume) or
- boot from volume snapshot(creates a new volume), an image of 0 bytes is 
created which is
- neither public nor protected and have no format defined.
+ When an instance is created using boot from volume, boot from
+ image(creates a new volume) or boot from volume snapshot(creates a new
+ volume), an image of 0 bytes is created which is neither public nor
+ protected and have no format defined.
  
  Steps to reproduce using Horizon:
  
  Case 1: Boot instance from volume
  
  1. Go to Project --> Compute --> Volumes and click on 'Create Volume'.
  2. Provide volume name, Size and select 'Image' from 'Volume Source' 
drop-down and an image from 'Use image as a source' drop-down.
  3. Click on 'Create Volume' button and a bootable volume will be created.
  4. Go to Project --> Compute --> Instances and click on 'Launch Instance' 
button from the top right.
  5. Provide instance name, select 'Boot from volume' as 'Instance Boot Source' 
and a bootable volume as 'volume'.
  6. Click on 'Launch' button.
  7. You will be redirected to instance list page.
  8. Click on 'Create Snapshot' to create a snapshot of this instance.
  9. In this case a new image of 0 bytes and a volume snapshot will be created.
  10. You will be redirected to image list where you will see a new image 
created with 0 bytes.
  11. Neither this image is public nor protected.
  12. No format is defined for this image.
  
  Case 2: Boot instance from image(creates a new volume)
  
  1. Go to Project --> Compute --> Instances and click on 'Launch Instance' 
button from the top right.
  2. Provide instance name, select 'Boot from image(creates a new volume)' as 
'Instance Bo

[Yahoo-eng-team] [Bug 1331810] Re: R.106-ubuntu-havana- 47-VN creation failed

2014-06-20 Thread Eugene Nikanorov
Neutron received incorrect parameters for the create-network request.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1331810

Title:
  R.106-ubuntu-havana- 47-VN creation failed

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Logs copied to : /cs-shared/shaju/bugs/ubuntu-vn-r106/log

  2014-06-18 13:34:30.869 ERROR [neutron.api.v2.resource] create failed
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 84, in resource
      result = method(request=request, **args)
    File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 405, in create
      obj = obj_creator(request.context, **kwargs)
    File "/usr/lib/python2.7/dist-packages/neutron_plugin_contrail/plugins/opencontrail/contrailplugin.py", line 259, in create_network
      raise e
  NoIdError: Unknown id: Error: oper 2 url /project/ec565202-6b2a-4306-9b15-3057af32a241 body {'exclude_back_refs': True, 'exclude_children': True} response No project object found for id ec565202-6b2a-4306-9b15-3057af32a241

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1331810/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332564] [NEW] migrate_to_ml2.py fails. cannot find table networks

2014-06-20 Thread Atze de Vries
Public bug reported:

During the upgrade from Havana to Icehouse, the database migration script
(migrate_to_ml2.py) fails. I run it with the following options:
python -m neutron.db.migration.migrate_to_ml2 --tunnel-type gre --release
icehouse openvswitch mysql://neutron:X@127.0.0.1/neutron

It returns the following trace:
Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/migrate_to_ml2.py", line 462, in <module>
    main()
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/migrate_to_ml2.py", line 458, in main
    args.vxlan_udp_port)
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/migrate_to_ml2.py", line 138, in __call__
    metadata.create_all(engine)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 2848, in create_all
    tables=tables)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1479, in _run_visitor
    conn._run_visitor(visitorcallable, element, **kwargs)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1122, in _run_visitor
    **kwargs).traverse_single(element)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py", line 122, in traverse_single
    return meth(obj, **kw)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/ddl.py", line 56, in visit_metadata
    collection = [t for t in sql_util.sort_tables(tables)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/util.py", line 39, in sort_tables
    {'foreign_key': visit_foreign_key})
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py", line 258, in traverse
    return traverse_using(iterate(obj, opts), obj, visitors)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py", line 249, in traverse_using
    meth(target)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/util.py", line 30, in visit_foreign_key
    parent_table = fkey.column.table
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 612, in __get__
    obj.__dict__[self.__name__] = result = self.fget(obj)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 1474, in column
    tname)
sqlalchemy.exc.NoReferencedTableError: Foreign key associated with column 'ml2_network_segments.network_id' could not find table 'networks' with which to generate a foreign key to target column 'id'

If I create this table from the mysql console with this:

CREATE TABLE ml2_network_segments (id VARCHAR(36) NOT NULL, network_id
VARCHAR(36) NOT NULL, network_type VARCHAR(32) NOT NULL, physical_network
VARCHAR(64), segmentation_id INT);
ALTER TABLE ml2_network_segments ADD FOREIGN KEY (network_id) REFERENCES
networks (id) ON DELETE cascade;
ALTER TABLE ml2_network_segments ADD PRIMARY KEY (id);

No error is given.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332564

Title:
  migrate_to_ml2.py fails. cannot find table networks

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  During the upgrade from Havana to Icehouse, the database migration script
(migrate_to_ml2.py) fails. I run it with the following options:
  python -m neutron.db.migration.migrate_to_ml2 --tunnel-type gre --release
icehouse openvswitch mysql://neutron:X@127.0.0.1/neutron

  It returns the following trace:
  Traceback (most recent call last):
    File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
      "__main__", fname, loader, pkg_name)
    File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
      exec code in run_globals
    File "/usr/lib/python2.7/dist-packages/neutron/db/migration/migrate_to_ml2.py", line 462, in <module>
      main()
    File "/usr/lib/python2.7/dist-packages/neutron/db/migration/migrate_to_ml2.py", line 458, in main
      args.vxlan_udp_port)
    File "/usr/lib/python2.7/dist-packages/neutron/db/migration/migrate_to_ml2.py", line 138, in __call__
      metadata.create_all(engine)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 2848, in create_all
      tables=tables)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1479, in _run_visitor
      conn._run_visitor(visitorcallable, element, **kwargs)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1122, in _run_visitor
      **kwargs).traverse_single(element)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py", line 122, in traverse_single
      return meth(obj, **kw)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/ddl.py", line 56, in visit_metadata
  collecti

[Yahoo-eng-team] [Bug 1332571] [NEW] Multiple constants define linux interface maximum length

2014-06-20 Thread Cedric Brandily
Public bug reported:

Multiple constants define linux interface maximum length (15):
* neutron.agent.linux.utils: DEVICE_NAME_LEN in get_interface_mac (=15)
* neutron.agent.linux.ip_lib: VETH_MAX_NAME_LENGTH (=15)
* neutron.plugins.common.constants: MAX_DEV_NAME_LEN (=16 incorrect value)

They should be replaced by a single constant equal to 15 to ensure
consistency.
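
A sketch of the consolidation (the module location is illustrative, not the
actual patch):

    # one shared definition; Linux IFNAMSIZ is 16 including the trailing NUL,
    # so interface names may use at most 15 characters
    DEVICE_NAME_MAX_LEN = 15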

** Affects: neutron
 Importance: Undecided
 Assignee: Cedric Brandily (cbrandily)
 Status: In Progress

** Description changed:

- Multiple constants define linux interface maximum length:
- * neutron.agent.linux.utils: DEVICE_NAME_LEN in get_interface_mac
+ Multiple constants define linux interface maximum length (15):
+ * neutron.agent.linux.utils: DEVICE_NAME_LEN in get_interface_mac (=15)
  * neutron.agent.linux.ip_lib: VETH_MAX_NAME_LENGTH (=15)
- * neutron.plugins.common.constants: MAX_DEV_NAME_LEN (=16 ... incorrect 
value, it should be 15)
+ * neutron.plugins.common.constants: MAX_DEV_NAME_LEN (=16 incorrect value)
  
- 
- We should replace them by a unique constant.
+ They should be replaced by a unique constant equals to 15.

** Description changed:

  Multiple constants define linux interface maximum length (15):
  * neutron.agent.linux.utils: DEVICE_NAME_LEN in get_interface_mac (=15)
  * neutron.agent.linux.ip_lib: VETH_MAX_NAME_LENGTH (=15)
  * neutron.plugins.common.constants: MAX_DEV_NAME_LEN (=16 incorrect value)
  
- They should be replaced by a unique constant equals to 15.
+ They should be replaced by a unique constant equals to 15 to ensure
+ consistency.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332571

Title:
  Multiple constants define linux interface maximum length

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Multiple constants define linux interface maximum length (15):
  * neutron.agent.linux.utils: DEVICE_NAME_LEN in get_interface_mac (=15)
  * neutron.agent.linux.ip_lib: VETH_MAX_NAME_LENGTH (=15)
  * neutron.plugins.common.constants: MAX_DEV_NAME_LEN (=16 incorrect value)

  They should be replaced by a single constant equal to 15 to ensure
  consistency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332577] [NEW] Potential db lock timeout in delete_port()

2014-06-20 Thread Ihar Hrachyshka
Public bug reported:

delete_floatingip() calls delete_port() inside a transaction.
delete_port() sends notifications, which may yield to another green thread
and cause a potential db lock timeout.
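
A sketch of the usual remedy (not the actual neutron patch): keep the DB work
inside the transaction and make anything that can yield run only after it
commits:

    # sketch: the notification-triggering delete_port() call is moved outside
    # the transaction so a green-thread yield cannot hold row locks
    def delete_floatingip(self, context, fip_id):
        with context.session.begin(subtransactions=True):
            fip = self._get_floatingip(context, fip_id)  # hypothetical helper
            port_id = fip['floating_port_id']
            context.session.delete(fip)
        self._core_plugin.delete_port(context.elevated(), port_id,
                                      l3_port_check=False)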

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332577

Title:
  Potential db lock timeout in delete_port()

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  delete_floatingip() calls delete_port() inside a transaction.
  delete_port() sends notifications, which may yield to another green
  thread and cause a potential db lock timeout.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332577/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330364] Re: Duplicate Qpid connection is created during Connection initialization

2014-06-20 Thread Mark McLoughlin
Fixed in oslo.messaging by I7618cf3506d857579dc37b338690d05179ba272d

oslo-incubator patch https://review.openstack.org/#/c/100177/

Note - this isn't nearly as bad as the report makes it sound. We simply
create duplicate connection objects, but don't actually set up and tear down
extra sockets with the broker.

** Also affects: oslo
   Importance: Undecided
   Status: New

** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

** Changed in: oslo.messaging
   Status: New => Fix Released

** Changed in: oslo.messaging
   Importance: Undecided => Low

** Changed in: oslo.messaging
 Assignee: (unassigned) => ChangBo Guo(gcb) (glongwave)

** Changed in: oslo
   Importance: Undecided => Low

** Changed in: oslo
   Status: New => Triaged

** Changed in: oslo
   Status: Triaged => In Progress

** Changed in: oslo
 Assignee: (unassigned) => zhu zhu  (zhuzhubj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1330364

Title:
  Duplicate Qpid connection is created during Connection initialization

Status in OpenStack Neutron (virtual network service):
  New
Status in Oslo - a Library of Common OpenStack Code:
  In Progress
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  neutron was still adopting the oslo-incubator code for its rpc modules,
  during the qpid connection setup from amqp get_connection_pool.

  A duplicate connection is created during __init__ (class Connection in
  impl_qpid.py): after the first connection object is created, it is never
  used, and the next step, the reconnect method, creates a new qpid
  connection object and opens it.

  This duplicate creation of the qpid connection needs to be fixed.

  Impl_qpid.py
  class Connection(object):
  """Connection object."""

  pool = None

  def __init__(self, conf, server_params=None):
  if not qpid_messaging:
  raise ImportError("Failed to import qpid.messaging")

  self.session = None
  self.consumers = {}
  self.consumer_thread = None
  self.proxy_callbacks = []
  self.conf = conf

  if server_params and 'hostname' in server_params:
  # NOTE(russellb) This enables support for cast_to_server.
  server_params['qpid_hosts'] = [
  '%s:%d' % (server_params['hostname'],
 server_params.get('port', 5672))
  ]

  params = {
  'qpid_hosts': self.conf.qpid_hosts,
  'username': self.conf.qpid_username,
  'password': self.conf.qpid_password,
  }
  params.update(server_params or {})

  self.brokers = params['qpid_hosts']
  self.username = params['username']
  self.password = params['password']

  brokers_count = len(self.brokers)
  self.next_broker_indices = itertools.cycle(range(brokers_count))

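  # The first connection object created by connection_create() below
  # is never used: reconnect() then creates and opens a second qpid
  # connection, which is the duplication this bug reports.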
  self.connection_create(self.brokers[0])
  self.reconnect()

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1330364/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332597] [NEW] multiple floating ip port delete

2014-06-20 Thread Kevin Fox
Public bug reported:

RDO icehouse on rhel6.

I have an instance that has multiple floating IPs assigned to it. When
I went to delete it, the delete failed. Digging into the logs, I got:

2014-06-20 08:40:58.685 15013 ERROR neutron.api.v2.resource 
[req-26656337-033e-49bd-be64-ded7fd525544 None] delete failed
2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py", line 87, in 
resource
2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 449, in delete
2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/plugins/ml2/plugin.py", line 739, in 
delete_port
2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource 
l3plugin.disassociate_floatingips(context, id)
2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/db/l3_db.py", line 764, in 
disassociate_floatingips
2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource % port_id)
2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource Exception: Multiple 
floating IPs found for port 4eb94a31-f622-4965-ad4c-08805bc76f98

If I pull off one of the floating IPs, the delete continues OK. The
code needs to be updated to support deleting the neutron port when
multiple floating IPs are assigned.
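
A hedged sketch of one possible fix in disassociate_floatingips():
iterate over every matching floating IP instead of expecting exactly one
(names modeled on l3_db.py, not the actual patch):

def disassociate_floatingips(self, context, port_id):
    # Hypothetical sketch: clear the association on every floating IP
    # that points at the port instead of raising when more than one
    # row is found.
    with context.session.begin(subtransactions=True):
        fip_qry = context.session.query(FloatingIP)
        for floating_ip in fip_qry.filter_by(fixed_port_id=port_id):
            floating_ip.update({'fixed_port_id': None,
                                'fixed_ip_address': None,
                                'router_id': None})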

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332597

Title:
  multiple floating ip port delete

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  RDO icehouse on rhel6.

  I have an instance that has multiple floating IPs assigned to it.
  When I went to delete it, the delete failed. Digging into the logs, I
  got:

  2014-06-20 08:40:58.685 15013 ERROR neutron.api.v2.resource 
[req-26656337-033e-49bd-be64-ded7fd525544 None] delete failed
  2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py", line 87, in 
resource
  2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 449, in delete
  2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/plugins/ml2/plugin.py", line 739, in 
delete_port
  2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource 
l3plugin.disassociate_floatingips(context, id)
  2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/db/l3_db.py", line 764, in 
disassociate_floatingips
  2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource % port_id)
  2014-06-20 08:40:58.685 15013 TRACE neutron.api.v2.resource Exception: 
Multiple floating IPs found for port 4eb94a31-f622-4965-ad4c-08805bc76f98

  If I pull off one of the floating IPs, the delete continues OK. The
  code needs to be updated to support deleting the neutron port when
  multiple floating IPs are assigned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332597/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332601] [NEW] Refactor "Authenticates and generates a token" docs for Keystone v3

2014-06-20 Thread Chris Johnson
Public bug reported:

The external docs for the "Authenticates and generates a token" API call
in Keystone v3 are a mess, specifically related to how they lay out the
various requests and their associated responses. There are many ways
that a token can be generated (8 as far as the existing docs reflect),
and there is no indication given to the application developer that if
they submit a token request with a request body of X, they will receive
a response that looks like Y. Pairing each request body with its
corresponding response seems the obvious way to lay out the possible
request/response options.
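
For illustration, a sketch of one such request/response pair: a
password-scoped v3 token request sent with python-requests. The endpoint
and credentials are placeholders.

import json
import requests

# Keystone v3 password authentication, scoped to a project.
body = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "demo",
                    "domain": {"id": "default"},
                    "password": "secret",
                },
            },
        },
        "scope": {
            "project": {
                "name": "demo",
                "domain": {"id": "default"},
            },
        },
    },
}
resp = requests.post("http://keystone.example.com:5000/v3/auth/tokens",
                     data=json.dumps(body),
                     headers={"Content-Type": "application/json"})
# The token arrives in the X-Subject-Token header; the response body
# describes the token's scope, roles and service catalog.
print(resp.headers.get("X-Subject-Token"))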

** Affects: keystone
 Importance: Undecided
 Assignee: Chris Johnson (wchrisjohnson)
 Status: New


** Tags: documentation

** Changed in: keystone
 Assignee: (unassigned) => Chris Johnson (wchrisjohnson)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1332601

Title:
  Refactor "Authenticates and generates a token" docs for Keystone v3

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The external docs for the "Authenticates and generates a token" API
  call in Keystone v3 are a mess, specifically related to how they lay
  out the various requests and their associated responses. There are
  many ways that a token can be generated (8 as far as the existing docs
  reflect), and there is no indication given to the application
  developer that if they submit a token request with a request body of
  X, they will receive a response that looks like Y. Pairing each request
  body with its corresponding response seems the obvious way to lay out
  the possible request/response options.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1332601/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332666] [NEW] Keystone token poor performance. Need index on user_id

2014-06-20 Thread Haneef Ali
Public bug reported:

Keystone middleware calls GET /v2.0/revoked every 10 sec, which
generates a query similar to

SELECT token.id AS token_id, token.expires AS token_expires, token.extra
AS token_extra, token.valid AS token_valid, token.user_id AS
token_user_id, token.trust_id AS token_trust_id  FROM token WHERE
token.valid = 1 AND token.expires > '2014-06-19 23:18:48.196884' AND
token.user_id = 'f6d9db238d084998aaef92ce425edff0';

This query most of the time uses the index "idx_token_expires", which
results in too many rows. Sometimes, depending on the load, using
this index matches more than 5 rows in our performance run, which
is as good as a full table scan.

As all the queries use "user_id" in the WHERE clause, the above query can
be optimized by adding an index on user_id. The same performance run after
adding the index on user_id doesn't show any degradation.

Can you please consider adding this upstream?
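
A sketch of what the requested index could look like as a
SQLAlchemy-based migration; the migration style and index name are
assumptions, not the actual keystone patch.

from sqlalchemy import Index, MetaData, Table

def upgrade(migrate_engine):
    # Add an index on token.user_id so the revocation-list query above
    # no longer falls back to idx_token_expires.
    meta = MetaData(bind=migrate_engine)
    token = Table('token', meta, autoload=True)
    Index('ix_token_user_id', token.c.user_id).create(migrate_engine)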

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1332666

Title:
  Keystone token poor performance. Need index on user_id

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Keystone middleware calls GET /v2.0/revoked every 10 sec, which
  generates a query similar to

  SELECT token.id AS token_id, token.expires AS token_expires,
  token.extra AS token_extra, token.valid AS token_valid, token.user_id
  AS token_user_id, token.trust_id AS token_trust_id  FROM token WHERE
  token.valid = 1 AND token.expires > '2014-06-19 23:18:48.196884' AND
  token.user_id = 'f6d9db238d084998aaef92ce425edff0';

  This query most of the time uses the index "idx_token_expires", which
  results in too many rows. Sometimes, depending on the load, using
  this index matches more than 5 rows in our performance run, which
  is as good as a full table scan.

  As all the queries use "user_id" in the WHERE clause, the above query
  can be optimized by adding an index on user_id. The same performance run
  after adding the index on user_id doesn't show any degradation.

  Can you please consider adding this upstream?

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1332666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332660] Re: Update statistics from computes if RBD is being used

2014-06-20 Thread Andrew Woodward
** Also affects: nova
   Importance: Undecided
   Status: New

** Summary changed:

- Update statistics from computes if RBD is being used
+ Update statistics from computes if RBD ephemeral is used

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1332660

Title:
  Update statistics from computes if RBD ephemeral is used

Status in Fuel: OpenStack installer that works:
  Triaged
Status in OpenStack Compute (Nova):
  New

Bug description:
  If we use RBD as the backend for ephemeral drives, compute nodes still
  calculate their available disk size by looking at the local disks.
  This is the code path through which they do it:

  * nova/compute/manager.py

  def update_available_resource(self, context):
  """See driver.get_available_resource()

  Periodic process that keeps that the compute host's understanding of
  resource availability and usage in sync with the underlying 
hypervisor.

  :param context: security context
  """
  new_resource_tracker_dict = {}
  nodenames = set(self.driver.get_available_nodes())
  for nodename in nodenames:
  rt = self._get_resource_tracker(nodename)
  rt.update_available_resource(context)
  new_resource_tracker_dict[nodename] = rt
  
  def _get_resource_tracker(self, nodename):
  rt = self._resource_tracker_dict.get(nodename)
  if not rt:
  if not self.driver.node_is_available(nodename):
  raise exception.NovaException(
  _("%s is not a valid node managed by this "
"compute host.") % nodename)

  rt = resource_tracker.ResourceTracker(self.host,
self.driver,
nodename)
  self._resource_tracker_dict[nodename] = rt
  return rt

  * nova/compute/resource_tracker.py

  def update_available_resource(self, context):
  """Override in-memory calculations of compute node resource usage 
based
  on data audited from the hypervisor layer.

  Add in resource claims in progress to account for operations that have
  declared a need for resources, but not necessarily retrieved them from
  the hypervisor layer yet.
  """
  LOG.audit(_("Auditing locally available compute resources"))
  resources = self.driver.get_available_resource(self.nodename)

  * nova/virt/libvirt/driver.py

  def get_local_gb_info():
  """Get local storage info of the compute node in GB.

  :returns: A dict containing:
   :total: How big the overall usable filesystem is (in gigabytes)
   :free: How much space is free (in gigabytes)
   :used: How much space is used (in gigabytes)
  """

  if CONF.libvirt_images_type == 'lvm':
  info = libvirt_utils.get_volume_group_info(
   CONF.libvirt_images_volume_group)
  else:
  info = libvirt_utils.get_fs_info(CONF.instances_path)

  for (k, v) in info.iteritems():
  info[k] = v / (1024 ** 3)

  return info

  
  It would be nice to have something like "libvirt_utils.get_rbd_info"
  which could be used when CONF.libvirt_images_type == 'rbd'.
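
  A hedged sketch of what such a helper could look like with the rados
  Python bindings (the function name is the reporter's suggestion, and
  error handling is omitted). The values are returned in bytes, so the
  existing 1024 ** 3 conversion above would still apply:

  import rados

  def get_rbd_info():
      """Hypothetical sketch: report Ceph cluster capacity in bytes."""
      cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
      cluster.connect()
      try:
          # get_cluster_stats() reports sizes in kilobytes.
          stats = cluster.get_cluster_stats()
      finally:
          cluster.shutdown()
      return {'total': stats['kb'] * 1024,
              'free': stats['kb_avail'] * 1024,
              'used': stats['kb_used'] * 1024}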

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1332660/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332347] Re: Add Nuage plugin to the list of core plugins in neutron's setup.cfg

2014-06-20 Thread Sayaji Patil
Sorry for the confusion, the plugin is already there in setup.cfg

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332347

Title:
  Add Nuage plugin to the list of core plugins in neutron's setup.cfg

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Add the Nuage plugin to the list of core plugins in neutron's setup.cfg

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332347/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332688] [NEW] plugin panels cannot add exceptions

2014-06-20 Thread Rob Raymond
Public bug reported:

Currently only dashboard plugins can register which exceptions are treated
as unknown, recoverable or unauthorized by horizon's exception handling.
It seems that panels may also need the ability to register which
exceptions need to be handled.
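
For illustration, a sketch of how such registration might look in a
plugin's enabled file, assuming an ADD_EXCEPTIONS-style pluggable
settings key; the module and exception names are hypothetical.

# _50_mypanel.py - hypothetical enabled file for a panel plugin.
from myplugin import exceptions as my_exc  # hypothetical module

PANEL = 'mypanel'
PANEL_DASHBOARD = 'project'
PANEL_GROUP = 'default'
ADD_PANEL = 'myplugin.panel.MyPanel'

# Exceptions the panel wants horizon's handler to treat specially.
ADD_EXCEPTIONS = {
    'recoverable': (my_exc.ServiceUnavailable,),
    'not_found': (my_exc.NotFound,),
    'unauthorized': (my_exc.Forbidden,),
}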

** Affects: horizon
 Importance: Undecided
 Assignee: Rob Raymond (rob-raymond)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Rob Raymond (rob-raymond)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1332688

Title:
  plugin panels cannot add exceptions

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently only dashboard plugins can register which exceptions are
  treated as unknown, recoverable or unauthorized by horizon's exception
  handling. It seems that panels may also need the ability to register
  which exceptions need to be handled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1332688/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332696] [NEW] Various neutron tests fail with SSH Timeout EOFError

2014-06-20 Thread Terry Wilson
Public bug reported:

check-tempest-dsvm-neutron-2 failing with:

2014-06-19 22:16:03.198 | 2014-06-19 22:07:37,376 Failed to establish 
authenticated ssh connection to cirros@172.24.4.71 ([Errno 111] Connection 
refused). Number attempts: 4. Retry after 5 seconds.
2014-06-19 22:16:03.198 | 2014-06-19 22:07:42,999 Public network 
connectivity check failed
...
2014-06-19 22:16:03.202 | 2014-06-19 22:07:42.999 19995 TRACE 
tempest.scenario.manager t.start_client()
2014-06-19 22:16:03.203 | 2014-06-19 22:07:42.999 19995 TRACE 
tempest.scenario.manager   File 
"/usr/local/lib/python2.7/dist-packages/paramiko/transport.py", line 346, in 
start_client
2014-06-19 22:16:03.203 | 2014-06-19 22:07:42.999 19995 TRACE 
tempest.scenario.manager raise e
2014-06-19 22:16:03.203 | 2014-06-19 22:07:42.999 19995 TRACE 
tempest.scenario.manager EOFError

http://logs.openstack.org/39/101039/3/check/check-tempest-dsvm-
neutron-2/c035243/console.html

http://paste.openstack.org/show/84598/

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332696

Title:
  Various neutron tests fail with SSH Timeout EOFError

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  check-tempest-dsvm-neutron-2 failing with:

  2014-06-19 22:16:03.198 | 2014-06-19 22:07:37,376 Failed to establish 
authenticated ssh connection to cirros@172.24.4.71 ([Errno 111] Connection 
refused). Number attempts: 4. Retry after 5 seconds.
  2014-06-19 22:16:03.198 | 2014-06-19 22:07:42,999 Public network 
connectivity check failed
  ...
  2014-06-19 22:16:03.202 | 2014-06-19 22:07:42.999 19995 TRACE 
tempest.scenario.manager t.start_client()
  2014-06-19 22:16:03.203 | 2014-06-19 22:07:42.999 19995 TRACE 
tempest.scenario.manager   File 
"/usr/local/lib/python2.7/dist-packages/paramiko/transport.py", line 346, in 
start_client
  2014-06-19 22:16:03.203 | 2014-06-19 22:07:42.999 19995 TRACE 
tempest.scenario.manager raise e
  2014-06-19 22:16:03.203 | 2014-06-19 22:07:42.999 19995 TRACE 
tempest.scenario.manager EOFError

  http://logs.openstack.org/39/101039/3/check/check-tempest-dsvm-
  neutron-2/c035243/console.html

  http://paste.openstack.org/show/84598/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332707] [NEW] RadioSelect is not displayed properly in modal or workflow

2014-06-20 Thread Ying Zuo
Public bug reported:

When RadioSelect is used as the widget of a form field within a modal or
workflow, the layout is off (see attached screenshot). It's caused by
the form-field class added in horizon/common/_form_fields.html, which
sets the full width for the input field.
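
A minimal sketch of a form that reproduces this, using a standard Django
RadioSelect widget inside a horizon modal form (field and form names are
illustrative):

from django import forms
from horizon import forms as hz_forms

class ExampleForm(hz_forms.SelfHandlingForm):
    # Rendered in a modal, this field picks up the form-field class from
    # horizon/common/_form_fields.html, whose full-width rule breaks the
    # radio button layout described above.
    choice = forms.ChoiceField(
        widget=forms.RadioSelect,
        choices=[('a', 'Option A'), ('b', 'Option B')])

    def handle(self, request, data):
        return True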

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Screen Shot 2014-06-20 at 1.48.24 PM.png"
   
https://bugs.launchpad.net/bugs/1332707/+attachment/4135883/+files/Screen%20Shot%202014-06-20%20at%201.48.24%20PM.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1332707

Title:
  RadioSelect is not displayed properly in modal or workflow

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When RadioSelect is used as the widget of a form field within a modal
  or workflow, the layout is off (see attached screenshot). It's caused
  by the form-field class added in horizon/common/_form_fields.html,
  which sets the full width for the input field.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1332707/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332713] [NEW] Cisco: Send network and subnet UUID during subnet create

2014-06-20 Thread Marga
Public bug reported:

n1kv client is not sending netSegmentName and id fields to the VSM
(controller) in create_ip_pool

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: cisco low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332713

Title:
  Cisco: Send network and subnet UUID during subnet create

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  n1kv client is not sending netSegmentName and id fields to the VSM
  (controller) in create_ip_pool

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1306699] Re: utils.find_resource return resource not depends on query

2014-06-20 Thread Dean Troyer
** Changed in: python-openstackclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1306699

Title:
  utils.find_resource return resource not depends on query

Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Command Line Client:
  Fix Released

Bug description:
  When I have one group, the query /groups?display_name=bogus returns:

  {u'groups': [{u'id': u'6ce42989b4ae41f89323813812ca6208', u'name':
  u'asdf', u'domain_id': u'default', u'links': {u'self':
  u'http://172.20.1.112:5000/v3/groups/6ce42989b4ae41f89323813812ca6208'},
  u'description': u''}], u'links': {u'self':
  u'http://172.20.1.112:5000/v3/groups', u'next': None, u'previous':
  None}}

  This happens even though the group's name does not match the query
  string.

  I have defined only one keystone resource of each type (one user and
  one group); then I try a command which calls the method
  utils.find_resource. This resource is returned by utils.find_resource
  regardless of what was specified as the name or ID.

  Examples:
  (.venv)stack@eu:/opt/stack/python-openstackclient$ openstack user list --role 
--os-identity-api-version 3 non_existing_user

  | 54fbed994dc84616b2118e4fe6b77d8f | Member |

  (.venv)stack@eu:/opt/stack/python-openstackclient$ openstack user list
  --role --os-identity-api-version 3  admin

  | 54fbed994dc84616b2118e4fe6b77d8f | Member |

  openstack group list --role --os-identity-api-version 3 --domain admin
  group_not_exist

  | 54fbed994dc84616b2118e4fe6b77d8f | Member | Default | t_dr  |

  So, utils.find_resource tries to find a user/group with an incorrect
  name but doesn't fail when only one resource of that type exists. It
  should instead raise an exception that it can't find a resource with
  the specified name or ID.
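
  A simplified sketch of the fallback pattern that would produce this
  behavior (assumed shape with illustrative names, not the actual
  openstackclient source):

  def find_resource(manager, name_or_id):
      try:
          return manager.get(name_or_id)        # lookup by ID
      except Exception:
          pass
      try:
          return manager.find(name=name_or_id)  # lookup by name
      except Exception:
          pass
      resources = manager.list()
      if len(resources) == 1:
          # Bug: name_or_id is never compared against this resource, so
          # a lone user/group is returned for any query string.
          return resources[0]
      raise SystemExit('No resource with name or ID %r' % name_or_id)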

  I tried this also with nova and cinder commands; it works correctly
  with those services.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1306699/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332718] [NEW] brocade ml2 mechanism does not support VDX/NOS version greater than 4.1.0

2014-06-20 Thread Shiv Haris
Public bug reported:

In order to support VDX/NOS versions greater than 4.1.0, one NETCONF
template needs to be changed. However, it is necessary to support all
versions of NOS, hence a run-time check of the NOS version should be
made and the appropriate template used.
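
A hedged sketch of the proposed run-time selection; the template bodies,
names and the version tuple are illustrative only:

# Hypothetical sketch of the requested run-time check.
CREATE_VLAN_TEMPLATE_LEGACY = '<config>pre-4.1.0 template</config>'
CREATE_VLAN_TEMPLATE_NEW = '<config>post-4.1.0 template</config>'

def select_netconf_template(nos_version):
    """Pick the NETCONF template matching the switch's NOS version."""
    if nos_version > (4, 1, 0):
        return CREATE_VLAN_TEMPLATE_NEW
    return CREATE_VLAN_TEMPLATE_LEGACY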

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332718

Title:
  brocade ml2 mechanism does not support VDX/NOS version greater than
  4.1.0

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In order to support VDX/NOS versions greater than 4.1.0, one NETCONF
  template needs to be changed. However, it is necessary to support all
  versions of NOS, hence a run-time check of the NOS version should be
  made and the appropriate template used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332718/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332719] [NEW] brocade ml2 mechanism does not support VDX/NOS version greater than 4.1.0

2014-06-20 Thread Shiv Haris
Public bug reported:

In order to support VDX/NOS versions greater than 4.1.0, one NETCONF
template needs to be changed. However, it is necessary to support all
versions of NOS, hence a run-time check of the NOS version should be
made and the appropriate template used.

** Affects: neutron
 Importance: High
 Assignee: Shiv Haris (shh)
 Status: Confirmed


** Tags: brocade icehouse-backport-potential

** Changed in: neutron
 Assignee: (unassigned) => Shiv Haris (shh)

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
Milestone: None => juno-2

** Changed in: neutron
   Importance: Undecided => High

** Tags added: brocade icehouse-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332719

Title:
  brocade ml2 mechanism does not support VDX/NOS version greater than
  4.1.0

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  In order to support VDX/NOS versions greater than 4.1.0, one NETCONF
  template needs to be changed. However, it is necessary to support all
  versions of NOS, hence a run-time check of the NOS version should be
  made and the appropriate template used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332719/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332726] [NEW] Multi-region management from Horizon requires endless authentications

2014-06-20 Thread Sukhdev Kapur
Public bug reported:

I am deploying Horizon to manage multiple regions by updating
AVAILABLE_REGIONS in
/opt/stack/horizon/openstack_dashboard/local/local_settings.py.

I notice that it asks for authentication for each region at login -
which is OK. However, once authenticated for all regions, when I try to
switch to an already authenticated region, it asks for authentication
again regardless. This makes the solution very annoying.

Once authenticated for all regions, it should not require to keep
authenticating.

Is there a workaround for this?
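
For reference, a sketch of the setting in question; it takes
(endpoint URL, display name) tuples, and the endpoints below are
placeholders:

# local_settings.py
AVAILABLE_REGIONS = [
    ('http://region-one.example.com:5000/v2.0', 'RegionOne'),
    ('http://region-two.example.com:5000/v2.0', 'RegionTwo'),
]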

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1332726

Title:
  Multi-region management from Horizon requires endless authentications

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I am deploying Horizon to manage multiple regions by updating
  AVAILABLE_REGIONS in
  /opt/stack/horizon/openstack_dashboard/local/local_settings.py.

  I notice that it asks for authentication for each region at login -
  which is OK. However, once authenticated for all regions, when I try
  to switch to an already authenticated region, it asks for
  authentication again regardless. This makes the solution very annoying.

  Once authenticated for all regions, it should not require to keep
  authenticating.

  Is there a workaround for this?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1332726/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332724] [NEW] neutron.tests.unit.vmware.vshield.test_lbaas_plugin.TestLoadbalancerPlugin.test_create_vip_with_invalid_router often fails in gate

2014-06-20 Thread Ihar Hrachyshka
Public bug reported:

Since yesterday, this unit test often fails in gate. For example, see:
http://logs.openstack.org/60/99760/2/gate/gate-neutron-
python26/203b22d/testr_results.html.gz

It may somehow be related to the recent migration to oslo.messaging that
was merged near the same time.

Trace below:

2014-06-19 00:34:05,228 INFO [neutron.manager] Loading core plugin: 
neutron.plugins.vmware.plugins.service.NsxAdvancedPlugin
2014-06-19 00:34:05,410 INFO [neutron.manager] Service L3_ROUTER_NAT is 
supported by the core plugin
2014-06-19 00:34:05,411 INFO [neutron.manager] Service FIREWALL is 
supported by the core plugin
2014-06-19 00:34:05,411 INFO [neutron.manager] Service LOADBALANCER is 
supported by the core plugin
2014-06-19 00:34:05,411 INFO [neutron.manager] Service VPN is supported by 
the core plugin
2014-06-19 00:34:05,432 INFO [neutron.common.config] Config paste file: 
/home/jenkins/workspace/gate-neutron-python26/neutron/tests/etc/api-paste.ini.test
2014-06-19 00:34:05,475  WARNING [neutron.quota] router is already registered.
2014-06-19 00:34:05,476  WARNING [neutron.quota] floatingip is already 
registered.
2014-06-19 00:34:05,476  WARNING [neutron.quota] pool is already registered.
2014-06-19 00:34:05,477  WARNING [neutron.quota] vip is already registered.
2014-06-19 00:34:05,477  WARNING [neutron.quota] member is already registered.
2014-06-19 00:34:05,477  WARNING [neutron.quota] health_monitor is already 
registered.
2014-06-19 00:34:05,900 INFO [neutron.api.v2.resource] create failed 
(client error): Bad router request: router_id is not provided!
2014-06-19 00:34:06,226 ERROR [neutron.api.v2.resource] create failed
Traceback (most recent call last):
  File "neutron/api/v2/resource.py", line 87, in resource
result = method(request=request, **args)
  File "neutron/api/v2/base.py", line 382, in create
allow_bulk=self._allow_bulk)
  File "neutron/api/v2/base.py", line 651, in prepare_request_body
raise webob.exc.HTTPBadRequest(msg)
HTTPBadRequest: Invalid input for router_id. Reason: 'invalid_router_id' is not 
a valid UUID.
2014-06-19 00:34:07,921 INFO [neutron.api.v2.resource] create failed 
(client error): Bad router request: 
router_id:3a4d22b7-e08a-4d52-a872-ef73fc2ff4ce is not an advanced router!
2014-06-19 00:34:08,624 INFO [NeutronPlugin] NSX plugin does not support 
regular VIF ports on external networks. Port 
206323a1-20a1-480f-8f91-b9a6a90f8206 will be down.
2014-06-19 00:34:09,109 INFO [NeutronPlugin] NSX plugin does not support 
regular VIF ports on external networks. Port 
206323a1-20a1-480f-8f91-b9a6a90f8206 will be down.
2014-06-19 00:34:10,612 INFO [neutron.plugins.vmware.vshield.tasks.tasks] 
TaskManager terminated
}}}

Traceback (most recent call last):
  File "neutron/tests/unit/vmware/vshield/test_lbaas_plugin.py", line 208, in 
test_create_vip_with_invalid_router
self.test_create_vip, router_id=router_id)
  File 
"/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py",
 line 393, in assertRaises
self.assertThat(our_callable, matcher)
  File 
"/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py",
 line 406, in assertThat
raise mismatch_error
MismatchError: > returned None

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332724

Title:
  neutron.tests.unit.vmware.vshield.test_lbaas_plugin.TestLoadbalancerPlugin.test_create_vip_with_invalid_router
  often fails in gate

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Since yesterday, this unit test often fails in gate. For example, see:
  http://logs.openstack.org/60/99760/2/gate/gate-neutron-
  python26/203b22d/testr_results.html.gz

  It may somehow be related to the recent migration to oslo.messaging
  that was merged near the same time.

  Trace below:

  2014-06-19 00:34:05,228 INFO [neutron.manager] Loading core plugin: 
neutron.plugins.vmware.plugins.service.NsxAdvancedPlugin
  2014-06-19 00:34:05,410 INFO [neutron.manager] Service L3_ROUTER_NAT is 
supported by the core plugin
  2014-06-19 00:34:05,411 INFO [neutron.manager] Service FIREWALL is 
supported by the core plugin
  2014-06-19 00:34:05,411 INFO [neutron.manager] Service LOADBALANCER is 
supported by the core plugin
  2014-06-19 00:34:05,411 INFO [neutron.manager] Service VPN is supported 
by the core plugin
  2014-06-19 00:34:05,432 INFO [neutron.common.config] Config paste file: 
/home/jenkins/workspace/gate-neutron-python26/neutron/tests/etc/api-paste.ini.test
  2014-06-19 00:34:05,475  WARNING [neutron.quota] router is already registered.
  2014-06-19 00:34:05,476  WARNING [neutron.quota] floatingip is already 
registered.
  2014-06-19 00:34:05,47

[Yahoo-eng-team] [Bug 1332738] [NEW] inconsistent form field help

2014-06-20 Thread Cindy Lu
Public bug reported:

Right now, we use bootstrap tooltips for some input fields. For example,
if you go to the 'Create an Image' modal and click on the 'Image
Location' text box, a black tooltip reading 'An external (HTTP) URL to
load the image from.' pops up.

However, the current implementation does not allow for tooltips on
dropdown menus or checkboxes.  We should have a help text solution for
all html form elements.

** Affects: horizon
 Importance: Undecided
 Assignee: Cindy Lu (clu-m)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Cindy Lu (clu-m)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1332738

Title:
  inconsistent form field help

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Right now, we use bootstrap tooltips for some input fields. For
  example, if you go to the 'Create an Image' modal and click on the
  'Image Location' text box, a black tooltip reading 'An external
  (HTTP) URL to load the image from.' pops up.

  However, the current implementation does not allow for tooltips on
  dropdown menus or checkboxes.  We should have a help text solution for
  all html form elements.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1332738/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301716] Re: l3-agent and vpn-agent grab messages from the same topic

2014-06-20 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1301716

Title:
  l3-agent and vpn-agent grab messages from the same topic

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  In my testbed running Icehouse with ZeroMQ, I find that l3-agent and
  vpn-agent, running on the same physical server, listen to the same
  topic "l3_agent" at the same time. This is definitely an error and
  leads to l3-agent malfunction.

  The simple workaround is to disable vpn-agent on the l3-agent node
  when you use ZeroMQ as the message queue.

  Another suggestion is to make vpn-agent not inherit from the l3-agent
  class.

  I'm not sure about RabbitMQ or Qpid.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1301716/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1212748] Re: log in for user with first project disabled fails

2014-06-20 Thread David Lyle
** Changed in: django-openstack-auth
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1212748

Title:
  log in for user with first project disabled fails

Status in Django OpenStack Auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  A user with roles on multiple projects may not be able to log in.  If
  the first project in the list of projects returned by keystone is
  disabled, the user will not be able to login and scope the token to a
  project that is enabled.

  In openstack_auth/backend.py, line 129 should be indented one level
  less.

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1212748/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1331406] Re: can not login to Dashboard on devstack

2014-06-20 Thread David Lyle
** Changed in: django-openstack-auth
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1331406

Title:
  can not login to Dashboard on devstack

Status in Django OpenStack Auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  Using fresh master of devstack and fresh masters of all services.

  When I try to log in to the Dashboard, I do not leave the login page
  (as if nothing happened, no error is displayed). Strangely, the screen
  log for the horizon service in devstack displays

  [Wed Jun 18 10:09:46.533780 2014] [:error] [pid 24605:tid 139679844230912] 
INFO:urllib3.connectionpool:Starting new HTTP connection (1): 192.168.122.162
  [Wed Jun 18 10:09:46.535449 2014] [:error] [pid 24605:tid 139679844230912] 
DEBUG:urllib3.connectionpool:Setting read timeout to None
  [Wed Jun 18 10:09:46.623021 2014] [:error] [pid 24605:tid 139679844230912] 
DEBUG:urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 1352
  [Wed Jun 18 10:09:46.633130 2014] [:error] [pid 24605:tid 139679844230912] 
INFO:urllib3.connectionpool:Starting new HTTP connection (1): 192.168.122.162
  [Wed Jun 18 10:09:46.633459 2014] [:error] [pid 24605:tid 139679844230912] 
DEBUG:urllib3.connectionpool:Setting read timeout to None
  [Wed Jun 18 10:09:46.652504 2014] [:error] [pid 24605:tid 139679844230912] 
DEBUG:urllib3.connectionpool:"GET /v2.0/tenants HTTP/1.1" 200 244
  [Wed Jun 18 10:09:46.654398 2014] [:error] [pid 24605:tid 139679844230912] 
INFO:urllib3.connectionpool:Starting new HTTP connection (1): 192.168.122.162
  [Wed Jun 18 10:09:46.654701 2014] [:error] [pid 24605:tid 139679844230912] 
DEBUG:urllib3.connectionpool:Setting read timeout to None
  [Wed Jun 18 10:09:46.750292 2014] [:error] [pid 24605:tid 139679844230912] 
DEBUG:urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 7457
  [Wed Jun 18 10:09:46.753146 2014] [:error] [pid 24605:tid 139679844230912] 
Login successful for user "demo".
  [Wed Jun 18 10:09:46.753354 2014] [:error] [pid 24605:tid 139679844230912] 
DeprecationWarning: check_for_test_cookie is deprecated; ensure your login view 
is CSRF-protected.
  [Wed Jun 18 10:09:46.753396 2014] [:error] [pid 24605:tid 139679844230912] 
WARNING:py.warnings:DeprecationWarning: check_for_test_cookie is deprecated; 
ensure your login view is CSRF-protected.

  
  Note the "Login successful" line. All the OS CLI clients work as
  expected with the same credentials I use to log in.

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1331406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1310427] Re: Anvil Bootstrap fails on RHEL 6.3/6.4

2014-06-20 Thread Launchpad Bug Tracker
[Expired for anvil because there has been no activity for 60 days.]

** Changed in: anvil
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1310427

Title:
  Anvil Bootstrap fails on RHEL 6.3/6.4

Status in ANVIL for forging OpenStack.:
  Expired

Bug description:
  sudo /home/vilobhmm/anvil/smithy --bootstrap -v
  Bootstrapping RHEL 6.4
  Please wait...
  --- Running bootstrap step selinux ---
  Enabling selinux for yum like binaries.
  --- Running bootstrap step epel ---
  Installing epel rpm from 
http://mirrors.kernel.org/fedora-epel/6/i386/epel-release-6-8.noarch.rpm
  --- Running bootstrap step repos ---
  --- Running bootstrap step rpm_packages ---
  Installing system packages:
- gcc
- make
- git
- patch
- python
- python-devel
- libffi-devel
- openssl-devel
- createrepo
- yum-utils
- rpm-build
- python-pip
- python-virtualenv
- python-argparse
- python-six
  Please wait...
  Failed installing!
  Bootstrapping RHEL 6.4 failed.

  ---

  sudo /home/vilobhmm/anvil/smithy --bootstrap
  Password: 
  Bootstrapping RHEL 6.3
  Please wait...
  --- Running bootstrap step selinux ---
  Enabling selinux for yum like binaries.
  --- Running bootstrap step epel ---
  Installing epel rpm from 
http://mirrors.kernel.org/fedora-epel/6/i386/epel-release-6-8.noarch.rpm
  --- Running bootstrap step repos ---
  --- Running bootstrap step rpm_packages ---
  Installing system packages:
- gcc
- make
- git
- patch
- python
- python-devel
- libffi-devel
- openssl-devel
- createrepo
- yum-utils
- rpm-build
- python-pip
- python-virtualenv
- python-argparse
- python-six
  Please wait...
  Failed installing!
  Bootstrapping RHEL 6.3 failed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1310427/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1212358] Re: django openstack auth is granting permissions for services outside of current region

2014-06-20 Thread David Lyle
** Changed in: django-openstack-auth
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1212358

Title:
  django openstack auth is granting permissions for services outside of
  current region

Status in Django OpenStack Auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  The "openstack.services.%s" type roles/permissions are granted for
  every service available to the user.

  When a user is logged in and selects a certain region, not all
  services might be present in that region.  This leads to problems when
  accessing the various panels like compute/object store and those
  services not being in the user's current selected region.  Those
  panels look for endpoints that must match the same region as the
  user's current selection.

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1212358/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp