[Yahoo-eng-team] [Bug 1410841] Re: Cast unnecessary notification over l3_agent with admin_state_up False

2015-01-20 Thread OpenStack Infra
Fix proposed to branch: master
Review: https://review.openstack.org/148821

** Changed in: neutron
   Status: Opinion => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1410841

Title:
  Cast unnecessary notification over l3_agent with admin_state_up False

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  When a user updates a router hosted by an l3_agent, the neutron server
  casts an unnecessary notification even if the l3_agent has
  admin_state_up set to False.

  $ neutron --os-username admin --os-password openstack l3-agent-list-hosting-router router1
  +--------------------------------------+--------------------------+----------------+-------+
  | id                                   | host                     | admin_state_up | alive |
  +--------------------------------------+--------------------------+----------------+-------+
  | 25991a37-5f6c-41bb-a80f-c2e41cdc3a0f | vagrant-ubuntu-trusty-64 | False          | :-)   |
  +--------------------------------------+--------------------------+----------------+-------+

  $ neutron router-interface-add router1 subnet
  Added interface d0985c72-08d0-4f87-b825-9dba1d257823 to router router1.

  l3_agent log:
  2015-01-14 15:18:46.937 DEBUG neutron.agent.l3.agent [req-1ecc6f52-49c7-4e3a-aa98-a949aefc65cd demo e7f0cdeb333c4b21a9acdacbe9b50a86] Got routers updated notification :[u'154aa9bd-4950-4ccc-a26f-c9278f9da0c4'] from (pid=11845) routers_updated /opt/stack/neutron/neutron/agent/l3/agent.py:977
  2015-01-14 15:18:46.937 DEBUG neutron.agent.l3.agent [req-1ecc6f52-49c7-4e3a-aa98-a949aefc65cd demo e7f0cdeb333c4b21a9acdacbe9b50a86] Starting router update for 154aa9bd-4950-4ccc-a26f-c9278f9da0c4 from (pid=11845) _process_router_update /opt/stack/neutron/neutron/agent/l3/agent.py:1051
  2015-01-14 15:18:46.953 WARNING neutron.agent.l3.agent [req-1ecc6f52-49c7-4e3a-aa98-a949aefc65cd demo e7f0cdeb333c4b21a9acdacbe9b50a86] Info for router 154aa9bd-4950-4ccc-a26f-c9278f9da0c4 were not found. Skipping router removal
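
  For illustration, the guard such a fix would need boils down to skipping
  the RPC cast for agents that are administratively down. The sketch below
  uses hypothetical names (FakeRpc-style client, plain dict agents), not
  the actual neutron notifier code:

  ```python
  # Hypothetical sketch: skip routers_updated casts to l3 agents that are
  # administratively down. Names are illustrative, not the real neutron API.

  class L3AgentNotifier(object):
      def __init__(self, rpc_client, agents):
          self.rpc_client = rpc_client
          self.agents = agents  # dicts with 'host' and 'admin_state_up'

      def routers_updated(self, router_ids):
          notified = []
          for agent in self.agents:
              if not agent.get('admin_state_up', True):
                  # Agent is administratively down: casting would only
                  # produce the "Info for router ... not found" warning.
                  continue
              self.rpc_client.cast(agent['host'], 'routers_updated',
                                   routers=router_ids)
              notified.append(agent['host'])
          return notified
  ```

  With this guard in place, the down agent in the example above would
  never receive the notification in the first place.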

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1410841/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1413087] [NEW] aggregate_multitenancy_isolation doesn't work with multiple tenants

2015-01-20 Thread Sam Morrison
Public bug reported:

Even though the aggregate_multitenancy_isolation filter says it can
filter on multiple tenants, it currently doesn't.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1413087

Title:
  aggregate_multitenancy_isolation doesn't work with multiple tenants

Status in OpenStack Compute (Nova):
  New

Bug description:
  Even though the aggregate_multitenancy_isolation filter says it can
  filter on multiple tenants, it currently doesn't.
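
  For illustration, the check below sketches how a multi-tenant
  aggregate-isolation filter could parse a comma-separated
  filter_tenant_id value; the function name and metadata shape are
  assumptions for this example, not the actual nova scheduler filter:

  ```python
  # Sketch of a multi-tenant aggregate isolation check. Illustrative only;
  # the real filter lives in nova's scheduler filter package.

  def host_passes(aggregate_metadata, request_tenant_id):
      """Return True if the host's aggregates allow this tenant."""
      for metadata in aggregate_metadata:
          value = metadata.get('filter_tenant_id')
          if value is None:
              continue  # aggregate does not restrict tenants
          # Splitting on commas is what makes multiple tenants work;
          # comparing the raw string can only ever match a single tenant.
          allowed = {t.strip() for t in value.split(',')}
          if request_tenant_id not in allowed:
              return False
      return True
  ```

  The bug reported here is consistent with the raw-string comparison:
  with "t1,t2" stored in the aggregate metadata, no single tenant id ever
  equals the whole string.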

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1413087/+subscriptions



[Yahoo-eng-team] [Bug 1413068] [NEW] Edit current project will disable it

2015-01-20 Thread Xi Jia
Public bug reported:

If you edit the current project in Horizon, the current project will be
disabled.

Steps to reproduce:

1. Log into Horizon.
2. Click the Projects panel under the Identity dashboard.
3. Edit the current project, then click Update.
4. Errors appear, and the project has been disabled after logging back in.

Root cause:

1. The "enabled" checkbox in the "project info" step is disabled when the user tries to edit the current project.
2. All disabled checkboxes are regarded as False when the user posts the form data.
3. As a result, editing the current project disables it.
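
A minimal sketch (plain Python, no Django imports) of why the disabled
"enabled" checkbox comes back False, and one possible fix: browsers omit
disabled inputs from POST data, so the server sees no value and coerces
it to False. All names below are illustrative, not Horizon's actual
workflow code:

```python
# Sketch: a disabled checkbox is absent from the POST data, so naive
# cleaning yields False. Falling back to the stored value when the field
# was rendered disabled avoids disabling the current project.

def clean_enabled(post_data, field_disabled, current_value):
    if field_disabled:
        # The browser never sent the checkbox; trust the stored state.
        return current_value
    # Checkbox semantics: present in the POST data means checked.
    return 'enabled' in post_data
```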

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "When I try to edit project "public", and the current project is "public"."
   https://bugs.launchpad.net/bugs/1413068/+attachment/4302734/+files/errors%20screen%20shot%201.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1413068

Title:
  Edit current project will disable it

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If you edit the current project in Horizon, the current project will be
  disabled.

  Steps to reproduce:

  1. Log into Horizon.
  2. Click the Projects panel under the Identity dashboard.
  3. Edit the current project, then click Update.
  4. Errors appear, and the project has been disabled after logging back in.

  Root cause:

  1. The "enabled" checkbox in the "project info" step is disabled when the user tries to edit the current project.
  2. All disabled checkboxes are regarded as False when the user posts the form data.
  3. As a result, editing the current project disables it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1413068/+subscriptions



[Yahoo-eng-team] [Bug 1184518] Re: trying to delete router with bound interface still result in deleted router with midonet plugin

2015-01-20 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1184518

Title:
  trying to delete router with bound interface still result in deleted
  router with midonet plugin

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  Using MidoNet as the quantum plugin with an OpenStack Grizzly
  deployment.

  Trying to delete a router which has an interface bound to it results in
  Horizon saying that the router can't be deleted because it still has an
  interface bound to it. However, the MidoNet plugin goes ahead and
  deletes the router on its side, leaving the two states inconsistent.

  For example, when trying to access the network topology afterwards,
  there are errors in the quantum server logs:

  2013-05-27 16:48:19 ERROR [midonetclient.api_lib] Raising an exeption: (): '{"message":"The requested resource was not found.","code":404}'
  2013-05-27 16:48:19 ERROR [quantum.api.v2.resource] index failed
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/quantum/api/v2/resource.py", line 82, in resource
      result = method(request=request, **args)
    File "/usr/lib/python2.7/dist-packages/quantum/api/v2/base.py", line 239, in index
      return self._items(request, True, parent_id)
    File "/usr/lib/python2.7/dist-packages/quantum/api/v2/base.py", line 192, in _items
      obj_list = obj_getter(request.context, **kwargs)
    File "/usr/lib/python2.7/dist-packages/quantum/plugins/midonet/plugin.py", line 757, in get_routers
      id=qr['id'])
  MidonetResourceNotFound: MidoNet Router 085588b4-2297-4a88-936a-d5097cbd74e1 could not be found

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1184518/+subscriptions



[Yahoo-eng-team] [Bug 1184497] Re: Midonet plugin deletes port even when deletion of Quantum port fails

2015-01-20 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1184497

Title:
  Midonet plugin deletes port even when deletion of Quantum port fails

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  Context: using MidoNet as the plugin for quantum with a Grizzly
  OpenStack deployment, we have a VM instance with a floating IP
  associated through the Horizon dashboard. When this VM instance is
  terminated without first disassociating the floating IP, the
  delete_port() quantum API call fails because the floating IP
  association is still in place.

  In this situation, the corresponding MidoNet port is nevertheless
  deleted by the MidoNet plugin, although it should not be. This creates
  inconsistent state and causes errors when the dashboard tries to
  retrieve the floating IPs.
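
  For illustration, a safer ordering deletes the database record first,
  so a constraint failure aborts before the backend is touched. The
  sketch below uses hypothetical fakes (FakeDB-style objects, a PortInUse
  exception), not the actual MidoNet plugin code:

  ```python
  # Sketch: let the DB layer raise (e.g. on a floating-ip foreign-key
  # constraint) BEFORE touching the backend, so both sides stay
  # consistent. Hypothetical helper, not the real plugin method.

  class PortInUse(Exception):
      pass

  def delete_port(db, backend, port_id):
      db.delete_port(port_id)       # may raise PortInUse / IntegrityError
      backend.delete_port(port_id)  # only reached if the DB delete worked
  ```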

  
  2013-05-27 12:33:11 DEBUG [midonetclient.api_lib] do_request: content=
  2013-05-27 12:33:11 DEBUG [quantum.db.db_base_plugin_v2] Recycle 10.0.0.6
  2013-05-27 12:33:11 DEBUG [quantum.db.db_base_plugin_v2] Recycle: first match for 10.0.0.7-10.0.0.254
  2013-05-27 12:33:11 DEBUG [quantum.db.db_base_plugin_v2] Recycle: last match for 10.0.0.5-10.0.0.5
  2013-05-27 12:33:11 DEBUG [quantum.db.db_base_plugin_v2] Recycle: merged 10.0.0.5-10.0.0.5 and 10.0.0.7-10.0.0.254
  2013-05-27 12:33:11 DEBUG [quantum.db.db_base_plugin_v2] Delete allocated IP 10.0.0.6 (80689f94-a474-486a-bcf5-ef877cb6877e/38fd3e94-7a05-4453-9bf7-2c39e0b72c2c)
  2013-05-27 12:33:11 ERROR [quantum.api.v2.resource] delete failed
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/quantum/api/v2/resource.py", line 82, in resource
      result = method(request=request, **args)
    File "/usr/lib/python2.7/dist-packages/quantum/api/v2/base.py", line 395, in delete
      obj_deleter(request.context, id, **kwargs)
    File "/usr/lib/python2.7/dist-packages/quantum/plugins/midonet/plugin.py", line 516, in delete_port
      return super(MidonetPluginV2, self).delete_port(context, id)
    File "/usr/lib/python2.7/dist-packages/quantum/db/db_base_plugin_v2.py", line 1359, in delete_port
      self._delete_port(context, id)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 449, in __exit__
      self.commit()
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 361, in commit
      self._prepare_impl()
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 340, in _prepare_impl
      self.session.flush()
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1718, in flush
      self._flush(objects)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1789, in _flush
      flush_context.execute()
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 331, in execute
      rec.execute(self)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 498, in execute
      uow
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 115, in delete_obj
      cached_connections, mapper, table, delete)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 671, in _emit_delete_statements
      connection.execute(statement, del_objects)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1449, in execute
      params)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1584, in _execute_clauseelement
      compiled_sql, distilled_params
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1698, in _execute_context
      context)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1691, in _execute_context
      context)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 331, in do_execute
      cursor.execute(statement, parameters)
    File "/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 174, in execute
      self.errorhandler(self, exc, value)
    File "/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
      raise errorclass, errorvalue
  IntegrityError: (IntegrityError) (1451, 'Cannot delete or update a parent row: a foreign key constraint fails (`quantum`.`floatingips`, CONSTRAINT `floatingips_ibfk_2` FOREIGN KEY (`fixed_port_id`) REFERENCES `ports` (`id`))') 'DELETE FROM ports WHERE ports.id = %s' ('11cfa808-bf86-463d-93ab-d6f16587c6a1',)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1184497/+subscriptions


[Yahoo-eng-team] [Bug 1184509] Re: delete router interface fails with midonet plugin

2015-01-20 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1184509

Title:
  delete router interface fails with midonet plugin

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  with midonet as the plugin to quantum with OS grizzly deployment,
  trying to delete an interface on a router fails for some unknown
  reason:

  traces from the quantum server log:

  2013-05-27 16:27:33 ERROR [quantum.api.v2.resource] remove_router_interface failed
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/quantum/api/v2/resource.py", line 82, in resource
      result = method(request=request, **args)
    File "/usr/lib/python2.7/dist-packages/quantum/api/v2/base.py", line 147, in _handle_action
      body, **kwargs)
    File "/usr/lib/python2.7/dist-packages/quantum/plugins/midonet/plugin.py", line 872, in remove_router_interface
      assert found
  AssertionError

  2013-05-27 16:29:36 ERROR [quantum.api.v2.resource] remove_router_interface failed
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/quantum/api/v2/resource.py", line 82, in resource
      result = method(request=request, **args)
    File "/usr/lib/python2.7/dist-packages/quantum/api/v2/base.py", line 147, in _handle_action
      body, **kwargs)
    File "/usr/lib/python2.7/dist-packages/quantum/plugins/midonet/plugin.py", line 851, in remove_router_interface
      mrouter_port = self.mido_api.get_port(mbridge_port.get_peer_id())
    File "/usr/lib/pymodules/python2.7/midonetclient/api.py", line 119, in get_port
      return self.app.get_port(id_)
    File "/usr/lib/pymodules/python2.7/midonetclient/application.py", line 152, in get_port
      id_)
    File "/usr/lib/pymodules/python2.7/midonetclient/application.py", line 206, in _get_port_resource_by_id
      id_)
    File "/usr/lib/pymodules/python2.7/midonetclient/application.py", line 201, in _create_uri_from_template
      return template.replace(token, value)
  TypeError: coercing to Unicode: need string or buffer, NoneType found

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1184509/+subscriptions



[Yahoo-eng-team] [Bug 1265481] Re: mysql lock wait timeout on subnet_create

2015-01-20 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1265481

Title:
  mysql lock wait timeout on subnet_create

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  Traceback: http://paste.openstack.org/show/59586/
  Occurred during testing with parallelism enabled: http://logs.openstack.org/20/57420/40/experimental/check-tempest-dsvm-neutron-isolated-parallel/c95f8e0

  It's worth noting that the tests being executed here are particularly
  stressful for neutron IPAM.

  Setting importance to medium pending triage.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1265481/+subscriptions



[Yahoo-eng-team] [Bug 1240916] Re: Changing the only member of a pool into admin state down should move the member and pool statuses to INACTIVE

2015-01-20 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1240916

Title:
  Changing the only member of a pool into admin state down should move
  the member and pool statuses to INACTIVE

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  Version
  ===
  Havana on rhel

  
  Description
  ===
  I had a pool with one ACTIVE member and everything worked well. I changed the member to
  admin_state_up=False; traffic from the outside to the VIP stopped and I
  started to get 503 (Service Unavailable), as expected. However, the CLI commands
  show that the member status, the member health-check status, and the pool status
  are all ACTIVE, although in practice that is not the case.

  
  $ neutron lb-member-list
  +--------------------------------------+-------------+---------------+----------------+--------+
  | id                                   | address     | protocol_port | admin_state_up | status |
  +--------------------------------------+-------------+---------------+----------------+--------+
  | 0751b44f-07e2-4af5-9b2b-25bed66d239e | 10.35.211.4 | 80            | False          | ACTIVE |
  | ad9aa113-2d48-4558-9b32-ee945dc51548 | 10.35.211.2 | 80            | True           | ACTIVE |
  +--------------------------------------+-------------+---------------+----------------+--------+

  
  $ neutron lb-member-show 0751b44f-07e2-4af5-9b2b-25bed66d239e
  +--------------------+--------------------------------------+
  | Field              | Value                                |
  +--------------------+--------------------------------------+
  | address            | 10.35.211.4                          |
  | admin_state_up     | False                                |
  | id                 | 0751b44f-07e2-4af5-9b2b-25bed66d239e |
  | pool_id            | 8203635c-3f93-4748-abec-4e2a9f25e45d |
  | protocol_port      | 80                                   |
  | status             | ACTIVE                               |
  | status_description |                                      |
  | tenant_id          | 44ef91b7c5c14d6a9222998b57355b18     |
  | weight             | 1                                    |
  +--------------------+--------------------------------------+

  
  $ neutron lb-pool-show 8203635c-3f93-4748-abec-4e2a9f25e45d
  +------------------------+--------------------------------------------------------------------------------------------------------+
  | Field                  | Value                                                                                                  |
  +------------------------+--------------------------------------------------------------------------------------------------------+
  | admin_state_up         | True                                                                                                   |
  | description            |                                                                                                        |
  | health_monitors        | 85d2d8ab-32d7-43b4-a17c-e4d83964e96a                                                                   |
  | health_monitors_status | {"monitor_id": "85d2d8ab-32d7-43b4-a17c-e4d83964e96a", "status": "ACTIVE", "status_description": null} |
  | id                     | 8203635c-3f93-4748-abec-4e2a9f25e45d                                                                   |
  | lb_method              | ROUND_ROBIN                                                                                            |
  | members                | 0751b44f-07e2-4af5-9b2b-25bed66d239e                                                                   |
  | name                   | pool_211                                                                                               |
  | protocol               | HTTP                                                                                                   |
  | provider               | haproxy                                                                                                |
  | status                 | ACTIVE                                                                                                 |
  | status_description     |                                                                                                        |
  | subnet_id              | 344b4e87-b903-4562-ba51-cea56a76df54                                                                   |
  | tenant_id              | 44ef91b7c5c14d6a9222998b57355b18                                                                       |
  | vip_id                 | 73f24ce1-dcfd-4bab-87dd-be57a2fc8478                                                                   |
  +------------------------+--------------------------------------------------------------------------------------------------------+

[Yahoo-eng-team] [Bug 1250756] Re: [vpnaas] neutron-vpn-agent is trying to query br-ex link on an environment that does not use an external_network_bridge

2015-01-20 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1250756

Title:
  [vpnaas] neutron-vpn-agent is trying to query br-ex link on an
  environment that does not use an external_network_bridge

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  Version
  ===
  Havana on RHEL

  Description
  ===
  I configured my environment with an empty external_network_bridge, which means
  it does not use br-ex as the external bridge; the admin can create a
  provider external network instead.

  When trying to start neutron-vpn-agent service, it seems like it tries
  to check the link status on br-ex, while br-ex does not appear in any
  neutron-related configuration.

  
  From /var/log/neutron/vpn-agent.log
  ===
  2013-11-13 10:19:52.864 16442 DEBUG neutron.agent.linux.utils [-] Running command: ['ip', '-o', 'link', 'show', 'br-ex'] execute /usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:43
  2013-11-13 10:19:52.869 16442 DEBUG neutron.agent.linux.utils [-]
  Command: ['ip', '-o', 'link', 'show', 'br-ex']
  Exit code: 255
  Stdout: ''
  Stderr: 'Device "br-ex" does not exist.\n' execute /usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:60
  2013-11-13 10:19:52.870 16442 ERROR neutron.agent.l3_agent [-] Failed deleting namespace 'qrouter-3480f0cf-8f98-4f5c-a5fc-b422cec5a3c1'
  2013-11-13 10:19:52.870 16442 TRACE neutron.agent.l3_agent Traceback (most recent call last):
  2013-11-13 10:19:52.870 16442 TRACE neutron.agent.l3_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/l3_agent.py", line 255, in _destroy_router_namespaces
  2013-11-13 10:19:52.870 16442 TRACE neutron.agent.l3_agent     self._destroy_router_namespace(ns)
  2013-11-13 10:19:52.870 16442 TRACE neutron.agent.l3_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/l3_agent.py", line 270, in _destroy_router_namespace
  2013-11-13 10:19:52.870 16442 TRACE neutron.agent.l3_agent     prefix=EXTERNAL_DEV_PREFIX)
  2013-11-13 10:19:52.870 16442 TRACE neutron.agent.l3_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/linux/interface.py", line 206, in unplug
  2013-11-13 10:19:52.870 16442 TRACE neutron.agent.l3_agent     self.check_bridge_exists(bridge)
  2013-11-13 10:19:52.870 16442 TRACE neutron.agent.l3_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/linux/interface.py", line 102, in check_bridge_exists
  2013-11-13 10:19:52.870 16442 TRACE neutron.agent.l3_agent     raise exceptions.BridgeDoesNotExist(bridge=bridge)
  2013-11-13 10:19:52.870 16442 TRACE neutron.agent.l3_agent BridgeDoesNotExist: Bridge br-ex does not exist.
  2013-11-13 10:19:52.870 16442 TRACE neutron.agent.l3_agent

  
  Looking for br-ex in neutron's configuration files
  ==

  $ grep -R external_network_bridge /etc/neutron/l3_agent.ini 
  # external_network_bridge = br-ex
  external_network_bridge = 

  $ grep -R br-ex /etc/neutron/*
  /etc/neutron/l3_agent.ini:# external_network_bridge = br-ex
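
  For illustration, the guard the agent appears to be missing is to only
  consult the bridge when external_network_bridge is actually configured.
  The sketch below uses hypothetical names (a generic driver object), not
  the real neutron l3-agent code:

  ```python
  # Sketch: skip the bridge-existence check when no external bridge is
  # configured (provider-network setups). Illustrative names only.

  def unplug_external_device(driver, device, external_network_bridge):
      if external_network_bridge:
          driver.check_bridge_exists(external_network_bridge)
          driver.unplug(device, bridge=external_network_bridge)
      else:
          # Empty external_network_bridge: no br-ex to query at all.
          driver.unplug(device, bridge=None)
  ```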

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1250756/+subscriptions



[Yahoo-eng-team] [Bug 1363103] Re: OperationalError: (OperationalError) (1205, 'Lock wait timeout exceeded; try restarting transaction') 'INSERT INTO routerl3agentbindings

2015-01-20 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1363103

Title:
  OperationalError: (OperationalError) (1205, 'Lock wait timeout
  exceeded; try restarting transaction') 'INSERT INTO
  routerl3agentbindings

Status in OpenStack Neutron (virtual network service):
  Expired
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Gate jobs failed on 'gate-tempest-dsvm-neutron-full' with following
  error.

  ClientException: The server has either erred or is incapable of
  performing the requested operation. (HTTP 500) (Request-ID: req-
  7d7ab999-1351-43be-bd51-96a100a7cdeb)

  Detailed stack trace is:

  RESP BODY: {"itemNotFound": {"message": "Instance could not be found", "code": 404}}
  }}}

  Traceback (most recent call last):
    File "tempest/scenario/test_network_advanced_server_ops.py", line 73, in setUp
      create_kwargs=create_kwargs)
    File "tempest/scenario/manager.py", line 778, in create_server
      self.status_timeout(client.servers, server.id, 'ACTIVE')
    File "tempest/scenario/manager.py", line 572, in status_timeout
      not_found_exception=not_found_exception)
    File "tempest/scenario/manager.py", line 635, in _status_timeout
      CONF.compute.build_interval):
    File "tempest/test.py", line 614, in call_until_true
      if func():
    File "tempest/scenario/manager.py", line 606, in check_status
      thing = things.get(thing_id)
    File "/opt/stack/new/python-novaclient/novaclient/v1_1/servers.py", line 555, in get
      return self._get("/servers/%s" % base.getid(server), "server")
    File "/opt/stack/new/python-novaclient/novaclient/base.py", line 93, in _get
      _resp, body = self.api.client.get(url)
    File "/opt/stack/new/python-novaclient/novaclient/client.py", line 487, in get
      return self._cs_request(url, 'GET', **kwargs)
    File "/opt/stack/new/python-novaclient/novaclient/client.py", line 465, in _cs_request
      resp, body = self._time_request(url, method, **kwargs)
    File "/opt/stack/new/python-novaclient/novaclient/client.py", line 439, in _time_request
      resp, body = self.request(url, method, **kwargs)
    File "/opt/stack/new/python-novaclient/novaclient/client.py", line 433, in request
      raise exceptions.from_response(resp, body, url, method)
  ClientException: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-7d7ab999-1351-43be-bd51-96a100a7cdeb)

  Traceback (most recent call last):
  StringException: Empty attachments:
  stderr
  stdout

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1363103/+subscriptions



[Yahoo-eng-team] [Bug 1384231] Re: The number of neutron-ns-metadata-proxy processes grow uncontrollably

2015-01-20 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384231

Title:
  The number of neutron-ns-metadata-proxy processes grow uncontrollably

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  During testing and development I had to add and remove instances, routers, and ports often. I also restarted all neutron services often (I use supervisor).
  After about one week, I noticed that I had run out of free RAM. It turned out there were tens of neutron-ns-metadata-proxy processes hanging around. After I killed them and restarted neutron, 4 GB of RAM was freed.

  """
  ...
  20537 ? S 0:00 /home/vb/.virtualenvs/ecs/bin/python /usr/sbin/neutron-ns-metadata-proxy --pid_file=/home/vb/var/lib/neutron/external/pids/a6f6aeaa-c325-42d6-95e2-d55d410fc5d9.pid --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy --router_id=a6f6aeaa-c325-42d6-95e2-d55d410fc5d9 --state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
  20816 ? S 0:00 /home/vb/.virtualenvs/ecs/bin/python /usr/sbin/neutron-ns-metadata-proxy --pid_file=/home/vb/var/lib/neutron/external/pids/a4451c09-1655-4aea-86d6-849e563f4731.pid --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy --router_id=a4451c09-1655-4aea-86d6-849e563f4731 --state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
  30098 ? S 0:00 /home/vb/.virtualenvs/ecs/bin/python /usr/sbin/neutron-ns-metadata-proxy --pid_file=/home/vb/var/lib/neutron/external/pids/b122a6ba-5614-4f1c-b0c6-95c6645dbab0.pid --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy --router_id=b122a6ba-5614-4f1c-b0c6-95c6645dbab0 --state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
  30557 ? S 0:00 /home/vb/.virtualenvs/ecs/bin/python /usr/sbin/neutron-ns-metadata-proxy --pid_file=/home/vb/var/lib/neutron/external/pids/82ebd418-b156-49bf-9633-af3121fc12f7.pid --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy --router_id=82ebd418-b156-49bf-9633-af3121fc12f7 --state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
  31072 ? S 0:00 /home/vb/.virtualenvs/ecs/bin/python /usr/sbin/neutron-ns-metadata-proxy --pid_file=/home/vb/var/lib/neutron/external/pids/d426f959-bfc5-4012-b89e-aec64cc2cf03.pid --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy --router_id=d426f959-bfc5-4012-b89e-aec64cc2cf03 --state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
  31378 ? S 0:00 /home/vb/.virtualenvs/ecs/bin/python /usr/sbin/neutron-ns-metadata-proxy --pid_file=/home/vb/var/lib/neutron/external/pids/b8dc2dd7-18cb-4a56-9690-fc79248c5532.pid --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy --router_id=b8dc2dd7-18cb-4a56-9690-fc79248c5532 --state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
  ...
  """

  
  I use Icehouse.
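
  For illustration, one cleanup approach is to walk the pid-file
  directory and kill any proxy whose router no longer exists. The sketch
  below is an assumption about the layout shown in the ps output above
  (one <router_id>.pid file per proxy); the function and its arguments
  are hypothetical, not neutron code:

  ```python
  # Sketch: reap neutron-ns-metadata-proxy processes whose router is gone,
  # based on <router_id>.pid files. Illustrative only.
  import os
  import signal

  def cleanup_stale_proxies(pid_dir, live_router_ids, kill=os.kill):
      killed = []
      for name in os.listdir(pid_dir):
          if not name.endswith('.pid'):
              continue
          router_id = name[:-len('.pid')]
          if router_id in live_router_ids:
              continue  # proxy still needed
          with open(os.path.join(pid_dir, name)) as f:
              pid = int(f.read().strip())
          kill(pid, signal.SIGTERM)          # terminate the stale proxy
          os.remove(os.path.join(pid_dir, name))
          killed.append(router_id)
      return killed
  ```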

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1384231/+subscriptions



[Yahoo-eng-team] [Bug 1187578] Re: keystone credentials are replicated in multiple config files

2015-01-20 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1187578

Title:
  keystone credentials are replicated in multiple config files

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  The keystone credentials are repeated in multiple configuration files.
  We should update the code to use the values defined in quantum.conf
  and remove the duplicate values in the other .ini files.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1187578/+subscriptions



[Yahoo-eng-team] [Bug 1260185] Re: synchronize ovs agent's rpc handler and main thread

2015-01-20 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1260185

Title:
  synchronize ovs agent's rpc handler and main thread

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  The various RPC message handlers and the main thread of the OVS agent can
  interfere with each other, so we need a synchronizer to make sure only
  one thread can run at a time.
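
  For illustration, the simplest form of such a synchronizer is one lock
  shared by the handlers and the main loop, similar in spirit to a
  lock-decorator like oslo's lockutils.synchronized. The names below are
  illustrative, not the actual OVS-agent change:

  ```python
  # Sketch: serialize the agent's RPC handlers against its main loop
  # with a single shared lock. Simplified illustration.
  import threading

  _agent_lock = threading.Lock()

  def synchronized(func):
      def wrapper(*args, **kwargs):
          with _agent_lock:  # only one handler or loop iteration at a time
              return func(*args, **kwargs)
      return wrapper

  @synchronized
  def handle_port_update(state, port_id):
      # RPC handler: mutates shared agent state under the lock.
      state['ports'] = state.get('ports', []) + [port_id]
      return state['ports']

  @synchronized
  def rpc_loop_iteration(state):
      # Main loop: reads the same shared state under the same lock.
      return list(state.get('ports', []))
  ```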

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1260185/+subscriptions



[Yahoo-eng-team] [Bug 1394941] Re: Update port's fixed-ip breaks the existed floating-ip

2015-01-20 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394941

Title:
  Update port's fixed-ip breaks the existed floating-ip

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  Using port-update to change the fixed IP of a given port fails when the
  port has an existing floating IP set up.

  Although floating IPs are managed by the l3-agent, this is a case that
  neutron should deal with internally, because the update breaks the
  network connection.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394941/+subscriptions



[Yahoo-eng-team] [Bug 1413056] [NEW] OVS agent supports arp responder for VLAN

2015-01-20 Thread Zhiyuan Cai
Public bug reported:

This commit [1] introduces a new agent configuration option,
"l2pop_network_types". In ofagent, this option is used to enable the ARP
responder for non-tunnel network types like VLAN, using the l2pop
information. I think we can also bring this feature to the OVS agent.

[1] https://review.openstack.org/#/c/112947

** Affects: neutron
 Importance: Undecided
 Assignee: Zhiyuan Cai (luckyvega-g)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Zhiyuan Cai (luckyvega-g)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1413056

Title:
  OVS agent supports arp responder for VLAN

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This commit [1] introduces a new agent configuration option,
  "l2pop_network_types". In ofagent, this option is used to enable the
  ARP responder for non-tunnel network types like VLAN, using the l2pop
  information. I think we can also bring this feature to the OVS agent.

  [1] https://review.openstack.org/#/c/112947

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1413056/+subscriptions



[Yahoo-eng-team] [Bug 1413049] [NEW] _gather_port_ids_and_networks relies on cache being populated

2015-01-20 Thread Don Bowman
Public bug reported:

In nova/network/neutronv2/api.py, _gather_port_ids_and_networks calls:
ifaces = compute_utils.get_nw_info_for_instance(instance)

but this can return an empty list if the network info cache is out of date
or wrong.
Instead, I think it should call neutron's list_ports.

A proposed patch is below.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1413049

Title:
  _gather_port_ids_and_networks relies on cache being populated

Status in OpenStack Compute (Nova):
  New

Bug description:
  In nova/network/neutronv2/api.py, _gather_port_ids_and_networks calls:
  ifaces = compute_utils.get_nw_info_for_instance(instance)

  but this can return an empty list if the network info cache is out of
  date or wrong.
  Instead, I think it should call neutron's list_ports.

  A proposed patch is below.
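
  The patch itself is not included in this message; as a rough illustration
  of the fallback idea only (the stub client and function names below are
  hypothetical, not Nova's actual code), the cached result would be checked
  and Neutron queried directly when it comes back empty:

  ```python
  class StubNeutronClient:
      """Hypothetical stand-in for the real neutron client."""
      def list_ports(self, device_id=None):
          return {"ports": [{"id": "port-1", "network_id": "net-1",
                             "device_id": device_id}]}

  def gather_port_ids_and_networks(client, instance, cached_ifaces):
      """Prefer the info cache, but fall back to asking Neutron directly
      when the cache is empty (the situation described in this bug)."""
      ifaces = cached_ifaces or []
      if not ifaces:
          ports = client.list_ports(device_id=instance["uuid"])["ports"]
          return ([p["id"] for p in ports],
                  [p["network_id"] for p in ports])
      return ([i["id"] for i in ifaces],
              [i["network"]["id"] for i in ifaces])
  ```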

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1413049/+subscriptions



[Yahoo-eng-team] [Bug 1413041] [NEW] for many of endpoint classes, which methods are for RPC is not clear

2015-01-20 Thread YAMAMOTO Takashi
Public bug reported:

As discussed in https://review.openstack.org/#/c/130676/ ,
for many of the RPC endpoint classes there is no clear separation between
internal methods and RPC methods.
Heavy use of mixins has made the situation worse.

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1413041

Title:
  for many of endpoint classes, which methods are for RPC is not clear

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  As discussed in https://review.openstack.org/#/c/130676/ ,
  for many of the RPC endpoint classes there is no clear separation
  between internal methods and RPC methods.
  Heavy use of mixins has made the situation worse.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1413041/+subscriptions



[Yahoo-eng-team] [Bug 1413019] [NEW] Floating IPs are not removed from compute when using nova-network

2015-01-20 Thread David Hill
Public bug reported:

With 2014.1.3, when a floating IP is assigned to a compute node, it does
not get removed from that node when it is unassigned.  I notice there are
a lot of changes in this area between 2014.1.2 and 2014.1.3, so perhaps a
bug was introduced somewhere along the way.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: fixed floating icehouse ips nova-network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1413019

Title:
  Floating IPs are not removed from compute when using nova-network

Status in OpenStack Compute (Nova):
  New

Bug description:
  With 2014.1.3, when a floating IP is assigned to a compute node, it
  does not get removed from that node when it is unassigned.  I notice
  there are a lot of changes in this area between 2014.1.2 and 2014.1.3,
  so perhaps a bug was introduced somewhere along the way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1413019/+subscriptions



[Yahoo-eng-team] [Bug 1413012] [NEW] check-tempest-dsvm-nova-v21-full failing in affinity validation

2015-01-20 Thread Adam Gandelman
Public bug reported:

test_server_groups is failing in the check-tempest-dsvm-nova-v21-full
job, which votes in check/gate for tempest and nova.  Not certain this
is a nova bug or a tempest one, but reproducible locally running the
tempest test against the v2.1 API:

2015-01-20 22:22:15.096 | BadRequest: Bad request
2015-01-20 22:22:15.096 | Details: Bad request
2015-01-20 22:22:15.096 | Details: {u'message': u"Invalid input for 
field/attribute 0. Value: affinity. {'allOf': [{'enum': 'affinity'}, {'enum': 
'anti-affinity'}]} is not allowed for u'affinity'", u'code': 400}

Not sure why logstash is only picking up one patch here; it's failing all
over the place. This should produce more hits:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaXMgbm90IGFsbG93ZWQgZm9yIHUnYWZmaW5pdHknXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MjE3OTYxOTE0NTJ9
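
The logged schema helps explain why the request can never validate: an
allOf of two single-value enums requires the policy to equal both values
at once, so every input is rejected. The tiny validator below is purely
illustrative (it is not Nova's real JSON-Schema machinery, and the enums
are written as lists for clarity; the log shows bare strings, which is
part of the problem):

```python
def satisfies(value, schema):
    """Minimal check supporting only 'enum' and 'allOf' (illustration only)."""
    if "enum" in schema and value not in schema["enum"]:
        return False
    if "allOf" in schema:
        return all(satisfies(value, sub) for sub in schema["allOf"])
    return True

# The shape reported in the error: both branches must match simultaneously.
broken = {"allOf": [{"enum": ["affinity"]}, {"enum": ["anti-affinity"]}]}
# What the API presumably intends: any one of the two policies.
fixed = {"enum": ["affinity", "anti-affinity"]}
```

With the broken schema, neither "affinity" nor "anti-affinity" can ever
pass; with the fixed one, both do.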

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1413012

Title:
  check-tempest-dsvm-nova-v21-full failing in affinity validation

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  test_server_groups is failing in the check-tempest-dsvm-nova-v21-full
  job, which votes in check/gate for tempest and nova.  Not certain this
  is a nova bug or a tempest one, but reproducible locally running the
  tempest test against the v2.1 API:

  2015-01-20 22:22:15.096 | BadRequest: Bad request
  2015-01-20 22:22:15.096 | Details: Bad request
  2015-01-20 22:22:15.096 | Details: {u'message': u"Invalid input for 
field/attribute 0. Value: affinity. {'allOf': [{'enum': 'affinity'}, {'enum': 
'anti-affinity'}]} is not allowed for u'affinity'", u'code': 400}

  Not sure why logstash is only picking up one patch here; it's failing
  all over the place. This should produce more hits:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaXMgbm90IGFsbG93ZWQgZm9yIHUnYWZmaW5pdHknXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MjE3OTYxOTE0NTJ9

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1413012/+subscriptions



[Yahoo-eng-team] [Bug 1412769] Re: Flavor ID changed from number to UUID4 when edited

2015-01-20 Thread Alex Chan
This is the intended behavior since Nova uses UUID for flavor IDs.  When
an update to a flavor occurs, the modified flavor is marked for deletion
and a new flavor is created with the updated settings.  Please see this
commit hash:

https://github.com/openstack/horizon/commit/4100a1cbc24184b58d5049dfb601b18e29e6107d#diff-fa23c644e658f89cdf7d0cf65a0c0556
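
The delete-and-recreate behavior can be sketched as follows; this is a
simplified illustration with invented names, not Horizon's actual code:

```python
import uuid

def edit_flavor(flavors, flavor_id, **changes):
    """Flavors are effectively immutable: 'editing' removes the old
    record and creates a replacement, whose ID is a freshly generated
    UUID (which is why a numeric ID comes back as a UUID4)."""
    old = flavors.pop(flavor_id)
    new = dict(old, **changes)
    new["id"] = str(uuid.uuid4())
    flavors[new["id"]] = new
    return new
```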

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1412769

Title:
  Flavor ID changed from number to UUID4 when edited

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Flavor ID changes from numeric format to UUID4 format after doing some
  changes into flavor.

  Steps to reproduce:

  1) Login to Horizon as admin.

  2) Navigate to Admin->System->Flavors.

  3) See list of Flavors.
  Note that every Flavor has a column with the numeric property "ID".

  4) Click "Edit Flavor" button for any Flavor.

  5) Do some change, for example in field Name of flavor.  Then click
  "Save".

  Actual: value in field ID changed to UUID4 format.  See screenshot.

  Expected: the Flavor ID is not supposed to change.

  
  Environment:
  Fuel "build_id": "2014-12-26_14-25-46","release": "6.0"

  Browsers:
  Chrome: Version 39.0.2171.99 (64-bit) on Ubuntu 14.04
  Firefox: 35.0 on Ubuntu 14.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1412769/+subscriptions



[Yahoo-eng-team] [Bug 1412993] [NEW] Nova resize for boot-from-volume instance does not resize volume

2015-01-20 Thread Claudiu Belu
Public bug reported:

Resizing an instance which booted from a volume to a new flavor with a
bigger disk will not cause the volume to resize accordingly. This can
cause confusion among users, who will expect their instances to have the
bigger storage.

Scenario:
1. Have a glance image.
2. Create a bootable volume from glance image.
3. Create instance using volume and flavor having 10GB disk.
4. Perform nova resize on instance to a new flavor having 20GB disk.
5. After resize, see that the instance still has 10GB storage. Cinder volume 
still has the same size.

This issue has been discussed on #openstack-nova and it was agreed upon
to fail the resize operation, if the given instance is booted from
volume and the given new flavor has a different disk size.
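
The agreed behavior can be sketched as a guard at the start of the resize
path; the function and parameter names here are hypothetical, not the real
Nova compute API code:

```python
def check_resize_allowed(is_boot_from_volume, old_disk_gb, new_disk_gb):
    """Refuse a resize that would silently leave a boot volume at its
    old size, per the conclusion reached on #openstack-nova."""
    if is_boot_from_volume and old_disk_gb != new_disk_gb:
        raise ValueError(
            "Cannot resize a boot-from-volume instance to a flavor "
            "with a different disk size (%d GB -> %d GB)"
            % (old_disk_gb, new_disk_gb))
```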

** Affects: nova
 Importance: Undecided
 Assignee: Claudiu Belu (cbelu)
 Status: New


** Tags: volumes

** Changed in: nova
 Assignee: (unassigned) => Claudiu Belu (cbelu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1412993

Title:
  Nova resize for boot-from-volume instance does not resize volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  Resizing an instance which booted from a volume to a new flavor with a
  bigger disk will not cause the volume to resize accordingly. This can
  cause confusion among users, who will expect their instances to have
  the bigger storage.

  Scenario:
  1. Have a glance image.
  2. Create a bootable volume from glance image.
  3. Create instance using volume and flavor having 10GB disk.
  4. Perform nova resize on instance to a new flavor having 20GB disk.
  5. After resize, see that the instance still has 10GB storage. Cinder volume 
still has the same size.

  This issue has been discussed on #openstack-nova and it was agreed
  upon to fail the resize operation, if the given instance is booted
  from volume and the given new flavor has a different disk size.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1412993/+subscriptions



[Yahoo-eng-team] [Bug 1412994] [NEW] cirros image fails to boot when libvirt.cpu_mode=custom and libvirt.cpu_model=pentium

2015-01-20 Thread jiang, yunhong
Public bug reported:

I tried setting libvirt.cpu_mode=custom and libvirt.cpu_model=pentium,
and then booting a VM using the cirros image. The 'nova boot' command
creates the instance successfully, but the instance fails to boot because
the image requires x86_64 while pentium has no x86_64 support.

I then tried setting the image's architecture property to x86_64, with
"glance image-update IMG-UUID --property architecture=x86_64". After
this, the instance is again created successfully by 'nova boot', but it
still fails to boot.

I think the reason is that, when libvirt.cpu_model=pentium, the compute
node should in fact report its capabilities as those of a Pentium
processor, rather than those of the host processor.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1412994

Title:
  cirros image fails to boot when libvirt.cpu_mode=custom and
  libvirt.cpu_model=pentium

Status in OpenStack Compute (Nova):
  New

Bug description:
  I tried setting libvirt.cpu_mode=custom and libvirt.cpu_model=pentium,
  and then booting a VM using the cirros image. The 'nova boot' command
  creates the instance successfully, but the instance fails to boot
  because the image requires x86_64 while pentium has no x86_64 support.

  I then tried setting the image's architecture property to x86_64, with
  "glance image-update IMG-UUID --property architecture=x86_64". After
  this, the instance is again created successfully by 'nova boot', but
  it still fails to boot.

  I think the reason is that, when libvirt.cpu_model=pentium, the
  compute node should in fact report its capabilities as those of a
  Pentium processor, rather than those of the host processor.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1412994/+subscriptions



[Yahoo-eng-team] [Bug 1412974] [NEW] Document setup when running selenium headless

2015-01-20 Thread Lin Hua Cheng
Public bug reported:

Xvfb must be installed to run Selenium headless; add this to the
documentation.

** Affects: horizon
 Importance: Wishlist
 Status: New

** Changed in: horizon
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1412974

Title:
  Document setup when running selenium headless

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Xvfb must be installed to run Selenium headless; add this to the
  documentation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1412974/+subscriptions



[Yahoo-eng-team] [Bug 1412973] [NEW] test_create_server_with_scheduler_hint_group fails

2015-01-20 Thread Ken'ichi Ohmichi
Public bug reported:

test_create_server_with_scheduler_hint_group fails like the following:

Traceback (most recent call last):
  File "tempest/api/compute/servers/test_create_server.py", line 112, in 
test_create_server_with_scheduler_hint_group
policies=policies)
  File "tempest/services/compute/json/servers_client.py", line 539, in 
create_server_group
resp, body = self.post('os-server-groups', post_body)
  File 
"/usr/local/lib/python2.7/dist-packages/tempest_lib/common/rest_client.py", 
line 168, in post
return self.request('POST', url, extra_headers, headers, body)
  File "tempest/common/service_client.py", line 67, in request
raise exceptions.BadRequest(ex)
BadRequest: Bad request
Details: Bad request
Details: {u'message': u"Invalid input for field/attribute 0. Value: affinity. 
{'allOf': [{'enum': 'affinity'}, {'enum': 'anti-affinity'}]} is not allowed for 
u'affinity'", u'code': 400}

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1412973

Title:
  test_create_server_with_scheduler_hint_group fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  test_create_server_with_scheduler_hint_group fails like the following:

  Traceback (most recent call last):
File "tempest/api/compute/servers/test_create_server.py", line 112, in 
test_create_server_with_scheduler_hint_group
  policies=policies)
File "tempest/services/compute/json/servers_client.py", line 539, in 
create_server_group
  resp, body = self.post('os-server-groups', post_body)
File 
"/usr/local/lib/python2.7/dist-packages/tempest_lib/common/rest_client.py", 
line 168, in post
  return self.request('POST', url, extra_headers, headers, body)
File "tempest/common/service_client.py", line 67, in request
  raise exceptions.BadRequest(ex)
  BadRequest: Bad request
  Details: Bad request
  Details: {u'message': u"Invalid input for field/attribute 0. Value: affinity. 
{'allOf': [{'enum': 'affinity'}, {'enum': 'anti-affinity'}]} is not allowed for 
u'affinity'", u'code': 400}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1412973/+subscriptions



[Yahoo-eng-team] [Bug 1412971] [NEW] Reduce the number of document jquery objects created

2015-01-20 Thread mattfarina
Public bug reported:

The horizon codebase regularly creates jQuery objects based on the
document (`$(document)`). This can happen many times in the same scope.
For example,
https://github.com/openstack/horizon/blob/1385db8d1f8358aca190a40ed4c341bfc3e46f56/horizon/static/horizon/js/horizon.instances.js
shows it happening 8 different times.

This could be done once, saved to a variable, and then reused. Creating
a jQuery object isn't cheap. By doing it many times instead of once we
cause more logic to fire, more memory to be allocated, and more work for
the browser's garbage collector.

By moving to a single variable and reusing it, we will use less memory,
make horizon faster, and gain other performance benefits.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1412971

Title:
  Reduce the number of document jquery objects created

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The horizon codebase regularly creates jQuery objects based on the
  document (`$(document)`). This can happen many times in the same
  scope. For example,
  
https://github.com/openstack/horizon/blob/1385db8d1f8358aca190a40ed4c341bfc3e46f56/horizon/static/horizon/js/horizon.instances.js
  shows it happening 8 different times.

  This could be done once, saved to a variable, and then reused.
  Creating a jQuery object isn't cheap. By doing it many times instead
  of once we cause more logic to fire, more memory to be allocated, and
  more work for the browser's garbage collector.

  By moving to a single variable and reusing it, we will use less
  memory, make horizon faster, and gain other performance benefits.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1412971/+subscriptions



[Yahoo-eng-team] [Bug 1412930] [NEW] CPU feature request is not respected when create VM

2015-01-20 Thread jiang, yunhong
Public bug reported:

When creating a VM that requires some specific CPU features, the guest
will be scheduled to a host with the capability, but that information is
not always exposed to the guest.

a) Start devstack with defaults (meaning the nova configuration has
libvirt.cpu_mode=none); the compute node has SSE4.1 support.
b) Create a flavor with the {"capabilities:cpu_info:features": " sse4.1"}
extra spec, i.e. requiring SSE4.1.
c) Launch an instance with the flavor.
d) ssh to the instance and 'cat /proc/cpuinfo'.

SSE4.1 is not present in the guest.

I think the reason is that when libvirt.cpu_mode is none, libvirt will
not specify the feature request in the guest vCPU config.
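
Scheduling on the extra spec and exposing the feature to the guest are
separate steps. The scheduler-side match can be sketched roughly as below
(an illustration, not the actual ComputeCapabilitiesFilter code); note
that passing this check says nothing about what libvirt later puts in the
guest CPU definition when cpu_mode=none:

```python
def host_has_requested_features(host_cpu_features, extra_specs):
    """True if the host advertises every CPU feature named in the
    'capabilities:cpu_info:features' extra spec (illustrative only)."""
    raw = extra_specs.get("capabilities:cpu_info:features", "")
    wanted = {f.strip() for f in raw.split(",") if f.strip()}
    return wanted <= set(host_cpu_features)
```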

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1412930

Title:
  CPU feature request is not respected when create VM

Status in OpenStack Compute (Nova):
  New

Bug description:
  When creating a VM that requires some specific CPU features, the guest
  will be scheduled to a host with the capability, but that information
  is not always exposed to the guest.

  a) Start devstack with defaults (meaning the nova configuration has
  libvirt.cpu_mode=none); the compute node has SSE4.1 support.
  b) Create a flavor with the {"capabilities:cpu_info:features": " sse4.1"}
  extra spec, i.e. requiring SSE4.1.
  c) Launch an instance with the flavor.
  d) ssh to the instance and 'cat /proc/cpuinfo'.

  SSE4.1 is not present in the guest.

  I think the reason is that when libvirt.cpu_mode is none, libvirt will
  not specify the feature request in the guest vCPU config.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1412930/+subscriptions



[Yahoo-eng-team] [Bug 1412581] Re: Integration test failing in horizon

2015-01-20 Thread Thai Tran
Not an error; the correct configuration for the integration tests had not
been set up.

** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1412581

Title:
  Integration test failing in horizon

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Steps to reproduce this error:

  1. Check out the latest master branch (currently on 
f64664ddc54f24477c59404e84d7ec5d9bb1d88e).
  2. Then run the integration tests: "./run_tests.sh --integration"
  3. See error

  I have also tried going back a few patches, but it seems like we have
  had this error for quite some time.
  Here is the complete trace: http://paste.openstack.org/show/158986/

  $ ./run_tests.sh --integration
  Checking environment.
  Environment is up to date.
  Running Horizon integration tests...
  
openstack_dashboard.test.integration_tests.tests.test_dashboard_help_redirection.TestDashboardHelp.test_dashboard_help_redirection
 ... ERROR
  
openstack_dashboard.test.integration_tests.tests.test_flavors.TestFlavors.test_flavor_create
 ... ERROR
  
openstack_dashboard.test.integration_tests.tests.test_image_create_delete.TestImage.test_image_create_delete
 ... ERROR
  
openstack_dashboard.test.integration_tests.tests.test_keypair.TestKeypair.test_keypair
 ... ERROR
  
openstack_dashboard.test.integration_tests.tests.test_login.TestLogin.test_login
 ... FAIL
  
openstack_dashboard.test.integration_tests.tests.test_password_change.TestPasswordChange.test_password_change
 ... ERROR
  
openstack_dashboard.test.integration_tests.tests.test_user_settings.TestUserSettings.test_user_settings_change
 ... ERROR

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1412581/+subscriptions



[Yahoo-eng-team] [Bug 1412903] [NEW] Cisco Nexus VxLAN ML2: ping fails with fixed IP with remote Network Node

2015-01-20 Thread Alec Hothan
Public bug reported:

Testbed configuration to reproduce:
- 1 TOR with 1 or 2 compute nodes
- 1 TOR with 1 network node
- 1 Spine connecting the 2 TOR

data path: VM1->TOR1->Spine->TOR2->router->TOR2->Spine->TOR1->VM2
VM1 and VM2 are on different networks
The TOR are Nexus 9K (C9396PX NXOS 6.1)

Ping from VM1 to VM2 using fixed IP fails 100% of the time.
The test was performed with VM1 and VM2 running on different compute nodes 
under TOR1. It will also probably fail if they run on the same compute node 
under TOR1.

Other data paths where ping is working:
VM1->TOR->router->TOR->VM2 (VM1 and VM2 are in the same rack as the network 
node and run on the same compute node)
VM1->TOR1->Spine->TOR2->router->TOR2->VM2 (VM2 is on the same rack as the 
network node)

When using floating IPs (i.e. going through NAT), ping with:
- a VM in the same rack as the network node is OK:
VM1->TOR->router->TOR->default gateway->TOR->VM2
- at least one VM in a different rack than the network node fails

These issues can be reproduced manually using ping or in an automated
way using the VMTP tool.

Versioning:
https://github.com/cisco-openstack/neutron.git (staging/kiloplus branch)
openstack and devstack: stable/juno

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: cisco ml2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1412903

Title:
  Cisco Nexus VxLAN ML2: ping fails with fixed IP with remote Network
  Node

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Testbed configuration to reproduce:
  - 1 TOR with 1 or 2 compute nodes
  - 1 TOR with 1 network node
  - 1 Spine connecting the 2 TOR

  data path: VM1->TOR1->Spine->TOR2->router->TOR2->Spine->TOR1->VM2
  VM1 and VM2 are on different networks
  The TOR are Nexus 9K (C9396PX NXOS 6.1)

  Ping from VM1 to VM2 using fixed IP fails 100% of the time.
  The test was performed with VM1 and VM2 running on different compute nodes 
under TOR1. It will also probably fail if they run on the same compute node 
under TOR1.

  Other data paths where ping is working:
  VM1->TOR->router->TOR->VM2 (VM1 and VM2 are in the same rack as the network 
node and run on the same compute node)
  VM1->TOR1->Spine->TOR2->router->TOR2->VM2 (VM2 is on the same rack as the 
network node)

  When using floating IPs (i.e. going through NAT), ping with:
  - a VM in the same rack as the network node is OK:
  VM1->TOR->router->TOR->default gateway->TOR->VM2
  - at least one VM in a different rack than the network node fails

  These issues can be reproduced manually using ping or in an automated
  way using the VMTP tool.

  Versioning:
  https://github.com/cisco-openstack/neutron.git (staging/kiloplus branch)
  openstack and devstack: stable/juno

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1412903/+subscriptions



[Yahoo-eng-team] [Bug 1412900] [NEW] Cisco Nexus VxLAN ML2: UDP large packets dropped in L3 data path

2015-01-20 Thread Alec Hothan
Public bug reported:


UDP packets larger than 1428 bytes are being dropped (100% packet loss) in an 
L3 configuration.
Testbed configuration to reproduce:
- 1 TOR N9K with 1 compute node
- 1 TOR N9K with 1 network node and 1 compute node
- 1 Spine connecting the 2 TOR
Run any UDP test between VM hosted on each compute node, on different virtual 
networks.

Data path: VM1->TOR1->Spine->TOR2->router->TOR2->VM2

The UDP test always passes with any packet size <= 1428 bytes; over 1428
bytes it always fails.
This can be reproduced manually or in an automated way using the VMTP
test tool.

In an alternate configuration, where the 2 VMs reside on the same compute
node (still on different L3 networks) and in the same rack as the network
node, the UDP traffic works fine regardless of packet size.
VM1->TOR->router->TOR->VM2

This seems to indicate the packet drop is happening only when the VLAN
tag/VNI mapping is used.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: cisco ml2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1412900

Title:
  Cisco Nexus VxLAN ML2: UDP large packets dropped in L3 data path

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
  UDP packets larger than 1428 bytes are being dropped (100% packet loss) in an 
L3 configuration.
  Testbed configuration to reproduce:
  - 1 TOR N9K with 1 compute node
  - 1 TOR N9K with 1 network node and 1 compute node
  - 1 Spine connecting the 2 TOR
  Run any UDP test between VM hosted on each compute node, on different virtual 
networks.

  Data path: VM1->TOR1->Spine->TOR2->router->TOR2->VM2

  The UDP test always passes with any packet size <= 1428 bytes; over
  1428 bytes it always fails.
  This can be reproduced manually or in an automated way using the VMTP
  test tool.

  In an alternate configuration, where the 2 VMs reside on the same
  compute node (still on different L3 networks) and in the same rack as
  the network node, the UDP traffic works fine regardless of packet size.
  VM1->TOR->router->TOR->VM2

  This seems to indicate the packet drop is happening only when the VLAN
  tag/VNI mapping is used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1412900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412882] [NEW] Images: 256 character limits isn't accurate (bytes limit?)

2015-01-20 Thread Julie Pichon
Public bug reported:

The 256-character limit for image names doesn't seem quite accurate. If you
try to set a long name with multi-byte characters (fewer than 256
characters, but more than 256 bytes?) it fails.

A user reported:

1. Create a VM.
2. Create a snapshot of the VM whose name is more than 256 bytes long. For 
example: 
を引き続きご使用になるお客様はを引き続きご使用になるお客様はを引き続きご使用になるお客様はを引き続きご使用になるお客様はを引き続きご使用になるお客様はを引き続きご使用になるお客様はを引き続きご使用になるお客様はを引き続きご使用になるお客様はを引き続きご使用になるお客様は
3. Horizon fails with a generic error.

But it is actually the same with, e.g., just creating a new image (the
error returned is the generic "Unable to create new image.")

We see this in the logs:

[Tue Jan 20 16:40:45.594430 2015] [:error] [pid 17230] Recoverable error: 
[Tue Jan 20 16:40:45.594454 2015] [:error] [pid 17230]  
[Tue Jan 20 16:40:45.594458 2015] [:error] [pid 17230]   400 Bad 
Request
[Tue Jan 20 16:40:45.594460 2015] [:error] [pid 17230]  
[Tue Jan 20 16:40:45.594462 2015] [:error] [pid 17230]  
[Tue Jan 20 16:40:45.594465 2015] [:error] [pid 17230]   400 Bad 
Request
[Tue Jan 20 16:40:45.594467 2015] [:error] [pid 17230]   Image name too long: 405
[Tue Jan 20 16:40:45.594469 2015] [:error] [pid 17230] 
[Tue Jan 20 16:40:45.594472 2015] [:error] [pid 17230]  
[Tue Jan 20 16:40:45.594474 2015] [:error] [pid 17230]  (HTTP 400)

So Glance does return a more meaningful error, which we should try to
present to the user.
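
The mismatch is easy to demonstrate: Python's len() counts code points while
the backend enforces a byte limit. A sketch of the client-side check Horizon
could apply (the helper name is illustrative; the 256 limit and the sample
name are taken from the report):

```python
phrase = "を引き続きご使用になるお客様は"  # 15 code points, 45 bytes in UTF-8
name = phrase * 9                          # the snapshot name from the report

def name_too_long(name, limit=256):
    """Validate the UTF-8 byte length, which is what the backend enforces."""
    return len(name.encode("utf-8")) > limit

print(len(name))                  # 135 -- passes a naive character check
print(len(name.encode("utf-8")))  # 405 -- matches "Image name too long: 405"
print(name_too_long(name))        # True
```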

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: image

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1412882

Title:
  Images: 256 character limits isn't accurate (bytes limit?)

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The 256-character limit for image names doesn't seem quite accurate. If
  you try to set a long name with multi-byte characters (fewer than 256
  characters, but more than 256 bytes?) it fails.

  A user reported:

  1. Create a VM.
  2. Create a snapshot of the VM whose name is more than 256 bytes long. For 
example: 
を引き続きご使用になるお客様はを引き続きご使用になるお客様はを引き続きご使用になるお客様はを引き続きご使用になるお客様はを引き続きご使用になるお客様はを引き続きご使用になるお客様はを引き続きご使用になるお客様はを引き続きご使用になるお客様はを引き続きご使用になるお客様は
  3. Horizon fails with a generic error.

  But it is actually the same with, e.g., just creating a new image (the
  error returned is the generic "Unable to create new image.")

  We see this in the logs:

  [Tue Jan 20 16:40:45.594430 2015] [:error] [pid 17230] Recoverable error: 

  [Tue Jan 20 16:40:45.594454 2015] [:error] [pid 17230]  
  [Tue Jan 20 16:40:45.594458 2015] [:error] [pid 17230]   400 Bad 
Request
  [Tue Jan 20 16:40:45.594460 2015] [:error] [pid 17230]  
  [Tue Jan 20 16:40:45.594462 2015] [:error] [pid 17230]  
  [Tue Jan 20 16:40:45.594465 2015] [:error] [pid 17230]   400 Bad 
Request
  [Tue Jan 20 16:40:45.594467 2015] [:error] [pid 17230]   Image name too long: 405
  [Tue Jan 20 16:40:45.594469 2015] [:error] [pid 17230] 
  [Tue Jan 20 16:40:45.594472 2015] [:error] [pid 17230]  
  [Tue Jan 20 16:40:45.594474 2015] [:error] [pid 17230]  (HTTP 400)

  So Glance does return a more meaningful error, which we should try to
  present to the user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1412882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412870] [NEW] PEP8 issues in the templates used to create new panels

2015-01-20 Thread Christian Berendt
Public bug reported:

There are several PEP8 issues in the templates used to create new panels
in the "Tutorial: Building a Dashboard using Horizon" document.

* H304  No relative imports. 'from .views import IndexView' is a relative import
* E128 continuation line under-indented for visual indent
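
The H304 fix is to replace the relative import with an absolute one. As a
quick illustration, a sketch that detects such imports in the generated
template output (the real check lives in the hacking plugin; this regex is a
simplification, and the fixed import path below is hypothetical):

```python
import re

# Matches "from .views import ..." style relative imports flagged by H304.
RELATIVE_IMPORT = re.compile(r"^\s*from\s+\.", re.MULTILINE)

def has_relative_import(source):
    """Return True if the source text contains a relative 'from .' import."""
    return bool(RELATIVE_IMPORT.search(source))

template_line = "from .views import IndexView\n"                  # flagged
fixed_line = "from mydashboard.mypanel.views import IndexView\n"  # hypothetical
print(has_relative_import(template_line), has_relative_import(fixed_line))  # True False
```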

** Affects: horizon
 Importance: Undecided
 Assignee: Christian Berendt (berendt)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1412870

Title:
  PEP8 issues in the templates used to create new panels

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  There are several PEP8 issues in the templates used to create new
  panels in the "Tutorial: Building a Dashboard using Horizon"
  document.

  * H304  No relative imports. 'from .views import IndexView' is a relative 
import
  * E128 continuation line under-indented for visual indent

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1412870/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412855] Re: Horizon logs in with unencrypted credentials

2015-01-20 Thread Lin Hua Cheng
This is not a Horizon issue; it is a deployment issue. Perhaps an
opportunity for the Fuel installer.

The security guide already documents that HTTPS should be used:
http://docs.openstack.org/security-guide/content/ch025_web-dashboard.html

** Also affects: fuel
   Importance: Undecided
   Status: New

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1412855

Title:
  Horizon logs in with unencrypted credentials

Status in Fuel: OpenStack installer that works:
  New
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Horizon logs in with unencrypted credentials over HTTP.

  Steps:
  1) Open browser development tools.
  2) Log-in to Horizon
  3) Find POST request with "/horizon/auth/login" path.

  Request details:

  Remote Address:172.16.0.2:80
  Request URL:http://172.16.0.2/horizon/auth/login/
  Request Method:POST
  Status Code:302 FOUND
  Form Data:
  
csrfmiddlewaretoken=ulASpgYAsaikVCWsBxH6kFN2MECpaT9Y®ion=http%3A%2F%2F192.168.0.1%3A5000%2Fv2.0&username=admin&password=admin

  Actual: security settings are applied at the product deployment stage.

  Expected: use HTTPS by default to improve infrastructure security at the
  installation and deployment stage.

  Environment:
  Fuel "build_id": "2014-12-26_14-25-46","release": "6.0"
  Dashboard Version: 2014.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1412855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412841] Re: RPCFixture monkey patches private property in oslo.messaging

2015-01-20 Thread Doug Hellmann
I'm going to add the symbol to oslo.messaging, but when we remove the
namespace package entirely the unit tests will still break, so Nova's
tests should be updated to remove the mock call.

** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

** Changed in: oslo.messaging
   Status: New => Triaged

** Changed in: oslo.messaging
   Importance: Undecided => Critical

** Changed in: oslo.messaging
 Assignee: (unassigned) => Doug Hellmann (doug-hellmann)

** Changed in: oslo.messaging
Milestone: None => next-kilo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1412841

Title:
  RPCFixture monkey patches private property in oslo.messaging

Status in OpenStack Compute (Nova):
  New
Status in Messaging API for OpenStack:
  Triaged

Bug description:
  nova/tests/fixtures.py refers to a private property inside
  oslo.messaging which is moving to a new location.

  http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/fixtures.py#n240

  http://lists.openstack.org/pipermail/openstack-
  dev/2015-January/054810.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1412841/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412846] [NEW] Cannot chain a trust with a role specified by name

2015-01-20 Thread Alexander Makarov
Public bug reported:

From a comment in https://review.openstack.org/#/c/126897/

Hi! The new feature is great, but (unless I made a mistake somewhere) I
cannot create a chained trust specifying roles by "name" as opposed to
"id".

Here's a sample trust POST:
{"trust":{"trustor_user_id":"...","trustee_user_id":"...","project_id":"...","impersonation":true,"roles":[{"name":"admin"}]}}

And an accompanying traceback:

2015-01-19 17:12:36.953 4246 ERROR keystone.common.wsgi [-] 'id'
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi Traceback (most recent 
call last):
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 223, in 
__call__
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi result = 
method(context, **params)
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/controller.py", line 158, in 
inner
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi return f(self, 
context, *args, **kwargs)
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/validation/__init__.py", line 
36, in wrapper
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi return func(*args, 
**kwargs)
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/trust/controllers.py", line 163, in 
create_trust
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi redelegated_trust)
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/notifications.py", line 93, in 
wrapper
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi result = f(*args, 
**kwargs)
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/trust/core.py", line 165, in 
create_trust
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi 
self._validate_redelegation(t, trust)
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/trust/core.py", line 85, in 
_validate_redelegation
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi if not 
all(role['id'] in parent_roles for role in trust['roles']):
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/trust/core.py", line 85, in 
<genexpr>
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi if not 
all(role['id'] in parent_roles for role in trust['roles']):
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi KeyError: 'id'
2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi
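
The KeyError comes from assuming every role dict carries an 'id'; the trust
API also accepts roles given only by 'name'. A sketch of a tolerant
comparison (hypothetical helper, not the actual keystone code):

```python
def roles_subset_of_parent(trust_roles, parent_role_ids, roles_by_name):
    """True if every requested role (given by id OR name) is in the parent trust."""
    for role in trust_roles:
        role_id = role.get("id")
        if role_id is None:
            # Resolve a name-only role reference before comparing ids.
            role_id = roles_by_name.get(role.get("name"))
        if role_id not in parent_role_ids:
            return False
    return True

parent_ids = {"r1", "r2"}
by_name = {"admin": "r1"}
print(roles_subset_of_parent([{"name": "admin"}], parent_ids, by_name))  # True
print(roles_subset_of_parent([{"id": "r9"}], parent_ids, by_name))       # False
```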

** Affects: keystone
 Importance: Undecided
 Assignee: Alexander Makarov (amakarov)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Alexander Makarov (amakarov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1412846

Title:
  Cannot chain a trust with a role specified by name

Status in OpenStack Identity (Keystone):
  New

Bug description:
  From a comment in https://review.openstack.org/#/c/126897/

  Hi! The new feature is great, but (unless I made a mistake somewhere) I
  cannot create a chained trust specifying roles by "name" as opposed
  to "id".

  Here's a sample trust POST:
  
{"trust":{"trustor_user_id":"...","trustee_user_id":"...","project_id":"...","impersonation":true,"roles":[{"name":"admin"}]}}

  And an accompanying traceback:

  2015-01-19 17:12:36.953 4246 ERROR keystone.common.wsgi [-] 'id'
  2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi Traceback (most 
recent call last):
  2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 223, in 
__call__
  2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi result = 
method(context, **params)
  2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/controller.py", line 158, in 
inner
  2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi return f(self, 
context, *args, **kwargs)
  2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/validation/__init__.py", line 
36, in wrapper
  2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi return 
func(*args, **kwargs)
  2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/trust/controllers.py", line 163, in 
create_trust
  2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi redelegated_trust)
  2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/notifications.py", line 93, in 
wrapper
  2015-01-19 17:12:36.953 4246 TRACE keystone.common.wsgi

[Yahoo-eng-team] [Bug 1412841] [NEW] RPCFixture monkey patches private property in oslo.messaging

2015-01-20 Thread Doug Hellmann
Public bug reported:

nova/tests/fixtures.py refers to a private property inside
oslo.messaging which is moving to a new location.

http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/fixtures.py#n240

http://lists.openstack.org/pipermail/openstack-
dev/2015-January/054810.html

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1412841

Title:
  RPCFixture monkey patches private property in oslo.messaging

Status in OpenStack Compute (Nova):
  New

Bug description:
  nova/tests/fixtures.py refers to a private property inside
  oslo.messaging which is moving to a new location.

  http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/fixtures.py#n240

  http://lists.openstack.org/pipermail/openstack-
  dev/2015-January/054810.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1412841/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406784] Re: Can't create volume from non-raw image

2015-01-20 Thread vishal yadav
** Project changed: nova => openstack-manuals

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406784

Title:
  Can't create volume from non-raw image

Status in Cinder:
  Invalid
Status in OpenStack Manuals:
  Confirmed

Bug description:
  1. Create an image from a non-raw image file (qcow2 or vmdk is ok).
  2. Copy the image to a volume; it fails.

  Log:
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 363, in 
create_volume
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
_run_flow()
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 356, in 
_run_flow
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
flow_engine.run()
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/taskflow/utils/lock_utils.py", line 53, in 
wrapper
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher return 
f(*args, **kwargs)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py", 
line 111, in run
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
self._run()
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py", 
line 121, in _run
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
self._revert(misc.Failure())
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py", 
line 78, in _revert
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
misc.Failure.reraise_if_any(failures.values())
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/taskflow/utils/misc.py", line 558, in 
reraise_if_any
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
failures[0].reraise()
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/taskflow/utils/misc.py", line 565, in reraise
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(*self._exc_info)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py", 
line 36, in _execute_task
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher result = 
task.execute(**arguments)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py",
 line 594, in execute
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
**volume_spec)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py",
 line 556, in _create_from_image
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
image_id, image_location, image_service)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py",
 line 463, in _copy_image_to_volume
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher raise 
exception.ImageUnacceptable(ex)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher 
ImageUnacceptable: Image 92fad7ae-6439-4c69-bdf4-4c6cc5759225 is unacceptable: 
qemu-img is not installed and image is of type vmdk.  Only RAW images can be 
used if qemu-img is not installed.
  2014-12-31 07:06:09.299 2159 TRACE oslo.mess

[Yahoo-eng-team] [Bug 1408663] Re: [OSSA-2015-002] Glance still allows users to download and delete any file in glance-api server (CVE-2015-1195)

2015-01-20 Thread Tristan Cacqueray
** Changed in: ossa
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1408663

Title:
  [OSSA-2015-002] Glance still allows users to download and delete any
  file in glance-api server (CVE-2015-1195)

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance icehouse series:
  Fix Committed
Status in Glance juno series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Jin Liu reported that OSSA-2014-041 (CVE-2014-9493) only fixed the
  vulnerability for swift: and file: URI, but overlooked filesystem:
  URIs.

  Please see bug 1400966 for historical reference.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1408663/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412802] Re: copy-from broken for large files and swift

2015-01-20 Thread nikhil komawar
** Changed in: glance
   Importance: Undecided => Medium

** Also affects: glance/juno
   Importance: Undecided
   Status: New

** Also affects: glance/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1412802

Title:
  copy-from broken for large files and swift

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Glance icehouse series:
  New
Status in Glance juno series:
  New

Bug description:
  Glance may lose some image data while transferring it to the backing
  store, thus corrupting the image, when ALL of the following conditions are
  met:

  - Image is being created by copying data from a remote source (--copy-from 
CLI parameter or the appropriate API call)
  - Backing store is Swift
  - Image size is larger than the configured "swift_store_large_object_size"

  In such scenarios the last chunk stored in Swift will have a size
  significantly smaller than expected. An attempt to download the image
  will result in a checksum verification error, even though the checksum
  stored in Glance (with the image metadata) is correct, and so is the size.

  This is easily reproducible even on devstack (if the devstack is
  configured to run Swift as the Glance backend). Just decrease
  'swift_store_large_object_size' to some reasonably low value (e.g. 200
  MB) and try to copy-from any image larger than that value.
  After the upload succeeds, check the object size in Swift (by
  either summing the sizes of all the chunks or by looking at the size
  of the virtual large object) - it will be lower than expected:

  
  glance image-create --name tst --disk-format qcow2 --container-format bare 
--copy-from http://192.168.56.1:8000/F18-x86_64-cfntools.qcow2

  ...

  glance image-list
  
  +--------------------------------------+------+-------------+------------------+-----------+--------+
  | ID                                   | Name | Disk Format | Container Format | Size      | Status |
  +--------------------------------------+------+-------------+------------------+-----------+--------+
  | fc34ec49-4bd3-40dd-918f-44d3254f2ac9 | tst  | qcow2       | bare             | 536412160 | active |
  +--------------------------------------+------+-------------+------------------+-----------+--------+

  ...

  swift stat glance fc34ec49-4bd3-40dd-918f-44d3254f2ac9 --os-tenant-name service --os-username admin
         Account: AUTH_cce6e9c12fa34c63b64ef29e84861554
       Container: glance
          Object: fc34ec49-4bd3-40dd-918f-44d3254f2ac9
    Content Type: application/octet-stream
  Content Length: 509804544  <-- see, the size is different!
   Last Modified: Mon, 19 Jan 2015 15:52:18 GMT
            ETag: "6d0612f82db9a531b34d25823a45073d"
        Manifest: glance/fc34ec49-4bd3-40dd-918f-44d3254f2ac9-
   Accept-Ranges: bytes
     X-Timestamp: 1421682737.01148
      X-Trans-Id: tx01a19f7476a541808c9a1-0054bd28e1

  

  glance image-download tst --file out.qcow2
  [Errno 32] Corrupt image download. Checksum was 0eeddae1007f01b0029136d28518f538 expected 3ecddfe0787a392960d230c87a421c6a

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1412802/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412818] [NEW] DHCP agent dnsmasq driver should not check dnsmasq version at runtime

2015-01-20 Thread Ihar Hrachyshka
Public bug reported:

A blunt version check is unsafe: there are environments with
extensive backports to dnsmasq where all the needed features are
present, but the version string is still 'old'. We should not
apply such unsafe checks (and exit the agent!) in the service.
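
A safer alternative is to probe the installed binary for the feature itself
rather than parse its version string. A sketch (the probed option and the
helper are illustrative, not the agent's actual check):

```python
import subprocess

def binary_advertises(option, argv=("dnsmasq", "--help")):
    """True if running argv succeeds and its output mentions the option."""
    try:
        proc = subprocess.run(list(argv), capture_output=True,
                              text=True, check=False)
    except FileNotFoundError:
        return False  # binary not installed at all
    return option in proc.stdout

# e.g. binary_advertises("--dhcp-host") instead of comparing version strings
```

A backported dnsmasq that supports the option passes this probe even though
its version string would fail a numeric comparison.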

** Affects: neutron
 Importance: Wishlist
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress

** Changed in: neutron
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1412818

Title:
  DHCP agent dnsmasq driver should not check dnsmasq version at runtime

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  A blunt version check is unsafe: there are environments with
  extensive backports to dnsmasq where all the needed features are
  present, but the version string is still 'old'. We should not
  apply such unsafe checks (and exit the agent!) in the service.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1412818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412802] [NEW] copy-from broken for large files and swift

2015-01-20 Thread Alexander Tivelkov
Public bug reported:

Glance may lose some image data while transferring it to the backing
store, thus corrupting the image, when ALL of the following conditions are
met:

- Image is being created by copying data from a remote source (--copy-from 
CLI parameter or the appropriate API call)
- Backing store is Swift
- Image size is larger than the configured "swift_store_large_object_size"

In such scenarios the last chunk stored in Swift will have a size
significantly smaller than expected. An attempt to download the image will
result in a checksum verification error, even though the checksum stored in
Glance (with the image metadata) is correct, and so is the size.

This is easily reproducible even on devstack (if the devstack is
configured to run Swift as the Glance backend). Just decrease
'swift_store_large_object_size' to some reasonably low value (e.g. 200
MB) and try to copy-from any image larger than that value.
After the upload succeeds, check the object size in Swift (by
either summing the sizes of all the chunks or by looking at the size of
the virtual large object) - it will be lower than expected:


glance image-create --name tst --disk-format qcow2 --container-format bare 
--copy-from http://192.168.56.1:8000/F18-x86_64-cfntools.qcow2

...

glance image-list
+--------------------------------------+------+-------------+------------------+-----------+--------+
| ID                                   | Name | Disk Format | Container Format | Size      | Status |
+--------------------------------------+------+-------------+------------------+-----------+--------+
| fc34ec49-4bd3-40dd-918f-44d3254f2ac9 | tst  | qcow2       | bare             | 536412160 | active |
+--------------------------------------+------+-------------+------------------+-----------+--------+

...

swift stat glance fc34ec49-4bd3-40dd-918f-44d3254f2ac9 --os-tenant-name service --os-username admin
       Account: AUTH_cce6e9c12fa34c63b64ef29e84861554
     Container: glance
        Object: fc34ec49-4bd3-40dd-918f-44d3254f2ac9
  Content Type: application/octet-stream
Content Length: 509804544  <-- see, the size is different!
 Last Modified: Mon, 19 Jan 2015 15:52:18 GMT
          ETag: "6d0612f82db9a531b34d25823a45073d"
      Manifest: glance/fc34ec49-4bd3-40dd-918f-44d3254f2ac9-
 Accept-Ranges: bytes
   X-Timestamp: 1421682737.01148
    X-Trans-Id: tx01a19f7476a541808c9a1-0054bd28e1



glance image-download tst --file out.qcow2
[Errno 32] Corrupt image download. Checksum was 0eeddae1007f01b0029136d28518f538 expected 3ecddfe0787a392960d230c87a421c6a
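
The failure mode can be spotted by comparing the size Glance recorded against
what actually landed in Swift. A trivial consistency check using the numbers
from the report (the helper is illustrative):

```python
def chunks_consistent(glance_size, chunk_sizes):
    """True if the Swift segments add up to the size Glance recorded."""
    return sum(chunk_sizes) == glance_size

glance_size = 536412160           # Size from the Glance image metadata
swift_content_length = 509804544  # Content-Length from `swift stat`
print(chunks_consistent(glance_size, [swift_content_length]))  # False: data was lost
```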

** Affects: glance
 Importance: Undecided
 Assignee: Alexander Tivelkov (ativelkov)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Alexander Tivelkov (ativelkov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1412802

Title:
  copy-from broken for large files and swift

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Glance may lose some image data while transferring it to the backing
  store, thus corrupting the image, when ALL of the following conditions are
  met:

  - Image is being created by copying data from a remote source (--copy-from 
CLI parameter or the appropriate API call)
  - Backing store is Swift
  - Image size is larger than the configured "swift_store_large_object_size"

  In such scenarios the last chunk stored in Swift will have a size
  significantly smaller than expected. An attempt to download the image
  will result in a checksum verification error, even though the checksum
  stored in Glance (with the image metadata) is correct, and so is the size.

  This is easily reproducible even on devstack (if the devstack is
  configured to run Swift as the Glance backend). Just decrease
  'swift_store_large_object_size' to some reasonably low value (e.g. 200
  MB) and try to copy-from any image larger than that value.
  After the upload succeeds, check the object size in Swift (by
  either summing the sizes of all the chunks or by looking at the size
  of the virtual large object) - it will be lower than expected:

  
  glance image-create --name tst --disk-format qcow2 --container-format bare 
--copy-from http://192.168.56.1:8000/F18-x86_64-cfntools.qcow2

  ...

  glance image-list
  
+--+-+-+--+---++
  | ID   | Name| 
Disk Format | Container Format | Size  | Status |
  
+--+-+-+--+---++
  | fc34ec49-4bd3-40dd-918f-44d3254f2ac9 | tst |

[Yahoo-eng-team] [Bug 1412798] [NEW] Typo in section header in config silently disables all config parsing

2015-01-20 Thread George Shuklin
Public bug reported:

I know it sounds silly, but I just spent five hours trying to find out why
glance was not working with swift and was printing random errors. In the end
I found it had ignored all debug/log settings, and later I found
the source of the problem - a small typo in my config.

If the config contains '[[DEFAULT]' instead of '[DEFAULT]', glance ignores
all settings in that section (I think this is not specific to 'DEFAULT', but
'DEFAULT' is the most devastating, because it disables logging and
log locations).

Proposed solution: write a warning to stdout if the configuration file
contains no '[DEFAULT]' section.
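
A sketch of the proposed check using stdlib configparser (Glance actually
uses oslo.config, so this is illustrative only): the '[[DEFAULT]' typo parses
as a section literally named '[DEFAULT', leaving the real DEFAULT section
empty, which a startup warning could catch.

```python
import configparser

def default_section_present(config_text):
    """True if the config actually populated the DEFAULT section."""
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    return bool(parser.defaults())

good = "[DEFAULT]\ndebug = True\n"
bad = "[[DEFAULT]\ndebug = True\n"  # the typo: section is named '[DEFAULT'
print(default_section_present(good), default_section_present(bad))  # True False
```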

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1412798

Title:
  Typo in section header in config silently disables all config parsing

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  I know it sounds silly, but I just spent five hours trying to find out
  why glance was not working with swift and was printing random errors.
  Eventually I found that it had ignored all debug/log settings, and
  later I tracked the problem down to a small typo in my config.

  If the config contains '[[DEFAULT]' instead of '[DEFAULT]', glance
  silently ignores all settings in that section (I think this is not
  only for 'DEFAULT', but 'DEFAULT' is the most devastating case,
  because it disables logging and logging locations).

  Proposed solution: write a warning to stdout if the configuration
  file contains no '[DEFAULT]' section.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1412798/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412770] [NEW] Enable functional test job in openstack ci for neutron-vpnaas

2015-01-20 Thread Numan Siddique
Public bug reported:

Enable functional test job in openstack ci for neutron-vpnaas

** Affects: openstack-ci
 Importance: Undecided
 Assignee: Numan Siddique (numansiddique)
 Status: New

** Project changed: neutron => openstack-ci

** Changed in: openstack-ci
 Assignee: (unassigned) => Numan Siddique (numansiddique)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1412770

Title:
  Enable functional test job in openstack ci for neutron-vpnaas

Status in OpenStack Core Infrastructure:
  New

Bug description:
  Enable functional test job in openstack ci for neutron-vpnaas

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-ci/+bug/1412770/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412769] [NEW] Flavor ID changed from number to UUID4 when edited

2015-01-20 Thread Kyrylo Romanenko
Public bug reported:

The Flavor ID changes from numeric format to UUID4 format after making
changes to a flavor.

Steps to reproduce:

1) Login to Horizon as admin.

2) Navigate to Admin->System->Flavors.

3) See the list of Flavors.
Note that every Flavor has a numeric "ID" property.

4) Click the "Edit Flavor" button for any Flavor.

5) Make a change, for example to the flavor's Name field. Then click
"Save".

Actual: the value of the ID field changes to UUID4 format. See screenshot.

Expected: the Flavor ID is not changed.


Environment:
Fuel "build_id": "2014-12-26_14-25-46","release": "6.0"

Browsers:
Chrome: Version 39.0.2171.99 (64-bit) on Ubuntu 14.04
Firefox: 35.0 on Ubuntu 14.04

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: ux

** Attachment added: "flavors.png"
   
https://bugs.launchpad.net/bugs/1412769/+attachment/4302284/+files/flavors.png

** Description changed:

  Flavor ID changes from numeric format to UUID4 format after doing some
  changes into flavor.
  
  Steps to reproduce:
  
  1) Login to Horizon as admin.
  
  2) Navigate to Admin->System->Flavors.
  
- 3) See list of Flavors. 
+ 3) See list of Flavors.
  Note that every Flavor has column with numerical  property "ID".
  
  4) Click "Edit Flavor" button for any Flavor.
  
  5) Do some change, for example in field Name of flavor.  Then click
  "Save".
  
  Actual: value in field ID changed to UUID4 format.  See screenshot.
  
  Expected: Flavor ID not intended to be changed.
+ 
+ 
+ Environment:
+ Fuel "build_id": "2014-12-26_14-25-46","release": "6.0"
+ 
+ Browsers:
+ Chrome: Version 39.0.2171.99 (64-bit) on Ubuntu 14.04
+ Firefox: 35.0 on Ubuntu 14.04

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1412769

Title:
  Flavor ID changed from number to UUID4 when edited

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Flavor ID changes from numeric format to UUID4 format after making
  changes to a flavor.

  Steps to reproduce:

  1) Login to Horizon as admin.

  2) Navigate to Admin->System->Flavors.

  3) See the list of Flavors.
  Note that every Flavor has a numeric "ID" property.

  4) Click the "Edit Flavor" button for any Flavor.

  5) Make a change, for example to the flavor's Name field. Then click
  "Save".

  Actual: the value of the ID field changes to UUID4 format. See screenshot.

  Expected: the Flavor ID is not changed.

  
  Environment:
  Fuel "build_id": "2014-12-26_14-25-46","release": "6.0"

  Browsers:
  Chrome: Version 39.0.2171.99 (64-bit) on Ubuntu 14.04
  Firefox: 35.0 on Ubuntu 14.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1412769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412750] [NEW] Horizon does not detect image type in Admin menu

2015-01-20 Thread Oleksiy Butenko
Public bug reported:

When creating an image from the Admin menu, Horizon does not detect the
image type automatically (as it does in Project->Images).

Steps:
1.  Login to Horizon
2.  Go to Admin->Images
3.  Click "Create Image"
4.  Specify "Name"
5.  Specify "Image Location" 
[http://archive.ubuntu.com/ubuntu/dists/utopic/main/installer-i386/current/images/netboot/mini.iso]

Expected result: Horizon detects the image type correctly.

Actual result: Horizon does not detect the image type; it must be
specified manually.
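
The Project->Images auto-detection is essentially filename-extension
based (done client-side in JavaScript). A hedged Python sketch of the
idea — the mapping and function here are illustrative, not Horizon's
actual code:

```python
import os

# Illustrative extension -> disk-format table; not Horizon's actual mapping.
EXT_TO_DISK_FORMAT = {
    '.iso': 'iso', '.qcow2': 'qcow2', '.raw': 'raw', '.img': 'raw',
    '.vmdk': 'vmdk', '.vhd': 'vhd', '.vdi': 'vdi',
    '.aki': 'aki', '.ari': 'ari', '.ami': 'ami',
}

def guess_disk_format(location):
    """Guess a disk format from the file extension of an image URL."""
    # Strip any query string or fragment before looking at the extension.
    path = location.split('?', 1)[0].split('#', 1)[0]
    return EXT_TO_DISK_FORMAT.get(os.path.splitext(path)[1].lower())

print(guess_disk_format(
    'http://archive.ubuntu.com/ubuntu/dists/utopic/main/installer-i386/'
    'current/images/netboot/mini.iso'))  # iso
```

Reusing the same lookup for the Admin->Images form would make the two
dialogs behave consistently.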

MOS  release: "6.0"
build_number: "58"
build_id: "2014-12-26_14-25-46"


Browsers:
Chrome: Version 39.0.2171.99 (64-bit) on Ubuntu 14.04
Firefox: 35.0 on Ubuntu 14.04

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: horizon-core images

** Attachment added: "create_image.png"
   
https://bugs.launchpad.net/bugs/1412750/+attachment/4302265/+files/create_image.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1412750

Title:
  Horizon does not detect image type in Admin menu

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When creating an image from the Admin menu, Horizon does not detect
  the image type automatically (as it does in Project->Images).

  Steps:
  1.  Login to Horizon
  2.  Go to Admin->Images
  3.  Click "Create Image"
  4.  Specify "Name"
  5.  Specify "Image Location" 
[http://archive.ubuntu.com/ubuntu/dists/utopic/main/installer-i386/current/images/netboot/mini.iso]

  Expected result: Horizon detects the image type correctly.

  Actual result: Horizon does not detect the image type; it must be
  specified manually.

  MOS  release: "6.0"
  build_number: "58"
  build_id: "2014-12-26_14-25-46"

  
  Browsers:
  Chrome: Version 39.0.2171.99 (64-bit) on Ubuntu 14.04
  Firefox: 35.0 on Ubuntu 14.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1412750/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412749] [NEW] Horizon UI with JavaScript disabled

2015-01-20 Thread Kyrylo Romanenko
Public bug reported:

Steps to reproduce:

1) Disable JavaScript in browser settings.
2) Navigate to Horizon.
3) Login to Horizon Dashboard.

Actual: the Horizon Dashboard looks normal, but most of its
functionality is broken.

Expected: a warning telling the user to enable JavaScript.


Environment:
{"build_id": "2014-12-26_14-25-46", "ostf_sha": 
"a9afb68710d809570460c29d6c3293219d3624d4", "build_number": "58", 
"auth_required": true, "api": "1.0", "nailgun_sha": 
"5f91157daa6798ff522ca9f6d34e7e135f150a90", "production": "docker", 
"fuelmain_sha": "81d38d6f2903b5a8b4bee79ca45a54b76c1361b8", "astute_sha": 
"16b252d93be6aaa73030b8100cf8c5ca6a970a91", "feature_groups": ["mirantis"], 
"release": "6.0", "release_versions": {"2014.2-6.0": {"VERSION": {"build_id": 
"2014-12-26_14-25-46", "ostf_sha": "a9afb68710d809570460c29d6c3293219d3624d4", 
"build_number": "58", "api": "1.0", "nailgun_sha": 
"5f91157daa6798ff522ca9f6d34e7e135f150a90", "production": "docker", 
"fuelmain_sha": "81d38d6f2903b5a8b4bee79ca45a54b76c1361b8", "astute_sha": 
"16b252d93be6aaa73030b8100cf8c5ca6a970a91", "feature_groups": ["mirantis"], 
"release": "6.0", "fuellib_sha": "fde8ba5e11a1acaf819d402c645c731af450aff0"}}}, 
"fuellib_sha": "fde8ba5e11a1acaf819d402c645c731af450aff0"}

Browsers:
Chrome: Version 39.0.2171.99 (64-bit) on Ubuntu 14.04
Firefox: 35.0 on Ubuntu 14.04

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1412749

Title:
  Horizon UI with JavaScript disabled

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:

  1) Disable JavaScript in browser settings.
  2) Navigate to Horizon.
  3) Login to Horizon Dashboard.

  Actual: the Horizon Dashboard looks normal, but most of its
  functionality is broken.

  Expected: a warning telling the user to enable JavaScript.


  Environment:
  {"build_id": "2014-12-26_14-25-46", "ostf_sha": 
"a9afb68710d809570460c29d6c3293219d3624d4", "build_number": "58", 
"auth_required": true, "api": "1.0", "nailgun_sha": 
"5f91157daa6798ff522ca9f6d34e7e135f150a90", "production": "docker", 
"fuelmain_sha": "81d38d6f2903b5a8b4bee79ca45a54b76c1361b8", "astute_sha": 
"16b252d93be6aaa73030b8100cf8c5ca6a970a91", "feature_groups": ["mirantis"], 
"release": "6.0", "release_versions": {"2014.2-6.0": {"VERSION": {"build_id": 
"2014-12-26_14-25-46", "ostf_sha": "a9afb68710d809570460c29d6c3293219d3624d4", 
"build_number": "58", "api": "1.0", "nailgun_sha": 
"5f91157daa6798ff522ca9f6d34e7e135f150a90", "production": "docker", 
"fuelmain_sha": "81d38d6f2903b5a8b4bee79ca45a54b76c1361b8", "astute_sha": 
"16b252d93be6aaa73030b8100cf8c5ca6a970a91", "feature_groups": ["mirantis"], 
"release": "6.0", "fuellib_sha": "fde8ba5e11a1acaf819d402c645c731af450aff0"}}}, 
"fuellib_sha": "fde8ba5e11a1acaf819d402c645c731af450aff0"}

  Browsers:
  Chrome: Version 39.0.2171.99 (64-bit) on Ubuntu 14.04
  Firefox: 35.0 on Ubuntu 14.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1412749/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412742] [NEW] tests randomly hang in oslo.messaging fake driver

2015-01-20 Thread Daniel Berrange
Public bug reported:

The following change has been getting random failures in the py27 test
job where it would time out after 50 minutes.

https://review.openstack.org/#/c/140725/

eg this test run:

http://logs.openstack.org/25/140725/11/gate/gate-nova-python27/0dabb50/

Strangely, other changes going into the gate were not affected, and
there's no obvious sign of what in this change would cause such a hang
problem.

Running all the changed tests in isolation never showed a hang. Only
when the entire nova test suite was run would it hang, so it seems like
some kind of race condition is hiding there. Probably the fact that this
change deletes some tests in one file and adds tests in another file
makes the race more likely to occur.

Reproducing the hang locally and then attaching to the process with GDB
shows the following trace, which ends in the oslo.messaging fake driver:

(gdb) py-bt
#3 Frame 0x8b71dd0, for file 
/home/berrange/src/cloud/nova/.tox/py27/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_fake.py,
 line 149, in get_exchange 
(self=, _exchanges={'nova': 
, 
_RLock__count=0) at remote 0x7667a10>, name='nova') at remote 0x7667910>}, 
_default_exchange='nova') at remote 0x954c850>, name='nova')
return self._exchanges.setdefault(name, FakeExchange(name))
#6 Frame 0xf2281d0, for file 
/home/berrange/src/cloud/nova/.tox/py27/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_fake.py,
 line 63, in poll (self=, ], 
driver=, _exchanges={'nova': 
, 
_RLock__count=0) at remote 0x7667a10>, name='nova') at remote 0x7667910>}, 
_default_exchange='nova') at remote 0x954c850>, _ur
 l=, 
this_exc_message=u'', cur_time=1421749084)
return infunc(*args, **kwargs)
#20 Frame 0x102669a0, for file 
/home/berrange/src/cloud/nova/.tox/py27/lib/python2.7/site-packages/eventlet/greenthread.py,
 line 214, in main (self=, _exit_event=, 
_waiters=set([]), _exc=None) at remote 0x7667150>, _resolving_links=False) at 
remote 0xe1d6eb0>, function=, args=(), kwargs={})
result = function(*args, **kwargs)
#31 Frame 0x84deca0, for file 
/home/berrange/src/cloud/nova/.tox/py27/lib/python2.7/site-packages/eventlet/hubs/timer.py,
 line 58, in __call__ (self=, 
tpl=(, (), 
{}), called=True) at remote 0xcedeb90>, args=(...), cb=, kw={...})
cb(*args, **kw)
#41 Frame 0xf1e5ff0, for file 
/home/berrange/src/cloud/nova/.tox/py27/lib/python2.7/site-packages/eventlet/hubs/hub.py,
 line 457, in fire_timers (self=, debug_exceptions=True, debug_blocking_resolution=1, modify=, running=True, 
debug_blocking=False, listeners={'read': {11: , 
spent=False, greenlet=, _exit_event=, 
_waiters=set([]), _exc=None) at remote 0x10938950>, _resolving_links=False) at 
remote 0xe1d6c30>, evtype='read', mark_as_closed=, tb=) at remote 0x10938cd0>, 13: , spent=
 False, greenlet=, 
debug_exceptions=True, debug_blocking_resolution=1, modify=, running=True, 
debug_blocking=False, listeners={'read': {11: , 
spent=False, greenlet=, _exit_event=, 
_waiters=set([]), _exc=None) at remote 0x10938950>, _resolving_links=False) at 
remote 0xe1d6c30>, evtype='read', mark_as_closed=, tb=) at remote 0x10938cd0>, 13: , spent=False, g
 reenlet=
sys.exit(main())
  File 
"/home/berrange/src/cloud/nova/.tox/py27/lib/python2.7/site-packages/tempest_lib/cmd/subunit_trace.py",
 line 257, in main
stream.run(result)
  File 
"/home/berrange/src/cloud/nova/.tox/py27/lib/python2.7/site-packages/subunit/v2.py",
 line 270, in run
content = self.source.read(1)
KeyboardInterrupt
ERROR: KEYBOARDINTERRUPT
interrupted
ERROR: keyboardinterrupt


Again, running that test case in isolation never fails.


Seeing this recent commit

commit 2809fab96c4b48f2a979965b3f4788dfb1379977
Author: Sean Dague 
Date:   Fri Jan 16 14:41:24 2015 -0500

increase fake rpc POLL_TIMEOUT to 0.1s

The 0.001s POLL_TIMEOUT appeared related to an intermitent race
issue. The polling loop in the fake driver in oslo.messaging is 0.05s,
so increase to 0.1 total time out to make sure we get at least one
full polling cycle on waits.

Change-Id: I24a83638df0c8f3f84bb528333a91491a98016a3

suggests to me that the problem could be related to the fact that the
nova.tests.unit.virt.libvirt.test_volume.LibvirtVolumeTestCase.test_libvirt_kvm_iser_volume_with_multipath_getmpdev
test stubs out time.sleep to be a no-op.
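
A hedged sketch of why a no-op time.sleep interacts badly with a timed
polling loop like the fake driver's poll() — this is illustrative code,
not oslo.messaging's actual implementation: with the real sleep the loop
yields between polls, while with a stubbed sleep it busy-spins, and under
eventlet's cooperative scheduling a greenthread that never sleeps never
yields, so the producer it is waiting on can starve.

```python
import time

def poll(queue, timeout, sleep=time.sleep, clock=time.monotonic):
    """Wait up to `timeout` seconds for an item, polling every 50 ms."""
    deadline = clock() + timeout
    while True:
        if queue:
            return queue.pop(0)
        if clock() >= deadline:
            return None
        # With `sleep` stubbed to a no-op this becomes a busy-spin; under
        # eventlet it also stops yielding to other greenthreads.
        sleep(0.05)

assert poll([1], 0.2) == 1                           # item available
assert poll([], 0.05) is None                        # clean timeout
assert poll([], 0.05, sleep=lambda s: None) is None  # busy-spins to timeout
```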

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1412742

Title:
  tests randomly hang in oslo.messaging fake driver

Status in OpenStack Compute (Nova):
  New

Bug description:
  The following change has been getting random failures in the py27 test
  job where it would time out after 50 minutes.

  https://review.openstack.org/#/c/140725/

  eg this test run:

  http://logs.openstack.org/25/140725/11/gate/gate-nova-
  python27/0dabb50/

  Strangely other stuff going 

[Yahoo-eng-team] [Bug 1412738] [NEW] dnsmasq version check should set C locale

2015-01-20 Thread Philipp Marek
Public bug reported:

From my logfiles (shortened, redacted):

2015-01-20 11:22:36.853 DEBUG neutron.agent.linux.utils [] Running command:
  ['dnsmasq', '--version'] from (pid=3481) create_process
  /opt/stack/neutron/neutron/agent/linux/utils.py:46
2015-01-20 11:22:37.052 DEBUG neutron.agent.linux.utils [] 
  Command: ['dnsmasq', '--version']
  Exit code: 0  
  Stdout: 'Dnsmasq Version 2.68  Copyright (c) 2000-2013 Simon Kelley\n
   Compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6
 no-Lua TFTP conntrack ipset auth\n\n
   F\xc3\xbcr diese Software wird ABSOLUT KEINE GARANTIE gew\xc3\xa4hrt.\n
   Dnsmasq ist freie Software, und du bist willkommen es weiter zu verteilen\n
   unter den Bedingungen der GNU General Public Lizenz, Version 2 oder 3.\n'
  Stderr: '' from (pid=3481) execute 
/opt/stack/neutron/neutron/agent/linux/utils.py:79  
2015-01-20 11:22:37.082 ERROR neutron.agent.linux.dhcp []
  FAILED VERSION REQUIREMENT FOR DNSMASQ.
  Please ensure that its version is 2.67 or above!

I guess that my locale (that was used on ./rejoin-stack) interfered with the
version detection.
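
A hedged sketch of the proposed fix — run the child process with the C
locale so the banner is never translated, then parse the version. The
names here are illustrative, not neutron's actual code (the real agent
runs the command through its own utils wrapper):

```python
import os
import re
import subprocess

def parse_dnsmasq_version(banner):
    """Extract the numeric version from the 'dnsmasq --version' banner."""
    m = re.search(r'Dnsmasq version (\d+\.\d+)', banner, re.IGNORECASE)
    return float(m.group(1)) if m else None

def get_dnsmasq_version():
    # Force the C locale so the output is not localized (e.g. the German
    # banner in the log above).
    env = dict(os.environ, LC_ALL='C')
    out = subprocess.check_output(['dnsmasq', '--version'], env=env)
    return parse_dnsmasq_version(out.decode('utf-8', 'replace'))

banner = 'Dnsmasq Version 2.68  Copyright (c) 2000-2013 Simon Kelley'
assert parse_dnsmasq_version(banner) == 2.68
```

(A float comparison works for this particular check but would misorder
e.g. 2.9 vs 2.10; comparing a tuple of ints is safer in real code.)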


My devstack has neutron at

commit 2c94b10d741f3a9d96b548cc051850467468c8de
Merge: b3ae161 c7e533c
Author: Jenkins 
Date:   Wed Jan 14 00:38:28 2015 +

Merge "Allow IptablesManager to manage mangle table"

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1412738

Title:
  dnsmasq version check should set C locale

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  From my logfiles (shortened, redacted):

  2015-01-20 11:22:36.853 DEBUG neutron.agent.linux.utils [] Running command:
['dnsmasq', '--version'] from (pid=3481) create_process
/opt/stack/neutron/neutron/agent/linux/utils.py:46
  2015-01-20 11:22:37.052 DEBUG neutron.agent.linux.utils [] 
Command: ['dnsmasq', '--version']
Exit code: 0  
Stdout: 'Dnsmasq Version 2.68  Copyright (c) 2000-2013 Simon Kelley\n
 Compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6
   no-Lua TFTP conntrack ipset auth\n\n
 F\xc3\xbcr diese Software wird ABSOLUT KEINE GARANTIE gew\xc3\xa4hrt.\n
 Dnsmasq ist freie Software, und du bist willkommen es weiter zu verteilen\n
 unter den Bedingungen der GNU General Public Lizenz, Version 2 oder 3.\n'
Stderr: '' from (pid=3481) execute 
/opt/stack/neutron/neutron/agent/linux/utils.py:79  
  2015-01-20 11:22:37.082 ERROR neutron.agent.linux.dhcp []
FAILED VERSION REQUIREMENT FOR DNSMASQ.
Please ensure that its version is 2.67 or above!

  I guess that my locale (that was used on ./rejoin-stack) interfered with the
  version detection.

  
  My devstack has neutron at

  commit 2c94b10d741f3a9d96b548cc051850467468c8de
  Merge: b3ae161 c7e533c
  Author: Jenkins 
  Date:   Wed Jan 14 00:38:28 2015 +

  Merge "Allow IptablesManager to manage mangle table"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1412738/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1409733] Re: adopt namespace-less oslo imports

2015-01-20 Thread Numan Siddique
** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => Numan Siddique (numansiddique)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1409733

Title:
  adopt namespace-less oslo imports

Status in Cinder:
  New
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  New
Status in Python client library for Neutron:
  In Progress

Bug description:
  Oslo is migrating from oslo.* namespace to separate oslo_* namespaces
  for each library: https://blueprints.launchpad.net/oslo-
  incubator/+spec/drop-namespace-packages

  We need to adopt the new paths in neutron. Specifically, for
  oslo.config, oslo.middleware, oslo.i18n, oslo.serialization,
  oslo.utils.
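
During the transition many projects used a try/except dual-import
pattern so the code runs against either namespace. A hedged sketch,
demonstrated with stdlib modules standing in for the oslo libraries
(which may not be installed here):

```python
import importlib

def import_first_available(*names):
    """Return the first importable module from `names`."""
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError('none of %r could be imported' % (names,))

# Real usage would be, for example:
#   cfg = import_first_available('oslo_config.cfg', 'oslo.config.cfg')
# Demonstrated here with stdlib stand-ins:
mod = import_first_available('not_a_real_module', 'json')
print(mod.__name__)  # json
```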

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1409733/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412727] [NEW] LOG.warn(msg, xxx) can't be found by pep8 check

2015-01-20 Thread jichenjc
Public bug reported:

Nova has a check for LOG.warn (because of a Python 3 compatibility
issue), but it did not include a check for the LOG.warn(msg, xxx)
format,

so the following violation is not caught:
 msg = _('abc')
 LOG.warn(msg, xxx)
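
Hacking-style checks receive one logical line at a time; a hedged sketch
of a checker that also flags the two-argument form (the check name and
message are illustrative, not nova's actual hacking rule):

```python
import re

# Matches any LOG.warn( call regardless of how many arguments follow,
# so LOG.warn(msg), LOG.warn(msg, xxx), etc. are all flagged.
LOG_WARN_RE = re.compile(r'\bLOG\.warn\s*\(')

def check_no_log_warn(logical_line):
    """Hacking-style check: yield an (offset, message) pair per violation."""
    m = LOG_WARN_RE.search(logical_line)
    if m:
        yield m.start(), "Nxxx: LOG.warn is deprecated; use LOG.warning"

assert list(check_no_log_warn("LOG.warn(msg, xxx)"))
assert not list(check_no_log_warn("LOG.warning(msg)"))
```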

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1412727

Title:
  LOG.warn(msg, xxx) can't be found by pep8 check

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Nova has a check for LOG.warn (because of a Python 3 compatibility
  issue), but it did not include a check for the LOG.warn(msg, xxx)
  format,

  so the following violation is not caught:
   msg = _('abc')
   LOG.warn(msg, xxx)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1412727/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1409733] Re: adopt namespace-less oslo imports

2015-01-20 Thread venkata anil
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1409733

Title:
  adopt namespace-less oslo imports

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  New
Status in Python client library for Neutron:
  In Progress

Bug description:
  Oslo is migrating from oslo.* namespace to separate oslo_* namespaces
  for each library: https://blueprints.launchpad.net/oslo-
  incubator/+spec/drop-namespace-packages

  We need to adopt the new paths in neutron. Specifically, for
  oslo.config, oslo.middleware, oslo.i18n, oslo.serialization,
  oslo.utils.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1409733/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1409733] Re: adopt namespace-less oslo imports

2015-01-20 Thread Masco Kaliyamoorthy
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1409733

Title:
  adopt namespace-less oslo imports

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in Python client library for Neutron:
  In Progress

Bug description:
  Oslo is migrating from oslo.* namespace to separate oslo_* namespaces
  for each library: https://blueprints.launchpad.net/oslo-
  incubator/+spec/drop-namespace-packages

  We need to adopt the new paths in neutron. Specifically, for
  oslo.config, oslo.middleware, oslo.i18n, oslo.serialization,
  oslo.utils.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1409733/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1210483] Re: ServerAddressesTestXML.test_list_server_addresses FAIL

2015-01-20 Thread Ken'ichi Ohmichi
The XML API tests have already been removed, so this bug no longer occurs.

** Changed in: nova
   Status: Incomplete => Fix Committed

** Changed in: nova
   Status: Fix Committed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1210483

Title:
  ServerAddressesTestXML.test_list_server_addresses FAIL

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  2013-08-09 09:05:28.205 | 
==
  2013-08-09 09:05:28.220 | FAIL: 
tempest.api.compute.servers.test_server_addresses.ServerAddressesTestXML.test_list_server_addresses[gate,smoke]
  2013-08-09 09:05:28.231 | 
--
  2013-08-09 09:05:28.232 | _StringException: Traceback (most recent call last):
  2013-08-09 09:05:28.232 |   File 
"/opt/stack/new/tempest/tempest/api/compute/servers/test_server_addresses.py", 
line 56, in test_list_server_addresses
  2013-08-09 09:05:28.232 | self.assertTrue(len(addresses) >= 1)
  2013-08-09 09:05:28.233 |   File "/usr/lib/python2.7/unittest/case.py", line 
420, in assertTrue
  2013-08-09 09:05:28.233 | raise self.failureException(msg)
  2013-08-09 09:05:28.233 | AssertionError: False is not true

  http://logs.openstack.org/80/38980/2/check/gate-tempest-devstack-vm-
  neutron/1e116fa/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1210483/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp