[Yahoo-eng-team] [Bug 1421528] [NEW] creating a network: the larger the network, the longer it takes

2015-02-12 Thread kaka
Public bug reported:

released version:
  Icehouse nova-network

description:
  when I create a network (CIDR 10.0.0.0/16), it raises a "MessagingTimeout"
error

reason:
  when a network is created, nova-conductor inserts every address into the
database (table "fixed_ips"); this takes a long time when the network is very
large.
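For scale, a rough illustration (assuming the reported CIDR) of why the conductor call can outlive the messaging timeout, plus a hypothetical batching helper; neither is nova's actual code:

```python
import ipaddress

# The reported network: a /16 expands to 65,536 addresses, each of which
# nova-network inserts as a row into the "fixed_ips" table at create time.
net = ipaddress.ip_network("10.0.0.0/16")
print(net.num_addresses)  # 65536

# Hypothetical mitigation sketch: insert the rows in bounded batches so no
# single database operation holds up the RPC reply for too long.
def batches(seq, size):
    """Yield successive slices of `seq` containing at most `size` items."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

addrs = [str(a) for a in net]
print(sum(1 for _ in batches(addrs, 1000)))  # 66
```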

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1421528

Title:
  creating a network: the larger the network, the longer it takes

Status in OpenStack Compute (Nova):
  New

Bug description:
  released version:
Icehouse nova-network

  description:
    when I create a network (CIDR 10.0.0.0/16), it raises a "MessagingTimeout"
error

  reason:
    when a network is created, nova-conductor inserts every address into the
database (table "fixed_ips"); this takes a long time when the network is very
large.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1421528/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399525] Re: Juno: port update with no security group makes tenant VMs not accessible.

2015-02-12 Thread venkata anil
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399525

Title:
  Juno: port update with no security group makes tenant VMs not
  accessible.

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Setup:
  Ubuntu 14.04

  Steps to reproduce:

  1. Create a working Juno setup (single-node devstack, Ubuntu 14.04 server).
  2. Create a custom security group "test" with ICMP ingress allowed.
  3. Create a network with a subnet to spawn a tenant VM on.
  4. Spawn a tenant VM with the created security group and network.
  5. Ensure the VM can be pinged from the DHCP namespace.
  6. Create a floating IP and associate it with the VM port.
  7. Try to ping the VM from the public network (i.e. the floating subnet) <==
the VM is pingable, since ufw is disabled and an ICMP rule is associated with
the port.
  8. Update the VM port with no security groups, then try to ping the VM's
floating IP.
  9. The VM's IP no longer responds to ping, but it should, because the VM
port is unplugged from the OVS firewall driver and falls under the system
iptables.

  Expected: the ping should succeed because ufw is disabled on the compute
node.
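The steps above map roughly onto the Juno-era CLI like this (names, IDs, and flavors are illustrative assumptions, not taken from the report):

```shell
# 2-3: security group with ICMP ingress, plus a network/subnet for the VM
neutron security-group-create test
neutron security-group-rule-create --protocol icmp --direction ingress test
neutron net-create demo-net
neutron subnet-create demo-net 192.168.1.0/24 --name demo-subnet

# 4: boot the VM on that network with the custom security group
nova boot --flavor m1.small --image cirros --nic net-id=<demo-net-id> \
    --security-groups test demo-vm

# 6: floating IP on the external network, associated with the VM port
neutron floatingip-create public
neutron floatingip-associate <floatingip-id> <vm-port-id>

# 8: the step that triggers the bug - strip all security groups from the port
neutron port-update <vm-port-id> --no-security-groups
```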

  Reference:
  port_id: bd89a24b-eeaf-41f6-a97b-54d65263052d
  VM_id: 392b62a1-dd75-4d23-9296-978ef4630caf
  Sec_group: d6c08ecf-eb66-410d-a763-75f9a707fd89

  IP-TABLE:

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1399525/+subscriptions



[Yahoo-eng-team] [Bug 1321207] Re: test_network_basic_ops.TestNetworkBasicOps.test_hotplug_nic fails

2015-02-12 Thread Armando Migliaccio
https://review.openstack.org/#/c/155626/

** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: neutron
   Importance: High => Low

** Changed in: tempest
   Status: New => In Progress

** Changed in: tempest
 Assignee: (unassigned) => Armando Migliaccio (armando-migliaccio)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1321207

Title:
  test_network_basic_ops.TestNetworkBasicOps.test_hotplug_nic fails

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in Tempest:
  In Progress

Bug description:
  tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_hotplug_nic
  fails with "NetworkInUseClient: Unable to complete operation on
  network 1a3a6a7c-d3a6-4790-b04a-e57b20c927ba. There are one or more
  ports still in use on the network."

  The test failed once as follows:

  traceback-1: {{{
  Traceback (most recent call last):
    File "tempest/scenario/test_network_basic_ops.py", line 98, in 
cleanup_wrapper
  self.cleanup_resource(resource, self.__class__.__name__)
    File "tempest/scenario/manager.py", line 114, in cleanup_resource
  resource.delete()
    File "tempest/api/network/common.py", line 55, in delete
  self.client.delete_network(self.id)
    File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 101, in with_params
  ret = self.function(instance, *args, **kwargs)
    File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 355, in delete_network
  return self.delete(self.network_path % (network))
    File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 1311, in delete
  headers=headers, params=params)
    File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 1300, in retry_request
  headers=headers, params=params)
    File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 1243, in do_request
  self._handle_fault_response(status_code, replybody)
    File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 1211, in _handle_fault_response
  exception_handler_v20(status_code, des_error_body)
    File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 68, in exception_handler_v20
  status_code=status_code)
  NetworkInUseClient: Unable to complete operation on network 
1a3a6a7c-d3a6-4790-b04a-e57b20c927ba. There are one or more ports still in use 
on the network.
  }}}

  Traceback (most recent call last):
    File "tempest/scenario/test_network_basic_ops.py", line 98, in 
cleanup_wrapper
  self.cleanup_resource(resource, self.__class__.__name__)
    File "tempest/scenario/manager.py", line 114, in cleanup_resource
  resource.delete()
    File "tempest/api/network/common.py", line 79, in delete
  self.client.delete_subnet(self.id)
    File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 101, in with_params
  ret = self.function(instance, *args, **kwargs)
    File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 381, in delete_subnet
  return self.delete(self.subnet_path % (subnet))
    File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 1311, in delete
  headers=headers, params=params)
    File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 1300, in retry_request
  headers=headers, params=params)
    File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 1243, in do_request
  self._handle_fault_response(status_code, replybody)
    File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 1211, in _handle_fault_response
  exception_handler_v20(status_code, des_error_body)
    File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 68, in exception_handler_v20
  status_code=status_code)
  Conflict: Unable to complete operation on subnet 
82c00781-ed44-4c2d-85b9-0ce8c9a49c08. One or more ports have an IP allocation 
from this subnet.

  In screen-q-agt.txt.gz, you can find a message that seems related to
  the failure:

  2014-05-19 09:06:18.065 10379 WARNING neutron.agent.linux.ovs_lib [-]
  Found not yet ready openvswitch port: [u'tap81bfbadf-8a', [u'map', [[u
  'attached-mac', u'fa:16:3e:1a:02:d1'], [u'iface-id', u'81bfbadf-
  8ab5-4bd6-94dd-f855d8c9813f'], [u'iface-status', u'active']]],
  [u'set', []]]

  Other similar messages are spread throughout the log file, including
  the following error:

  2014-05-19 08:51:32.439 10379 ERROR neutron.agent.linux.ovs_lib [-]
  Interface tap182c3e81-02 not found.

  All the logs can be found at: 
http://logs.openstack.org/24/90724/4/check/check-tempest-dsvm-neutron/0efa4aa/logs/
  The patch that failed is: htt
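For context, the failure mode is a cleanup-ordering one: a network (or subnet) cannot be deleted while ports still reference it. A minimal sketch of the safe ordering, written against hypothetical neutronclient-style method names rather than tempest's actual cleanup code:

```python
def safe_network_cleanup(client, network_id):
    """Delete the ports on a network before its subnets and the network
    itself, so the final deletes do not fail with NetworkInUseClient or
    the subnet IP-allocation Conflict seen in the tracebacks above."""
    # 1. Ports hold IP allocations; remove them first.
    for port in client.list_ports(network_id=network_id)['ports']:
        client.delete_port(port['id'])
    # 2. With no allocations left, the subnets can go.
    for subnet_id in client.show_network(network_id)['network']['subnets']:
        client.delete_subnet(subnet_id)
    # 3. Finally the now-empty network.
    client.delete_network(network_id)
```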

[Yahoo-eng-team] [Bug 1421497] [NEW] Gateway clear generates a TRACE - AttributeError in get_int_device_name in DVR routers

2015-02-12 Thread Swaminathan Vasudevan
Public bug reported:

A recent change in the agent code has introduced this problem.

When the gateway is cleared from the router, even though there are no existing
floating IPs, the "external_gateway_removed" function in "agent.py" calls
"process_floatingips".
That may be the reason for this failure.
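A minimal sketch of the kind of defensive check that would avoid the AttributeError in the second traceback below (the attribute names follow the traceback; the guard itself is an assumption, not the merged fix):

```python
def get_external_device_interface_name(ri):
    """Return the FIP namespace's internal device name for this router,
    or None when the router has no FIP namespace (e.g. the gateway was
    cleared while no floating IPs existed)."""
    if ri.fip_ns is None:
        # Without this guard, ri.fip_ns.get_int_device_name(...) raises
        # AttributeError: 'NoneType' object has no attribute
        # 'get_int_device_name', as in the trace below.
        return None
    return ri.fip_ns.get_int_device_name(ri.router_id)
```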


Stderr: RTNETLINK answers: No such process

2015-02-11 23:12:15.307 2809 ERROR neutron.agent.l3.dvr [-] DVR: removed snat 
failed
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr Traceback (most recent 
call last):
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr   File 
"/opt/stack/new/neutron/neutron/agent/l3/dvr.py", line 197, in 
_snat_redirect_remove
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr 
ns_ipd.route.delete_gateway(table=snat_idx)
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 415, in 
delete_gateway
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr self._as_root(*args)
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 253, in _as_root
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr 
kwargs.get('use_root_namespace', False))
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 83, in _as_root
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr 
log_fail_as_error=self.log_fail_as_error)
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 95, in _execute
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr 
log_fail_as_error=log_fail_as_error)
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr   File 
"/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 83, in execute
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr raise 
RuntimeError(m)
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr RuntimeError: 
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr Command: ['sudo', 
'/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'qrouter-1cfe7654-a669-4f73-a21d-d5110d7c0297', 'ip', 'route', 'del', 
'default', 'dev', 'qr-467e8832-93', 'table', '547711270']
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr Exit code: 2
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr Stdout: 
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr Stderr: RTNETLINK 
answers: No such process
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr 
2015-02-11 23:12:15.307 2809 TRACE neutron.agent.l3.dvr 
2015-02-11 23:12:18.846 2809 ERROR neutron.agent.l3.agent [-] 'NoneType' object 
has no attribute 'get_int_device_name'
2015-02-11 23:12:18.846 2809 TRACE neutron.agent.l3.agent Traceback (most 
recent call last):
2015-02-11 23:12:18.846 2809 TRACE neutron.agent.l3.agent   File 
"/opt/stack/new/neutron/neutron/common/utils.py", line 342, in call
2015-02-11 23:12:18.846 2809 TRACE neutron.agent.l3.agent return 
func(*args, **kwargs)
2015-02-11 23:12:18.846 2809 TRACE neutron.agent.l3.agent   File 
"/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 602, in process_router
2015-02-11 23:12:18.846 2809 TRACE neutron.agent.l3.agent 
self._process_external(ri)
2015-02-11 23:12:18.846 2809 TRACE neutron.agent.l3.agent   File 
"/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 565, in 
_process_external
2015-02-11 23:12:18.846 2809 TRACE neutron.agent.l3.agent 
self._process_external_gateway(ri)
2015-02-11 23:12:18.846 2809 TRACE neutron.agent.l3.agent   File 
"/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 503, in 
_process_external_gateway
2015-02-11 23:12:18.846 2809 TRACE neutron.agent.l3.agent 
self.external_gateway_removed(ri, ri.ex_gw_port, interface_name)
2015-02-11 23:12:18.846 2809 TRACE neutron.agent.l3.agent   File 
"/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 905, in 
external_gateway_removed
2015-02-11 23:12:18.846 2809 TRACE neutron.agent.l3.agent ri, ex_gw_port)
2015-02-11 23:12:18.846 2809 TRACE neutron.agent.l3.agent   File 
"/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 694, in 
_get_external_device_interface_name
2015-02-11 23:12:18.846 2809 TRACE neutron.agent.l3.agent fip_int = 
ri.fip_ns.get_int_device_name(ri.router_id)
2015-02-11 23:12:18.846 2809 TRACE neutron.agent.l3.agent AttributeError: 
'NoneType' object has no attribute 'get_int_device_name'
2015-02-11 23:12:18.846 2809 TRACE neutron.agent.l3.agent 
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenpool.py", line 82, 
in _spawn_n_impl
func(*args, **kwargs)
  File "/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 1137, in 
_process_router_update
self._router_removed(update.id)
  File "/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 409, in 
_router_removed
self.process_

[Yahoo-eng-team] [Bug 1385721] Re: v4-fixed-ip= not working with nova networking

2015-02-12 Thread yanhe...@gmail.com
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385721

Title:
  v4-fixed-ip= not working with nova networking

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  In Icehouse the following worked:

  nova boot --flavor 4 --boot-volume 13cf15c8-e5fa-484f-
  b8b5-54e1498dfb48 spacewalk --nic net-
  id=a0e8f4f0-c1c4-483d-9524-300fcede7a69,v4-fixed-ip=10.71.0.206

  However with Juno the only way to make it work is to remove the
  v4-fixed-ip= setting. The debug logs are as follows:

  2014-10-24 20:33:10.721 2899 DEBUG keystoneclient.session [-] REQ: curl -i -X 
GET http://127.0.0.1:35357/v2.0/tokens/revoked -H "User-Agent: 
python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: 
TOKEN_REDACTED" _http_log_request 
/usr/lib/python2.7/site-packages/keystoneclient/session.py:155
  2014-10-24 20:33:10.743 2899 DEBUG keystoneclient.session [-] RESP: [200] 
{'date': 'Sat, 25 Oct 2014 00:33:10 GMT', 'content-type': 'application/json', 
'content-length': '686', 'vary': 'X-Auth-Token'} 
  RESP BODY: {"signed": "-BEGIN 
CMS-\nMIIBxgYJKoZIhvcNAQcCoIIBtzCCAbMCAQExCTAHBgUrDgMCGjAeBgkqhkiG9w0B\nBwGgEQQPeyJyZXZva2VkIjogW119MYIBgTCCAX0CAQEwXDBXMQswCQYDVQQGEwJV\nUzEOMAwGA1UECAwFVW5zZXQxDjAMBgNVBAcMBVVuc2V0MQ4wDAYDVQQKDAVVbnNl\ndDEYMBYGA1UEAwwPd3d3LmV4YW1wbGUuY29tAgEBMAcGBSsOAwIaMA0GCSqGSIb3\nDQEBAQUABIIBANPLKniK+n+mxd4tIAKrm0rj5u/wQkdlxlToJRhwogKwv1+Tujp/\nFrSjoZSu+tzVsLrHQGVwKdo9DJSN3gTRzQx+TqgIxpduji1gG3uop/VCqSEimtHq\nmmz9hewQGS/lE51xkMwsiWoUmcPruVF2bTfcjAeYsvSOoqLD2jAnnu4jtG68LaWn\n21ew62qzIumwYxfb9BlpvVebShFpKrM4/XWBg7k2KUJ7E+wd6lgo39Sr7FfAxnNv\npvLgfKb0SBXCJYfKrG52lZOkodGcHwNOT9tizm/tHKIVXv/0MN0dLUZY1+NCGkxx\nXETUgJdPHMLfwP/ipVkvih57C1PzD0OZJNI=\n-END
 CMS-\n"}
   _http_log_response 
/usr/lib/python2.7/site-packages/keystoneclient/session.py:182
  2014-10-24 20:33:10.764 2899 DEBUG nova.api.openstack.wsgi 
[req-c52d68de-62de-4162-806d-33838f5a7c18 None] Calling method '>' _process_stack 
/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:935
  2014-10-24 20:33:10.777 2899 INFO nova.osapi_compute.wsgi.server 
[req-c52d68de-62de-4162-806d-33838f5a7c18 None] 10.71.0.137 "GET 
/v2/71e48f8b2afb4db99f588752b0c720c5/flavors/4 HTTP/1.1" status: 200 len: 596 
time: 0.0571671
  2014-10-24 20:33:10.781 2901 DEBUG keystoneclient.session [-] REQ: curl -i -X 
GET http://127.0.0.1:35357/v2.0/tokens/revoked -H "User-Agent: 
python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: 
TOKEN_REDACTED" _http_log_request 
/usr/lib/python2.7/site-packages/keystoneclient/session.py:155
  2014-10-24 20:33:10.804 2901 DEBUG keystoneclient.session [-] RESP: [200] 
{'date': 'Sat, 25 Oct 2014 00:33:10 GMT', 'content-type': 'application/json', 
'content-length': '686', 'vary': 'X-Auth-Token'} 
  RESP BODY: {"signed": "-BEGIN 
CMS-\nMIIBxgYJKoZIhvcNAQcCoIIBtzCCAbMCAQExCTAHBgUrDgMCGjAeBgkqhkiG9w0B\nBwGgEQQPeyJyZXZva2VkIjogW119MYIBgTCCAX0CAQEwXDBXMQswCQYDVQQGEwJV\nUzEOMAwGA1UECAwFVW5zZXQxDjAMBgNVBAcMBVVuc2V0MQ4wDAYDVQQKDAVVbnNl\ndDEYMBYGA1UEAwwPd3d3LmV4YW1wbGUuY29tAgEBMAcGBSsOAwIaMA0GCSqGSIb3\nDQEBAQUABIIBANPLKniK+n+mxd4tIAKrm0rj5u/wQkdlxlToJRhwogKwv1+Tujp/\nFrSjoZSu+tzVsLrHQGVwKdo9DJSN3gTRzQx+TqgIxpduji1gG3uop/VCqSEimtHq\nmmz9hewQGS/lE51xkMwsiWoUmcPruVF2bTfcjAeYsvSOoqLD2jAnnu4jtG68LaWn\n21ew62qzIumwYxfb9BlpvVebShFpKrM4/XWBg7k2KUJ7E+wd6lgo39Sr7FfAxnNv\npvLgfKb0SBXCJYfKrG52lZOkodGcHwNOT9tizm/tHKIVXv/0MN0dLUZY1+NCGkxx\nXETUgJdPHMLfwP/ipVkvih57C1PzD0OZJNI=\n-END
 CMS-\n"}
   _http_log_response 
/usr/lib/python2.7/site-packages/keystoneclient/session.py:182
  2014-10-24 20:33:10.828 2901 DEBUG nova.api.openstack.wsgi 
[req-3c410d38-49f5-4f86-acb5-58a2d56d9ae0 None] Action: 'create', calling 
method: >, body: 
{"server": {"name": "spacewalk", "imageRef": "", "block_device_mapping_v2": 
[{"source_type": "volume", "delete_on_termination": false, "boot_index": 0, 
"uuid": "13cf15c8-e5fa-484f-b8b5-54e1498dfb48", "destination_type": "volume"}], 
"flavorRef": "4", "max_count": 1, "min_count": 1, "networks": [{"fixed_ip": 
"10.71.0.206", "uuid": "a0e8f4f0-c1c4-483d-9524-300fcede7a69"}]}} 
_process_stack /usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:932
  2014-10-24 20:33:10.841 2901 DEBUG nova.volume.cinder 
[req-3c410d38-49f5-4f86-acb5-58a2d56d9ae0 None] Cinderclient connection created 
using URL: http://10.71.0.137:8776/v1/71e48f8b2afb4db99f588752b0c720c5 
get_cinder_client_version 
/usr/lib/python2.7/site-packages/nova/volume/cinder.py:255
  2014-10-24 20:33:11.049 2901 ERROR nova.api.openstack.wsgi 
[req-3c410d38-49f5-4f86-acb5-58a2d56d9ae0 None] Exception handling resource: 
'NoneType' object has no attribute '__getitem__'
  Traceback (most recent call last):

File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 
400, in _object_dispatch
  return getattr

[Yahoo-eng-team] [Bug 1421049] Re: Removing a DVR router interface consumes much time

2015-02-12 Thread shihanzhang
Hi Ed Bak, thanks for the reminder!

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421049

Title:
  Removing a DVR router interface consumes much time

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  In my environment, I create a DVR router with only one subnet (CIDR
10.0.0.0/8) attached to it, then I create many ports in this subnet. When I
use 'router-interface-delete' to remove this subnet from the router, it takes
a long time to return. The reason, as I analysed it, is below:
  1. when 'remove_router_interface' runs, it notifies the l3 agent via
'routers_updated',
  2. in this _notification, it calls schedule_routers for this router:
      def _notification(self, context, method, router_ids, operation,
                        shuffle_agents):
          """Notify all the agents that are hosting the routers."""
          plugin = manager.NeutronManager.get_service_plugins().get(
              service_constants.L3_ROUTER_NAT)
          if not plugin:
              LOG.error(_LE('No plugin for L3 routing registered. Cannot '
                            'notify agents with the message %s'), method)
              return
          if utils.is_extension_supported(
                  plugin, constants.L3_AGENT_SCHEDULER_EXT_ALIAS):
              adminContext = (context.is_admin and
                              context or context.elevated())
              plugin.schedule_routers(adminContext, router_ids)
              self._agent_notification(
                  context, method, router_ids, operation, shuffle_agents)
  3. in _schedule_router it gets the candidate l3 agents, but
'get_l3_agent_candidates' checks 'check_ports_exist_on_l3agent':

      if agent_mode in ('legacy', 'dvr_snat') and (
              not is_router_distributed):
          candidates.append(l3_agent)
      elif is_router_distributed and agent_mode.startswith('dvr') and (
              self.check_ports_exist_on_l3agent(
                  context, l3_agent, sync_router['id'])):
          candidates.append(l3_agent)

  4. but for 'remove_router_interface', the router interface has already been
deleted before scheduling runs, so 'get_subnet_ids_on_router' returns an
empty list; using that empty list as a filter to get ports takes a long time
when the number of ports is very large.

      def check_ports_exist_on_l3agent(self, context, l3_agent, router_id):
          """
          This function checks for existence of dvr serviceable
          ports on the host, running the input l3agent.
          """
          subnet_ids = self.get_subnet_ids_on_router(context, router_id)

          core_plugin = manager.NeutronManager.get_plugin()
          filter = {'fixed_ips': {'subnet_id': subnet_ids}}
          ports = core_plugin.get_ports(context, filters=filter)

  so I think 'remove_router_interface' should not reschedule the router.
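A sketch of the short-circuit the analysis above suggests, with the plugin lookups passed in as plain callables for illustration (the dependency-injected signature is an assumption, not neutron's actual method):

```python
def check_ports_exist_on_l3agent(get_subnet_ids, get_ports, context, router_id):
    """Return whether DVR-serviceable ports exist for this router,
    skipping the port query entirely when the router has no subnets."""
    subnet_ids = get_subnet_ids(context, router_id)
    if not subnet_ids:
        # The interface was already removed, so no port can match; avoid
        # issuing an expensive get_ports() call with an empty filter.
        return False
    ports = get_ports(context,
                      filters={'fixed_ips': {'subnet_id': subnet_ids}})
    return bool(ports)
```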

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421049/+subscriptions



[Yahoo-eng-team] [Bug 1421471] [NEW] os-simple-tenant-usage performs poorly with many instances

2015-02-12 Thread Richard Jones
Public bug reported:

The SQL underlying the os-simple-tenant-usage API call results in very
slow operations when the database has many (20,000+) instances. In
testing, the objects.InstanceList.get_active_by_window_joined call in
nova/api/openstack/compute/contrib/simple_tenant_usage.py:SimpleTenantUsageController._tenant_usages_for_period
takes 24 seconds to run.

Some basic timing analysis has shown that the initial query in
nova/db/sqlalchemy/api.py:instance_get_active_by_window_joined runs in
*reasonable* time (though still 5-6 seconds) and the bulk of the time is
spent in the subsequent _instances_fill_metadata call which pulls in
system_metadata info by using a SELECT with an IN clause containing the
20,000 uuids listed, resulting in execution times over 15 seconds.
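One common mitigation for oversized IN clauses is to chunk the uuid list and issue several bounded queries; here is a sketch of the chunking itself (an illustration of the technique, not nova's actual fix):

```python
def chunked(ids, size=500):
    """Yield `ids` in slices of at most `size` items, so each generated
    SELECT ... WHERE uuid IN (...) stays bounded instead of carrying all
    20,000 uuids in a single statement."""
    for i in range(0, len(ids), size):
        yield ids[i:i + size]

uuids = ["uuid-%05d" % n for n in range(20000)]
chunks = list(chunked(uuids))
print(len(chunks))     # 40
print(len(chunks[0]))  # 500
```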

** Affects: nova
 Importance: Undecided
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1421471

Title:
  os-simple-tenant-usage performs poorly with many instances

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  The SQL underlying the os-simple-tenant-usage API call results in very
  slow operations when the database has many (20,000+) instances. In
  testing, the objects.InstanceList.get_active_by_window_joined call in
  
nova/api/openstack/compute/contrib/simple_tenant_usage.py:SimpleTenantUsageController._tenant_usages_for_period
  takes 24 seconds to run.

  Some basic timing analysis has shown that the initial query in
  nova/db/sqlalchemy/api.py:instance_get_active_by_window_joined runs in
  *reasonable* time (though still 5-6 seconds) and the bulk of the time
  is spent in the subsequent _instances_fill_metadata call which pulls
  in system_metadata info by using a SELECT with an IN clause containing
  the 20,000 uuids listed, resulting in execution times over 15 seconds.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1421471/+subscriptions



[Yahoo-eng-team] [Bug 1421465] [NEW] lbaas v2 does not expose the vip_port_id

2015-02-12 Thread Brandon Logan
Public bug reported:

The response bodies of:

GET /v2.0/lbaas/loadbalancers
GET /v2.0/lbaas/loadbalancers/{loadbalancer_id}

do not show the vip_port_id created for a load balancer.  It should be
exposed, especially to make it easy to point a floating IP at the VIP.
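For illustration, the kind of response body the report asks for, with vip_port_id included (all field values here are placeholders; the exact schema is an assumption based on the LBaaS v2 loadbalancer resource):

```json
{
  "loadbalancer": {
    "id": "7c2a9d4e-0000-0000-0000-000000000000",
    "name": "lb1",
    "vip_address": "10.0.0.4",
    "vip_subnet_id": "11111111-0000-0000-0000-000000000000",
    "vip_port_id": "22222222-0000-0000-0000-000000000000",
    "provisioning_status": "ACTIVE"
  }
}
```

With vip_port_id present, associating a floating IP is a single port-targeted call rather than a separate port lookup by vip_address.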

** Affects: neutron
 Importance: Undecided
 Assignee: Brandon Logan (brandon-logan)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Brandon Logan (brandon-logan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421465

Title:
  lbaas v2 does not expose the vip_port_id

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The response bodies of:

  GET /v2.0/lbaas/loadbalancers
  GET /v2.0/lbaas/loadbalancers/{loadbalancer_id}

  do not show the vip_port_id created for a load balancer.  It should
  be exposed, especially to make it easy to point a floating IP at the VIP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421465/+subscriptions



[Yahoo-eng-team] [Bug 1421453] [NEW] AttributeError on Neutron API job

2015-02-12 Thread Armando Migliaccio
Public bug reported:

The trace:

http://paste.openstack.org/show/172449/

The logstash query:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQXR0cmlidXRlRXJyb3I6ICdtb2R1bGUnIG9iamVjdCBoYXMgbm8gYXR0cmlidXRlICdOb3RGb3VuZCdcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyMzc4MzAyODMxMn0=

It seems something sneaked in that broke the non-voting job.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421453

Title:
  AttributeError on Neutron API job

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The trace:

  http://paste.openstack.org/show/172449/

  The logstash query:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQXR0cmlidXRlRXJyb3I6ICdtb2R1bGUnIG9iamVjdCBoYXMgbm8gYXR0cmlidXRlICdOb3RGb3VuZCdcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyMzc4MzAyODMxMn0=

  It seems something sneaked in that broke the non-voting job.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421453/+subscriptions



[Yahoo-eng-team] [Bug 1421454] [NEW] AttributeError on Neutron API job

2015-02-12 Thread Armando Migliaccio
Public bug reported:

The trace:

http://paste.openstack.org/show/172449/

The logstash query:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQXR0cmlidXRlRXJyb3I6ICdtb2R1bGUnIG9iamVjdCBoYXMgbm8gYXR0cmlidXRlICdOb3RGb3VuZCdcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyMzc4MzAyODMxMn0=

It seems something sneaked in that broke the non-voting job.

** Affects: neutron
 Importance: Undecided
 Assignee: Armando Migliaccio (armando-migliaccio)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421454

Title:
  AttributeError on Neutron API job

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The trace:

  http://paste.openstack.org/show/172449/

  The logstash query:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQXR0cmlidXRlRXJyb3I6ICdtb2R1bGUnIG9iamVjdCBoYXMgbm8gYXR0cmlidXRlICdOb3RGb3VuZCdcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyMzc4MzAyODMxMn0=

  It seems something sneaked in that broke the non-voting job.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421454/+subscriptions



[Yahoo-eng-team] [Bug 1384379] Re: versions resource uses host_url which may be incorrect

2015-02-12 Thread Nikhil Manchanda
** Also affects: trove
   Importance: Undecided
   Status: New

** Changed in: trove
   Importance: Undecided => Medium

** Changed in: trove
 Assignee: (unassigned) => Nikhil Manchanda (slicknik)

** Changed in: trove
Milestone: None => kilo-3

** Changed in: trove
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1384379

Title:
  versions resource uses host_url which may be incorrect

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Openstack Database (Trove):
  Triaged

Bug description:
  The versions resource constructs the links by using host_url, but the
  glance api endpoint may be behind a proxy or ssl terminator. This
  means that host_url may be incorrect. It should have a config option
  to override host_url like the other services do when constructing
  versions links.
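A minimal sketch of the override the report asks for (the function and option names are hypothetical; Glance's actual change may differ):

```python
def version_link_base(host_url, public_endpoint=None):
    """Return the base URL to embed in version links.

    Prefer an explicitly configured public endpoint (the address of the
    proxy or SSL terminator) over the host_url the WSGI app sees, which
    behind a proxy is the internal address."""
    return public_endpoint or host_url

# Without the option, version links point at the internal address:
print(version_link_base("http://10.0.0.5:9292"))
# With it, they point where clients can actually reach the service:
print(version_link_base("http://10.0.0.5:9292",
                        public_endpoint="https://glance.example.com"))
```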

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1384379/+subscriptions



[Yahoo-eng-team] [Bug 1421354] [NEW] test_conntrack_disassociate_fip failed in gate due to timeout

2015-02-12 Thread Ihar Hrachyshka
Public bug reported:

Traceback (most recent call last):
  File "neutron/tests/functional/agent/test_l3_agent.py", line 335, in 
test_conntrack_disassociate_fip
netcat.test_connectivity()
  File "/usr/lib/python2.7/contextlib.py", line 35, in __exit__
self.gen.throw(type, value, traceback)
  File "neutron/tests/sub_base.py", line 111, in assert_max_execution_time
self.fail('Execution of this test timed out')
  File "/usr/local/lib/python2.7/dist-packages/unittest2/case.py", line 666, in 
fail
raise self.failureException(msg)
AssertionError: Execution of this test timed out

It seems the 15-second timeout is too low for the gate; we need to raise it.
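The timeout assertion involved behaves roughly like the following sketch (a simplified after-the-fact check, not the real assert_max_execution_time, which aborts the running test):

```python
import time
from contextlib import contextmanager

@contextmanager
def max_execution_time(seconds):
    """Fail if the wrapped block runs longer than `seconds`.

    Checked after the block finishes, for simplicity; the real helper
    interrupts the test mid-flight. Raising the budget passed here is
    the fix this bug proposes."""
    start = time.monotonic()
    yield
    if time.monotonic() - start > seconds:
        raise AssertionError('Execution of this test timed out')
```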

** Affects: neutron
 Importance: Undecided
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421354

Title:
  test_conntrack_disassociate_fip failed in gate due to timeout

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Traceback (most recent call last):
File "neutron/tests/functional/agent/test_l3_agent.py", line 335, in 
test_conntrack_disassociate_fip
  netcat.test_connectivity()
File "/usr/lib/python2.7/contextlib.py", line 35, in __exit__
  self.gen.throw(type, value, traceback)
File "neutron/tests/sub_base.py", line 111, in assert_max_execution_time
  self.fail('Execution of this test timed out')
File "/usr/local/lib/python2.7/dist-packages/unittest2/case.py", line 666, 
in fail
  raise self.failureException(msg)
  AssertionError: Execution of this test timed out

  It seems a 15-second timeout is too low for the gate; we need to raise it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421354/+subscriptions



[Yahoo-eng-team] [Bug 1421318] [NEW] Admin Instances table actions redirect incorrectly

2015-02-12 Thread Rob Cresswell
Public bug reported:

Some of the table actions in Admin -> Instances redirect to the wrong
destination.

Steps to reproduce:

1. Go to Admin -> Instances
2. Select "Console" or "View Log" from the table row actions dropdown
3. Observe that you are now in Project -> Compute -> Instances

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1421318

Title:
  Admin Instances table actions redirect incorrectly

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Some of the table actions in Admin -> Instances redirect to the wrong
  destination.

  Steps to reproduce:

  1. Go to Admin -> Instances
  2. Select "Console" or "View Log" from the table row actions dropdown
  3. Observe that you are now in Project -> Compute -> Instances

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1421318/+subscriptions



[Yahoo-eng-team] [Bug 1369574] Re: Sahara doesn't support cinder volume type

2015-02-12 Thread Sergey Lukjanov
** Changed in: python-saharaclient
   Status: Fix Committed => Fix Released

** Changed in: python-saharaclient
Milestone: next => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1369574

Title:
  Sahara doesn't support cinder volume type

Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in Python client library for Sahara (ex. Savanna):
  Fix Released
Status in OpenStack Data Processing (Sahara):
  Fix Released

Bug description:
  When creating a node group template you can set Cinder as storage for 
clusters, but you cannot set the type of the Cinder volumes.
  By default the 'default' Cinder backend with the 'default' volume type is used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1369574/+subscriptions



[Yahoo-eng-team] [Bug 1421300] Re: Keystone CRITICAL : Empty Module Name (driver)

2015-02-12 Thread isador999
The cause was an empty 'driver=' line at the end of the keystone.conf file
(in the [trust] section).

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1421300

Title:
  Keystone CRITICAL : Empty Module Name (driver)

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  Hi,

  I started to install Openstack Juno today.
  The documentation is for Ubuntu 14.04 (on ubuntu server), and the date of my 
documentation file is 12/02/15. 

  When I try to start Keystone, it stops immediately with the following
  error in '/var/log/keystone/keystone-all.log' :

  
  CRITICAL keystone [-] ValueError: Empty module name
  2015-02-12 17:10:43.026 16609 TRACE keystone Traceback (most recent call 
last):
  2015-02-12 17:10:43.026 16609 TRACE keystone   File "/usr/bin/keystone-all", 
line 147, in 
  2015-02-12 17:10:43.026 16609 TRACE keystone backends.load_backends()
  2015-02-12 17:10:43.026 16609 TRACE keystone   File 
"/usr/lib/python2.7/dist-packages/keystone/backends.py", line 47, in 
load_backends
  2015-02-12 17:10:43.026 16609 TRACE keystone trust_api=trust.Manager(),
  2015-02-12 17:10:43.026 16609 TRACE keystone   File 
"/usr/lib/python2.7/dist-packages/keystone/common/dependency.py", line 110, in 
__wrapped_init__
  2015-02-12 17:10:43.026 16609 TRACE keystone init(self, *args, **kwargs)
  2015-02-12 17:10:43.026 16609 TRACE keystone   File 
"/usr/lib/python2.7/dist-packages/keystone/trust/core.py", line 46, in __init__
  2015-02-12 17:10:43.026 16609 TRACE keystone super(Manager, 
self).__init__(CONF.trust.driver)
  2015-02-12 17:10:43.026 16609 TRACE keystone   File 
"/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line 70, in 
__init__
  2015-02-12 17:10:43.026 16609 TRACE keystone self.driver = 
importutils.import_object(driver_name)
  2015-02-12 17:10:43.026 16609 TRACE keystone   File 
"/usr/lib/python2.7/dist-packages/keystone/openstack/common/importutils.py", 
line 38, in import_object
  2015-02-12 17:10:43.026 16609 TRACE keystone return 
import_class(import_str)(*args, **kwargs)
  2015-02-12 17:10:43.026 16609 TRACE keystone   File 
"/usr/lib/python2.7/dist-packages/keystone/openstack/common/importutils.py", 
line 27, in import_class
  2015-02-12 17:10:43.026 16609 TRACE keystone __import__(mod_str)
  2015-02-12 17:10:43.026 16609 TRACE keystone ValueError: Empty module name

  
  I don't know where to look for the cause of this bug.
  I have the latest versions of the Keystone and python-keystoneclient packages.
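The error is easy to reproduce outside Keystone. A minimal sketch of what importutils.import_class does with an empty `driver =` value (the helper name below is illustrative):

```python
def import_driver(driver_name):
    # mirrors importutils.import_class: split "pkg.mod.Class" on the last dot
    mod_str, _sep, class_str = driver_name.rpartition('.')
    __import__(mod_str)   # mod_str == '' when driver_name is empty


try:
    import_driver('')
except ValueError as exc:
    print(exc)            # Empty module name
```

So any blank `driver =` line in keystone.conf ends at `__import__('')`, which raises exactly the error in the traceback above.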

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1421300/+subscriptions



[Yahoo-eng-team] [Bug 1421317] [NEW] horizon snapshot create error message is unhelpful

2015-02-12 Thread Eric Peterson
Public bug reported:

We have a user who is close to their quota; when they try to create a
snapshot, the dashboard only says

"Error Unable to create snapshot"

That confuses users. The real error is in the nova log.

OverLimit: VolumeSizeExceedsAvailableQuota: Requested volume or snapshot
exceeds allowed Gigabytes quota. Requested 500G, quota is 1500G and
1100G has been consumed. (HTTP 413) (Request-ID: req-XYZ)

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1421317

Title:
  horizon snapshot create error message is unhelpful

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We have a user who is close to their quota; when they try to create a
  snapshot, the dashboard only says

  "Error Unable to create snapshot"

  That confuses users. The real error is in the nova log.

  OverLimit: VolumeSizeExceedsAvailableQuota: Requested volume or
  snapshot exceeds allowed Gigabytes quota. Requested 500G, quota is
  1500G and 1100G has been consumed. (HTTP 413) (Request-ID: req-XYZ)
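A sketch of the behaviour the report asks for, with hypothetical names (`OverLimit` here stands in for the client library's exception; this is not Horizon's actual code path):

```python
class OverLimit(Exception):
    """Stand-in for the client library's over-quota exception."""


def create_snapshot(do_create):
    try:
        return do_create()
    except OverLimit as exc:
        # Pass the informative quota text through to the user instead of
        # the generic "Unable to create snapshot".
        raise RuntimeError('Unable to create snapshot: %s' % exc)


def over_quota():
    raise OverLimit('Requested 500G, quota is 1500G and 1100G has been consumed.')


try:
    create_snapshot(over_quota)
except RuntimeError as err:
    print(err)
```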

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1421317/+subscriptions



[Yahoo-eng-team] [Bug 1277104] Re: wrong order of assertEquals args

2015-02-12 Thread Sergey Lukjanov
** Changed in: python-saharaclient
Milestone: next => 0.7.5

** Changed in: python-saharaclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1277104

Title:
  wrong order of assertEquals args

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released
Status in Python client library for Ceilometer:
  In Progress
Status in Python client library for Glance:
  Fix Released
Status in Python client library for Ironic:
  Fix Released
Status in Python client library for Nova:
  Fix Committed
Status in OpenStack Command Line Client:
  Fix Released
Status in Python client library for Sahara (ex. Savanna):
  Fix Released
Status in Python client library for Swift:
  In Progress
Status in Rally:
  Confirmed

Bug description:
  The arguments of the assertEquals method in ceilometer.tests are arranged in 
the wrong order. As a result, when a test fails it reports incorrect 
information about the expected and actual data. This occurs more than 2000 times.
  The right order of arguments is (expected, actual).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1277104/+subscriptions



[Yahoo-eng-team] [Bug 1421300] Re: Keystone CRITICAL : Empty Module Name (driver)

2015-02-12 Thread Lance Bragstad
This looks like a keystone bug versus a python-keystoneclient bug.

** Also affects: keystone
   Importance: Undecided
   Status: New

** No longer affects: python-keystoneclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1421300

Title:
  Keystone CRITICAL : Empty Module Name (driver)

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Hi,

  I started to install Openstack Juno today.
  The documentation is for Ubuntu 14.04 (on ubuntu server), and the date of my 
documentation file is 12/02/15. 

  When I try to start Keystone, it stops immediately with the following
  error in '/var/log/keystone/keystone-all.log' :

  
  CRITICAL keystone [-] ValueError: Empty module name
  2015-02-12 17:10:43.026 16609 TRACE keystone Traceback (most recent call 
last):
  2015-02-12 17:10:43.026 16609 TRACE keystone   File "/usr/bin/keystone-all", 
line 147, in 
  2015-02-12 17:10:43.026 16609 TRACE keystone backends.load_backends()
  2015-02-12 17:10:43.026 16609 TRACE keystone   File 
"/usr/lib/python2.7/dist-packages/keystone/backends.py", line 47, in 
load_backends
  2015-02-12 17:10:43.026 16609 TRACE keystone trust_api=trust.Manager(),
  2015-02-12 17:10:43.026 16609 TRACE keystone   File 
"/usr/lib/python2.7/dist-packages/keystone/common/dependency.py", line 110, in 
__wrapped_init__
  2015-02-12 17:10:43.026 16609 TRACE keystone init(self, *args, **kwargs)
  2015-02-12 17:10:43.026 16609 TRACE keystone   File 
"/usr/lib/python2.7/dist-packages/keystone/trust/core.py", line 46, in __init__
  2015-02-12 17:10:43.026 16609 TRACE keystone super(Manager, 
self).__init__(CONF.trust.driver)
  2015-02-12 17:10:43.026 16609 TRACE keystone   File 
"/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line 70, in 
__init__
  2015-02-12 17:10:43.026 16609 TRACE keystone self.driver = 
importutils.import_object(driver_name)
  2015-02-12 17:10:43.026 16609 TRACE keystone   File 
"/usr/lib/python2.7/dist-packages/keystone/openstack/common/importutils.py", 
line 38, in import_object
  2015-02-12 17:10:43.026 16609 TRACE keystone return 
import_class(import_str)(*args, **kwargs)
  2015-02-12 17:10:43.026 16609 TRACE keystone   File 
"/usr/lib/python2.7/dist-packages/keystone/openstack/common/importutils.py", 
line 27, in import_class
  2015-02-12 17:10:43.026 16609 TRACE keystone __import__(mod_str)
  2015-02-12 17:10:43.026 16609 TRACE keystone ValueError: Empty module name

  
  I don't know where to look for the cause of this bug.
  I have the latest versions of the Keystone and python-keystoneclient packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1421300/+subscriptions



[Yahoo-eng-team] [Bug 1394551] Re: Legacy GroupAffinity and GroupAntiAffinity filters are broken

2015-02-12 Thread Davanum Srinivas (DIMS)
looks like this is fixed

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1394551

Title:
  Legacy GroupAffinity and GroupAntiAffinity filters are broken

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Both GroupAffinity and GroupAntiAffinity filters are broken. The
  scheduler does not respect the filters and schedules the servers
  against the policy.

  Reproduction steps:
  0) Spin up a single node devstack 
  1) Add GroupAntiAffinityFilter to  scheduler_default_filters in nova.conf and 
restart the nova services
  2) Boot multiple servers with the following command:
  nova boot --image cirros-0.3.2-x86_64-uec --flavor 42 --hint group=foo 
server-1

  Expected behaviour:
  The second and any further boots should fail with a NoValidHostFound 
exception, as the anti-affinity policy cannot be fulfilled.

  Actual behaviour:
  Any number of servers are booted to the same compute node

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1394551/+subscriptions



[Yahoo-eng-team] [Bug 1390498] Re: host API has inconsistent host name attribute

2015-02-12 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1390498

Title:
  host API has inconsistent host name attribute

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  When doing a list with the os-host extension the host name attribute
  is host_name in all other cases the host name attribute is host.

  E.g. when doing a list operation a result like this is expected:

  {"hosts": [
    {"zone": "internal", "host_name": "awesome-node1", "service": "compute"}]}

  When doing a describe of the same host:

  {"host": [{"resource": {"project": "(total)", "memory_mb": 193278, "host": 
"awesome-node1", "cpu": 48, "disk_gb": 98}},
    {"resource": {"project": "(used_now)", "memory_mb": 13312, "host": 
"awesome-node1", "cpu": 6, "disk_gb": 20}},
    {"resource": {"project": "(used_max)", "memory_mb": 12288, "host": 
"awesome-node1", "cpu": 6, "disk_gb": 20}},
    {"resource": {"project": "de59ee29134b4980bbb77608347ae08a", 
"memory_mb": 12288, "host": "awesome-node1", "cpu": 6, "disk_gb": 20}}]}

  This is confusing at best.

  It has already caused some problems in the official client library.

  
https://github.com/openstack/python-novaclient/commit/9ce03a98eb78652fd3480cb0d8323520fd78064c
  
https://github.com/openstack/python-novaclient/commit/73a0e7298aeb7ff43e70a865d2350923d269db69
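Until the attribute is unified, client code has to normalize both shapes; a minimal shim over the dicts quoted above (the helper name is hypothetical):

```python
def host_of(entry):
    """Return the host name from an os-host dict, whichever key it uses."""
    return entry['host_name'] if 'host_name' in entry else entry['host']


listed = {"zone": "internal", "host_name": "awesome-node1", "service": "compute"}
described = {"project": "(total)", "memory_mb": 193278,
             "host": "awesome-node1", "cpu": 48, "disk_gb": 98}

assert host_of(listed) == host_of(described) == "awesome-node1"
```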

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1390498/+subscriptions



[Yahoo-eng-team] [Bug 1378514] Re: Allow setting max downtime for libvirt live migrations

2015-02-12 Thread Davanum Srinivas (DIMS)
Looks like Chris does not need this anymore :)

** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378514

Title:
  Allow setting max downtime for libvirt live migrations

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  As of libvirt 1.2.9, the maximum downtime for a live migration is
  tunable during a migration, so it doesn't require any threading
  foolishness. We should make this configurable in nova.conf so that
  large instances can be migrated across relatively smaller network
  pipes.
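A sketch of how such a knob could be wired up; "live_migration_max_downtime_ms" is a hypothetical option name, while migrateSetMaxDowntime(ms) is the libvirt-python call the report refers to (tunable mid-migration as of libvirt 1.2.9):

```python
def max_downtime_ms(conf, default=500):
    """Read the hypothetical nova.conf option, falling back to a default."""
    try:
        return int(conf.get('live_migration_max_downtime_ms', default))
    except (TypeError, ValueError):
        return default


def apply_max_downtime(dom, conf):
    # dom is a libvirt.virDomain for the migrating instance; the downtime
    # argument is in milliseconds.
    dom.migrateSetMaxDowntime(max_downtime_ms(conf))
```

A larger value lets big instances converge over relatively small network pipes, at the cost of a longer pause at the end of the migration.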

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378514/+subscriptions



[Yahoo-eng-team] [Bug 1421287] [NEW] [data processing] Remove css from templates

2015-02-12 Thread Chad Roberts
Public bug reported:

* low priority, technical debt item *

There are a couple of places in the Data Processing panels where CSS is
defined inside the templates.  Ideally, existing CSS rules can be applied
to achieve the desired results, but it is possible that we may need to add
new styles to the project's CSS definitions.

The places where css is defined in data processing templates are:
.../horizon/openstack_dashboard/dashboards/project/data_processing/job_binaries/templates/data_processing.job_binaries/job_binaries.html


  #id_job_binary_url {
width: 200px !important; }
  .form-help-block {
float: left;
text-align: left;
width: 300px; }


and

.../horizon/openstack_dashboard/dashboards/project/data_processing/jobs/templates/data_processing.jobs/jobs.html


.job_origin_main, .job_origin_lib {
width: 200px !important; }
.job_binary_add_button, .job_binary_remove_button {
width: 80px !important;
margin-left: 5px; }
.form-help-block {
float: left;
text-align: left;
width: 300px; }
.lib-input-div {
float:left;
width:320px; }
.job-libs-display {
float:left; }
.actions_column {
width: 210px !important; }


** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: sahara

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1421287

Title:
  [data processing] Remove css from templates

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  * low priority, technical debt item *

  There are a couple of places in the Data Processing panels where CSS is
  defined inside the templates.  Ideally, existing CSS rules can be applied
  to achieve the desired results, but it is possible that we may need to
  add new styles to the project's CSS definitions.

  The places where css is defined in data processing templates are:
  
.../horizon/openstack_dashboard/dashboards/project/data_processing/job_binaries/templates/data_processing.job_binaries/job_binaries.html

  
#id_job_binary_url {
  width: 200px !important; }
.form-help-block {
  float: left;
  text-align: left;
  width: 300px; }
  

  and

  
.../horizon/openstack_dashboard/dashboards/project/data_processing/jobs/templates/data_processing.jobs/jobs.html

  
  .job_origin_main, .job_origin_lib {
  width: 200px !important; }
  .job_binary_add_button, .job_binary_remove_button {
  width: 80px !important;
  margin-left: 5px; }
  .form-help-block {
  float: left;
  text-align: left;
  width: 300px; }
  .lib-input-div {
  float:left;
  width:320px; }
  .job-libs-display {
  float:left; }
  .actions_column {
  width: 210px !important; }
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1421287/+subscriptions



[Yahoo-eng-team] [Bug 1421274] [NEW] image creation fails

2015-02-12 Thread Matthias Runge
Public bug reported:

In recent git checkout, image creation fails:

[12/Feb/2015 15:44:57] "POST /project/images/create/ HTTP/1.1" 200 0
Failed to remove temporary image file /tmp/tmp_WSLQp.upload ([Errno 2] No such 
file or directory: '/tmp/tmp_WSLQp.upload')
Unhandled exception in thread started by 
Traceback (most recent call last):
  File "/home/mrunge/work/horizon/openstack_dashboard/api/glance.py", line 112, 
in image_update
exceptions.handle(request, ignore=True)
  File "/home/mrunge/work/horizon/horizon/exceptions.py", line 364, in handle
six.reraise(exc_type, exc_value, exc_traceback)
  File "/home/mrunge/work/horizon/openstack_dashboard/api/glance.py", line 110, 
in image_update
image = glanceclient(request).images.update(image_id, **kwargs)
  File 
"/home/mrunge/work/horizon/.venv/lib/python2.7/site-packages/glanceclient/v1/images.py",
 line 329, in update
resp, body = self.client.put(url, headers=hdrs, data=image_data)
  File 
"/home/mrunge/work/horizon/.venv/lib/python2.7/site-packages/glanceclient/common/http.py",
 line 265, in put
return self._request('PUT', url, **kwargs)
  File 
"/home/mrunge/work/horizon/.venv/lib/python2.7/site-packages/glanceclient/common/http.py",
 line 206, in _request
**kwargs)
  File 
"/home/mrunge/work/horizon/.venv/lib/python2.7/site-packages/requests/sessions.py",
 line 461, in request
resp = self.send(prep, **send_kwargs)
  File 
"/home/mrunge/work/horizon/.venv/lib/python2.7/site-packages/requests/sessions.py",
 line 573, in send
r = adapter.send(request, **kwargs)
  File 
"/home/mrunge/work/horizon/.venv/lib/python2.7/site-packages/requests/adapters.py",
 line 390, in send
for i in request.body:
  File 
"/home/mrunge/work/horizon/.venv/lib/python2.7/site-packages/glanceclient/common/http.py",
 line 170, in chunk_body
chunk = body.read(CHUNKSIZE)
ValueError: I/O operation on closed file
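The traceback can be reproduced without Horizon; a minimal sketch of the suspected failure mode (an assumption about the root cause: the temporary upload file is closed and removed while glanceclient's chunked reader is still consuming it):

```python
import tempfile

CHUNKSIZE = 65536

body = tempfile.NamedTemporaryFile(suffix='.upload')
body.write(b'image data')
body.seek(0)

body.close()   # cleanup path removes the temp file before the PUT finishes

try:
    body.read(CHUNKSIZE)   # what glanceclient's chunk_body() does
except ValueError as exc:
    print(exc)             # I/O operation on closed file
```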

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1421274

Title:
  image creation fails

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In a recent git checkout, image creation fails:

  [12/Feb/2015 15:44:57] "POST /project/images/create/ HTTP/1.1" 200 0
  Failed to remove temporary image file /tmp/tmp_WSLQp.upload ([Errno 2] No 
such file or directory: '/tmp/tmp_WSLQp.upload')
  Unhandled exception in thread started by 
  Traceback (most recent call last):
File "/home/mrunge/work/horizon/openstack_dashboard/api/glance.py", line 
112, in image_update
  exceptions.handle(request, ignore=True)
File "/home/mrunge/work/horizon/horizon/exceptions.py", line 364, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
File "/home/mrunge/work/horizon/openstack_dashboard/api/glance.py", line 
110, in image_update
  image = glanceclient(request).images.update(image_id, **kwargs)
File 
"/home/mrunge/work/horizon/.venv/lib/python2.7/site-packages/glanceclient/v1/images.py",
 line 329, in update
  resp, body = self.client.put(url, headers=hdrs, data=image_data)
File 
"/home/mrunge/work/horizon/.venv/lib/python2.7/site-packages/glanceclient/common/http.py",
 line 265, in put
  return self._request('PUT', url, **kwargs)
File 
"/home/mrunge/work/horizon/.venv/lib/python2.7/site-packages/glanceclient/common/http.py",
 line 206, in _request
  **kwargs)
File 
"/home/mrunge/work/horizon/.venv/lib/python2.7/site-packages/requests/sessions.py",
 line 461, in request
  resp = self.send(prep, **send_kwargs)
File 
"/home/mrunge/work/horizon/.venv/lib/python2.7/site-packages/requests/sessions.py",
 line 573, in send
  r = adapter.send(request, **kwargs)
File 
"/home/mrunge/work/horizon/.venv/lib/python2.7/site-packages/requests/adapters.py",
 line 390, in send
  for i in request.body:
File 
"/home/mrunge/work/horizon/.venv/lib/python2.7/site-packages/glanceclient/common/http.py",
 line 170, in chunk_body
  chunk = body.read(CHUNKSIZE)
  ValueError: I/O operation on closed file

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1421274/+subscriptions



[Yahoo-eng-team] [Bug 1421272] [NEW] Hyper-V: Attribute error when trying to spawn instance from vhd image

2015-02-12 Thread Adelina Tuvenie
Public bug reported:

When trying to boot an instance from a vhd image we get:

AttributeError: 'NoneType' object has no attribute 'root_gb'

This happens when we try to get the root disk size from the old flavor.
Since on creation there is no old flavor, instance.get_flavor will
return None, hence the AttributeError when trying to access the
root_gb attribute.

** Affects: nova
 Importance: Undecided
 Assignee: Adelina Tuvenie (atuvenie)
 Status: New


** Tags: hyper-v

** Changed in: nova
 Assignee: (unassigned) => Adelina Tuvenie (atuvenie)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1421272

Title:
  Hyper-V: Attribute error when trying to spawn instance from vhd image

Status in OpenStack Compute (Nova):
  New

Bug description:
  When trying to boot an instance from a vhd image we get:

  AttributeError: 'NoneType' object has no attribute 'root_gb'

  This happens when we try to get the root disk size from the old
  flavor. Since on creation there is no old flavor, instance.get_flavor
  will return None, hence the AttributeError when trying to access the
  root_gb attribute.
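A sketch of the guard the report implies (illustrative, not the actual nova patch; `get_flavor('old')` mirrors how the pre-resize flavor is looked up):

```python
def old_root_gb(instance):
    """Return the old flavor's root disk size, or None on first spawn."""
    old_flavor = instance.get_flavor('old')   # None when no resize has happened
    if old_flavor is None:
        return None                           # fresh boot: nothing to compare
    return old_flavor.root_gb
```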

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1421272/+subscriptions



[Yahoo-eng-team] [Bug 1421250] [NEW] date picker in metering missing

2015-02-12 Thread Matthias Runge
Public bug reported:

In a recent git checkout (Feb 12th, 2015), go to Admin -> Metering, hit
"modify usage parameters", and select "Other" as the timespan. A date
picker should pop up for each date, but none appears.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: javascript low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1421250

Title:
  date picker in metering missing

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In a recent git checkout (Feb 12th, 2015), go to Admin -> Metering, hit
  "modify usage parameters", and select "Other" as the timespan. A date
  picker should pop up for each date, but none appears.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1421250/+subscriptions



[Yahoo-eng-team] [Bug 1412855] Re: Horizon logs in with unencrypted credentials

2015-02-12 Thread Bogdan Dobrelya
This bug should be superseded by
https://blueprints.launchpad.net/fuel/+spec/ssl-endpoints

** Changed in: fuel/6.0.x
Milestone: None => 6.0.1

** Changed in: fuel/5.1.x
Milestone: None => 5.1.2

** Changed in: fuel/5.0.x
Milestone: None => 5.0.3

** Changed in: fuel/6.0.x
 Assignee: (unassigned) => Fuel Library Team (fuel-library)

** Changed in: fuel/5.1.x
 Assignee: (unassigned) => Fuel Library Team (fuel-library)

** Changed in: fuel/5.0.x
 Assignee: (unassigned) => Fuel Library Team (fuel-library)

** Changed in: fuel/6.0.x
   Status: Triaged => Won't Fix

** Changed in: fuel/5.1.x
   Status: Triaged => Won't Fix

** Changed in: fuel/5.0.x
   Status: Triaged => Won't Fix

** Changed in: fuel/6.1.x
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1412855

Title:
  Horizon logs in with unencrypted credentials

Status in Fuel: OpenStack installer that works:
  Triaged
Status in Fuel for OpenStack 5.0.x series:
  Won't Fix
Status in Fuel for OpenStack 5.1.x series:
  Won't Fix
Status in Fuel for OpenStack 6.0.x series:
  Won't Fix
Status in Fuel for OpenStack 6.1.x series:
  Won't Fix
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Horizon logs in with unencrypted credentials over HTTP.

  Steps:
  1) Open browser development tools.
  2) Log-in to Horizon
  3) Find POST request with "/horizon/auth/login" path.

  Request details:

  Remote Address:172.16.0.2:80
  Request URL:http://172.16.0.2/horizon/auth/login/
  Request Method:POST
  Status Code:302 FOUND
  Form Data:
  
csrfmiddlewaretoken=ulASpgYAsaikVCWsBxH6kFN2MECpaT9Y&region=http%3A%2F%2F192.168.0.1%3A5000%2Fv2.0&username=admin&password=admin

  Actual: security settings are applied only at the product deployment stage.

  Expected: use HTTPS by default to improve infrastructure security at the
  installation and deployment stage.

  Environment:
  Fuel "build_id": "2014-12-26_14-25-46","release": "6.0"
  Dashboard Version: 2014.2
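Assuming the deployment terminates TLS in front of Horizon, the standard Django/Horizon hardening settings look like the fragment below (an illustration, not Fuel's actual fix):

```python
# local_settings.py fragment (sketch)
USE_SSL = True                           # redirect the dashboard to HTTPS
CSRF_COOKIE_SECURE = True                # send CSRF cookie only over HTTPS
SESSION_COOKIE_SECURE = True             # send session cookie only over HTTPS
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
```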

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1412855/+subscriptions



[Yahoo-eng-team] [Bug 1412855] Re: Horizon logs in with unencrypted credentials

2015-02-12 Thread Bogdan Dobrelya
https://github.com/stackforge/puppet-horizon supports SSL starting from
3.0, hence triaged

** No longer affects: fuel/7.0.x

** Also affects: fuel/5.0.x
   Importance: Undecided
   Status: New

** Also affects: fuel/6.0.x
   Importance: Undecided
   Status: New

** Also affects: fuel/5.1.x
   Importance: Undecided
   Status: New

** Changed in: fuel/6.0.x
   Status: New => Triaged

** Changed in: fuel/5.1.x
   Status: New => Triaged

** Changed in: fuel/5.0.x
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1412855

Title:
  Horizon logs in with unencrypted credentials

Status in Fuel: OpenStack installer that works:
  Triaged
Status in Fuel for OpenStack 5.0.x series:
  Won't Fix
Status in Fuel for OpenStack 5.1.x series:
  Won't Fix
Status in Fuel for OpenStack 6.0.x series:
  Won't Fix
Status in Fuel for OpenStack 6.1.x series:
  Won't Fix
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Horizon logs in with unencrypted credentials over HTTP.

  Steps:
  1) Open browser development tools.
  2) Log-in to Horizon
  3) Find POST request with "/horizon/auth/login" path.

  Request details:

  Remote Address:172.16.0.2:80
  Request URL:http://172.16.0.2/horizon/auth/login/
  Request Method:POST
  Status Code:302 FOUND
  Form Data:
  
csrfmiddlewaretoken=ulASpgYAsaikVCWsBxH6kFN2MECpaT9Y&region=http%3A%2F%2F192.168.0.1%3A5000%2Fv2.0&username=admin&password=admin

  Actual: security settings are applied only at the product deployment stage.

  Expected: use HTTPS by default to improve infrastructure security at the
  installation and deployment stage.

  Environment:
  Fuel "build_id": "2014-12-26_14-25-46","release": "6.0"
  Dashboard Version: 2014.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1412855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412802] Re: copy-from broken for large files and swift

2015-02-12 Thread nikhil komawar
** Also affects: glance-store
   Importance: Undecided
   Status: New

** Changed in: glance-store
   Status: New => In Progress

** Changed in: glance-store
 Assignee: (unassigned) => Stuart McLaren (stuart-mclaren)

** Changed in: glance-store
Milestone: None => v0.1.11

** Changed in: glance-store
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1412802

Title:
  copy-from broken for large files and swift

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  In Progress
Status in Glance juno series:
  In Progress
Status in OpenStack Glance backend store-drivers library (glance_store):
  In Progress

Bug description:
  Glance may lose some image data while transferring it to the backing
  store, thus corrupting the image, when ALL of the following conditions
  are met:

  - Image is being created by copying data from remote source (--copy-from CLI 
parameter or appropriate API call)
  - Backing store is Swift
  - Image size is larger than the configured "swift_store_large_object_size"

  In such scenarios the last chunk stored in Swift will be significantly
  smaller than expected. An attempt to download the image will result in a
  checksum verification error, even though the checksum stored in Glance
  (with the image metadata) is correct, and so is the size.
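
  The invariant being violated can be sketched in a few lines of Python (a
  simplified model of the chunked upload, not Glance's actual code): the
  chunk sizes and the running checksum are both computed from the source
  stream, so if the store silently drops part of the last chunk, the stored
  object disagrees with both the recorded size and the recorded checksum.

```python
import hashlib
import io

def store_in_chunks(reader, chunk_size):
    """Split a stream into fixed-size chunks; return (chunk_sizes, md5hex)."""
    sizes, digest = [], hashlib.md5()
    while True:
        chunk = reader.read(chunk_size)
        if not chunk:
            break
        sizes.append(len(chunk))
        digest.update(chunk)
    return sizes, digest.hexdigest()

data = b"x" * 550  # stand-in for image data; chunk size 200 -> 200, 200, 150
sizes, checksum = store_in_chunks(io.BytesIO(data), 200)
assert sum(sizes) == len(data)
# The bug: the last chunk actually written to Swift ends up shorter than
# sizes[-1], so the stored object is smaller than the size recorded in the
# Glance metadata, and downloading it later fails checksum verification.
```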

  This is easily reproducible even on devstack (if the devstack is
  configured to run Swift as the Glance backend). Just decrease
  'swift_store_large_object_size' to some reasonably low value (e.g. 200
  MB) and try to copy-from any image larger than that value. After the
  upload succeeds, check the object size in Swift (by either summing the
  sizes of all the chunks or by looking at the size of the virtual large
  object) - it will be smaller than expected:

  
  glance image-create --name tst --disk-format qcow2 --container-format bare 
--copy-from http://192.168.56.1:8000/F18-x86_64-cfntools.qcow2

  ...

  glance image-list
  
+--+-+-+--+---++
  | ID   | Name| 
Disk Format | Container Format | Size  | Status |
  
+--+-+-+--+---++
  | fc34ec49-4bd3-40dd-918f-44d3254f2ac9 | tst | 
qcow2   | bare | 536412160 | active |
  
+--+-+-+--+---++

  ...

  swift stat glance fc34ec49-4bd3-40dd-918f-44d3254f2ac9 --os-tenant-name 
service --os-username admin
 Account: AUTH_cce6e9c12fa34c63b64ef29e84861554
   Container: glance
  Object: fc34ec49-4bd3-40dd-918f-44d3254f2ac9
Content Type: application/octet-stream
  Content Length: 509804544 <-- see, the size is different!
   Last Modified: Mon, 19 Jan 2015 15:52:18 GMT
ETag: "6d0612f82db9a531b34d25823a45073d"
Manifest: glance/fc34ec49-4bd3-40dd-918f-44d3254f2ac9-
   Accept-Ranges: bytes
 X-Timestamp: 1421682737.01148
  X-Trans-Id: tx01a19f7476a541808c9a1-0054bd28e1

  

  glance image-download tst --file out.qcow2
  [Errno 32] Corrupt image download. Checksum was 
0eeddae1007f01b0029136d28518f538 expected 3ecddfe0787a392960d230c87a421c6a

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1412802/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368815] Re: qemu-img convert intermittently corrupts output images

2015-02-12 Thread Davanum Srinivas (DIMS)
Marking as Wont-Fix.

** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368815

Title:
  qemu-img convert intermittently corrupts output images

Status in Cinder:
  Triaged
Status in OpenStack Compute (Nova):
  Won't Fix
Status in QEMU:
  In Progress
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Trusty:
  Fix Released
Status in qemu source package in Utopic:
  Fix Released
Status in qemu source package in Vivid:
  Fix Released

Bug description:
  ==
  Impact: occasional image corruption (any format on local filesystem)
  Test case: see the qemu-img command below
  Regression potential: this cherrypicks a patch from upstream to a 
not-insignificantly older qemu source tree.  While the cherrypick seems sane, 
it's possible that there are subtle interactions with the other delta.  I'd 
really like for a full qa-regression-test qemu testcase to be run against this 
package.
  ==

  -- Found in releases qemu-2.0.0, qemu-2.0.2, qemu-2.1.0. Tested on
  Ubuntu 14.04 using Ext4 filesystems.

  The command

    qemu-img convert -O raw inputimage.qcow2 outputimage.raw

  intermittently creates corrupted output images, when the input image
  is not yet fully synchronized to disk. While the issue has actually
  been discovered in operation of of OpenStack nova, it can be
  reproduced "easily" on command line using

    cat $SRC_PATH > $TMP_PATH && $QEMU_IMG_PATH convert -O raw $TMP_PATH
  $DST_PATH && cksum $DST_PATH

  on filesystems exposing this behavior. (The difficult part of this
  exercise is to prepare a filesystem to reliably trigger this race. On
  my test machine some filesystems are affected while other aren't, and
  unfortunately I haven't found the relevant difference between them,
  yet. Possible it's timing issues completely out of userspace control
  ...)

  The root cause, however, is the same as in

    http://lists.gnu.org/archive/html/coreutils/2011-04/msg00069.html

  and it can be solved the same way as suggested in

    http://lists.gnu.org/archive/html/coreutils/2011-04/msg00102.html

  In qemu, file block/raw-posix.c use the FIEMAP_FLAG_SYNC, i.e change

  f.fm.fm_flags = 0;

  to

  f.fm.fm_flags = FIEMAP_FLAG_SYNC;

  As discussed in the thread mentioned above, retrieving a page cache
  coherent map of file extents is possible only after fsync on that
  file.
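
  The caller-side workaround discussed in the nova bug (and rejected there
  in favor of the qemu-side FIEMAP_FLAG_SYNC fix) can be sketched in
  Python; the qemu-img invocation and paths here are illustrative, not
  nova's actual code:

```python
import os
import subprocess

def fsync_file(path):
    """Force a file's data out of the page cache onto disk."""
    fd = os.open(path, os.O_RDONLY)
    try:
        os.fsync(fd)
    finally:
        os.close(fd)

def convert_to_raw(src, dst, qemu_img="qemu-img"):
    # Syncing first means qemu-img's fiemap-based extent map cannot race
    # with dirty pages still sitting in the page cache.
    fsync_file(src)
    subprocess.check_call([qemu_img, "convert", "-O", "raw", src, dst])
```

  With FIEMAP_FLAG_SYNC in qemu-img itself, this extra fsync in the caller
  becomes unnecessary, which is exactly the argument made above.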

  See also

    https://bugs.launchpad.net/nova/+bug/1350766

  In that bug report filed against nova, fsync had been suggested to be
  performed by the framework invoking qemu-img. However, as the choice
  of fiemap -- implying this otherwise unneeded fsync of a temporary
  file  -- is not made by the caller but by qemu-img, I agree with the
  nova bug reviewer's objection to put it into nova. The fsync should
  instead be triggered by qemu-img utilizing the FIEMAP_FLAG_SYNC,
  specifically intended for that purpose.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1368815/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346658] Re: All DB model classes should be consolidated into one directory

2015-02-12 Thread Ann Kamyshnikova
Is this still needed as we have Core and Vendor code decomposition?

** Changed in: neutron
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346658

Title:
  All DB model classes should be consolidated into one directory

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  We have discussed moving all models out of their current diverse
  locations to one directory, like maybe

neutron/db/models/*.py

  The idea is to move just the model classes (not the entire modules
  that they currently reside in) here. Then head.py would be able to

from neutron.db.models import *  # noqa

  and this would have much less baggage than importing all the current
  modules.

  The convention of putting all models in one directory will be quite
  easy to follow and maintain.
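
  A minimal runnable sketch of the proposed layout (the package, module,
  and class names here are made up for illustration): a models package
  whose __init__ re-exports every model class, so callers such as the
  migration head module can pick them all up from one place.

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway package on disk to mimic the proposed layout:
#   db_models/network.py, db_models/port.py, db_models/__init__.py
base = tempfile.mkdtemp()
pkg = os.path.join(base, "db_models")
os.makedirs(pkg)
with open(os.path.join(pkg, "network.py"), "w") as f:
    f.write("class Network(object):\n    pass\n")
with open(os.path.join(pkg, "port.py"), "w") as f:
    f.write("class Port(object):\n    pass\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    # __init__ re-exports every model, so a single wildcard import of the
    # package sees all model classes without importing the full modules
    # they used to live in.
    f.write("from db_models.network import Network  # noqa\n"
            "from db_models.port import Port  # noqa\n")

sys.path.insert(0, base)
importlib.invalidate_caches()
models = importlib.import_module("db_models")
```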

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1346658/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421239] [NEW] LBaaS- overwrite when associate 2 different Vip's to same external IP

2015-02-12 Thread Eran Kuris
Public bug reported:

I created 2 VIPs in LBaaS.
I associated one VIP (192.168.1.8) to an external IP (10.35.166.4).
When I try to associate the second VIP (192.168.1.9) to the same external IP
10.35.166.4, it succeeds but overwrites the first VIP, and there is no warning
message.
We should prevent this, just as we cannot associate a VM with an external IP
that is already associated with another VM (there we get an error message:
Conflict (HTTP 409) (Request-ID: req-35eeddc9-d264-4fc5-a336-d201e4a66231 ])

**If we do not prevent it, we should at least show a warning message.


How to reproduce:
Create 2 VMs in your tenant with an internal network & an external network.

1. Create LB pool on subnet of tenant A
>

2. Log into VM1 and VM2, enable httpd


3. Create 2 members into webpool (use VM’s Ips)
 --protocol-port 80 webpool>

4.Create a Healthmonitor and associated it with the webpool



Run:


 webpool

5.Create a VIP for the webpool
 webpool>

6.Check HAproxy created (for example)


7. Create a floating IP from external network


8.Check port id of LB VIP



9.Associate floating IP with the LB VIP
 

10.From external network, try to connect the floating IP by curl 
http://IP>


Create a VIP for the same VMs for another service (FTP or SSH, for example)
and try to associate it with the same external floating IP that was already
used.

** Affects: neutron
 Importance: Undecided
 Status: New

** Attachment added: "LBaaS floating IP .odt"
   
https://bugs.launchpad.net/bugs/1421239/+attachment/4317992/+files/LBaaS%20floating%20IP%20.odt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421239

Title:
  LBaaS- overwrite when associate  2 different Vip's  to same external
  IP

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I created 2 VIPs in LBaaS.
  I associated one VIP (192.168.1.8) to an external IP (10.35.166.4).
  When I try to associate the second VIP (192.168.1.9) to the same external IP
10.35.166.4, it succeeds but overwrites the first VIP, and there is no warning
message.
  We should prevent this, just as we cannot associate a VM with an external IP
that is already associated with another VM (there we get an error message:
Conflict (HTTP 409) (Request-ID: req-35eeddc9-d264-4fc5-a336-d201e4a66231 ])

  **If we do not prevent it, we should at least show a warning message.


  How to reproduce:
  Create 2 VMs in your tenant with an internal network & an external network.

  1. Create LB pool on subnet of tenant A
  >

  2. Log into VM1 and VM2, enable httpd
  

  3. Create 2 members into webpool (use VM’s Ips)
   --protocol-port 80 webpool>

  4.Create a Healthmonitor and associated it with the webpool

  

  Run:
  

   webpool

  5.Create a VIP for the webpool
   webpool>

  6.Check HAproxy created (for example)
  

  7. Create a floating IP from external network
  

  8.Check port id of LB VIP
  
  

  9.Associate floating IP with the LB VIP
   

  10.From external network, try to connect the floating IP by curl 
  http://IP>

  
  Create a VIP for the same VMs for another service (FTP or SSH, for example)
and try to associate it with the same external floating IP that was already
used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421232] [NEW] Restarting neutron openvswitch while having broadcast/multicast traffic going into br-tun makes a broadcast storm over the tunnel network

2015-02-12 Thread Miguel Angel Ajo
Public bug reported:

As a result from the following bug (br-tun being reset across agent
restarts) https://bugs.launchpad.net/neutron/+bug/1383674

If, in addition, a broadcast or multicast packet jumps from br-int into
br-tun, openvswitch will bring down the network by creating a broadcast
storm.

It's necessary to have at least 3 nodes connected via tunnels:

The packets will go:

NodeA -> NodeB -> NodeC -> NodeA

Or more amplified if we had more nodes.

This would be avoided if we re-created br-tun in fail-mode "secure" at least, 
because that doesn't introduce the "NORMAL" default switching rule
on the switch at creation (origin of this problem.)

** Affects: neutron
 Importance: Undecided
 Assignee: Miguel Angel Ajo (mangelajo)
 Status: Confirmed

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Miguel Angel Ajo (mangelajo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421232

Title:
  Restarting neutron openvswitch while having broadcast/multicast
  traffic going into br-tun makes a broadcast storm over the tunnel
  network

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  As a result from the following bug (br-tun being reset across agent
  restarts) https://bugs.launchpad.net/neutron/+bug/1383674

  If, in addition, a broadcast or multicast packet jumps from br-int into
  br-tun, openvswitch will bring down the network by creating a broadcast
  storm.

  It's necessary to have at least 3 nodes connected via tunnels:

  The packets will go:

  NodeA -> NodeB -> NodeC -> NodeA

  Or more amplified if we had more nodes.

  This would be avoided if we re-created br-tun in fail-mode "secure" at least, 
because that doesn't introduce the "NORMAL" default switching rule
  on the switch at creation (origin of this problem.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421232/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420809] Re: Inconsistent between Horizon & Glance image-create help

2015-02-12 Thread Pasquale Porreca
I changed the status back to New, since I noticed that "Opinion" bugs are
filtered out of the normal search and must be enabled in the advanced search
to be seen. It was not my intention to hide this report.

** Changed in: horizon
   Status: Opinion => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1420809

Title:
  Inconsistent between Horizon & Glance image-create help

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the Glance CLI help, some parameters are documented as "optional" when
  they are actually mandatory.
  Comparing with the Horizon WebUI dashboard shows which fields are mandatory.
  To reproduce the bug in the CLI, run the command:
  #glance help image-create

  Compare the output with the WebUI.

  Version 
  [root@puma15 ~(keystone_eran1)]# rpm -qa |grep glance 
  python-glanceclient-0.14.2-1.el7ost.noarch
  openstack-glance-2014.2.1-3.el7ost.noarch
  python-glance-2014.2.1-3.el7ost.noarch
  python-glance-store-0.1.10-2.el7ost.noarch

  [root@puma15 ~(keystone_eran1)]# rpm -qa |grep rhel
  libreport-rhel-2.1.11-10.el7.x86_64

  
  **Attached screenshot - compares the CLI & WebUI

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1420809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405082] Re: selected networks in launch instance workflow are not aligned well

2015-02-12 Thread Rob Cresswell
Fixed by  https://review.openstack.org/#/c/151976/

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1405082

Title:
  selected networks in launch instance workflow are not aligned well

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  selected networks in the launch instance workflow are not aligned well;
  see the attachment for details

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1405082/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420809] Re: Inconsistent between Horizon & Glance image-create help

2015-02-12 Thread Eran Kuris
** Summary changed:

- Glance image-create  help in CLI  is not correct
+ Inconsistent between Horizon & Glance image-create help

** Tags added: horizon

** Project changed: glance => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1420809

Title:
  Inconsistent between Horizon & Glance image-create help

Status in OpenStack Dashboard (Horizon):
  Opinion

Bug description:
  In the Glance CLI help, some parameters are documented as "optional" when
  they are actually mandatory.
  Comparing with the Horizon WebUI dashboard shows which fields are mandatory.
  To reproduce the bug in the CLI, run the command:
  #glance help image-create

  Compare the output with the WebUI.

  Version 
  [root@puma15 ~(keystone_eran1)]# rpm -qa |grep glance 
  python-glanceclient-0.14.2-1.el7ost.noarch
  openstack-glance-2014.2.1-3.el7ost.noarch
  python-glance-2014.2.1-3.el7ost.noarch
  python-glance-store-0.1.10-2.el7ost.noarch

  [root@puma15 ~(keystone_eran1)]# rpm -qa |grep rhel
  libreport-rhel-2.1.11-10.el7.x86_64

  
  **Attached screenshot - compares the CLI & WebUI

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1420809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421166] [NEW] admin state column is missing in router table

2015-02-12 Thread Masco Kaliyamoorthy
Public bug reported:

admin state column is missing in router table

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1421166

Title:
  admin state column is missing in router table

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  admin state column is missing in router table

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1421166/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421155] [NEW] project instance limit also applicable for compute node

2015-02-12 Thread Sagar Shukla
Public bug reported:

I am running the OpenStack Juno release on high-capacity compute nodes. I
created a few small VMs, and after 10 VMs were running on a compute node,
adding a new VM failed. It only produced a warning message in the logs that
gave no indication of what the issue was. Following is a snippet of the
nova-compute.log file on the compute node.
---
2015-02-12 06:49:23.288 16846 WARNING oslo.messaging._drivers.amqpdriver [-] 
Number of call queues is greater than warning threshold: 10. There could be a 
leak.
2015-02-12 06:49:46.622 16846 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources
2015-02-12 06:49:48.126 16846 AUDIT nova.compute.resource_tracker [-] Total 
physical ram (MB): 200913, total allocated virtual ram (MB): 16896
---

After increasing the threshold limit of MaxInstances I was able to
create new VMs.

Ideal behavior should be:
(1) The limit on the number of VMs on a compute node should depend on available
resources rather than on the per-project VM quota.
(2) In the worst case, there should be a separate limiting parameter to govern
the number of VMs on a compute host.
(3) Error logs should be more informative.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: juno

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1421155

Title:
  project instance limit also applicable for compute node

Status in OpenStack Compute (Nova):
  New

Bug description:
  I am running the OpenStack Juno release on high-capacity compute nodes. I
created a few small VMs, and after 10 VMs were running on a compute node,
adding a new VM failed. It only produced a warning message in the logs that
gave no indication of what the issue was. Following is a snippet of the
nova-compute.log file on the compute node.
  ---
  2015-02-12 06:49:23.288 16846 WARNING oslo.messaging._drivers.amqpdriver [-] 
Number of call queues is greater than warning threshold: 10. There could be a 
leak.
  2015-02-12 06:49:46.622 16846 AUDIT nova.compute.resource_tracker [-] 
Auditing locally available compute resources
  2015-02-12 06:49:48.126 16846 AUDIT nova.compute.resource_tracker [-] Total 
physical ram (MB): 200913, total allocated virtual ram (MB): 16896
  ---

  After increasing the threshold limit of MaxInstances I was able to
  create new VMs.

  Ideal behavior should be:
  (1) The limit on the number of VMs on a compute node should depend on
available resources rather than on the per-project VM quota.
  (2) In the worst case, there should be a separate limiting parameter to
govern the number of VMs on a compute host.
  (3) Error logs should be more informative.
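
  For reference, the per-project limit the reporter appears to have hit is
  the instance quota in nova.conf (a hedged sketch; the value shown is
  illustrative). Note that it is a per-tenant quota, not a per-compute-node
  capacity limit, which is what makes this failure mode confusing:

```ini
[DEFAULT]
# Per-project (tenant) cap on the number of instances. Raising this is
# what allowed new VMs to be created; it says nothing about how many VMs
# a single compute host can actually carry.
quota_instances = 20
```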

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1421155/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421132] [NEW] test_rebuild_availability_range is failing from time to time

2015-02-12 Thread Rossella Sblendido
Public bug reported:

Functional test test_rebuild_availability_range is failing quite often
with the following stacktrace:

2015-02-11 02:48:31.256 | 2015-02-11 02:48:29.890 | Traceback (most recent 
call last):
2015-02-11 02:48:31.256 | 2015-02-11 02:48:29.892 |   File 
"neutron/tests/functional/db/test_ipam.py", line 198, in 
test_rebuild_availability_range
2015-02-11 02:48:31.257 | 2015-02-11 02:48:29.893 | 
self._create_port(self.port_id)
2015-02-11 02:48:31.257 | 2015-02-11 02:48:29.894 |   File 
"neutron/tests/functional/db/test_ipam.py", line 128, in _create_port
2015-02-11 02:48:31.258 | 2015-02-11 02:48:29.896 | 
self.plugin.create_port(self.cxt, {'port': port})
2015-02-11 02:48:31.258 | 2015-02-11 02:48:29.897 |   File 
"neutron/db/db_base_plugin_v2.py", line 1356, in create_port
2015-02-11 02:48:31.258 | 2015-02-11 02:48:29.898 | context, 
ip_address, network_id, subnet_id, port_id)
2015-02-11 02:48:31.259 | 2015-02-11 02:48:29.900 |   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 470, 
in __exit__
2015-02-11 02:48:31.259 | 2015-02-11 02:48:29.901 | self.rollback()
2015-02-11 02:48:31.260 | 2015-02-11 02:48:29.902 |   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 
60, in __exit__
2015-02-11 02:48:31.260 | 2015-02-11 02:48:29.904 | 
compat.reraise(exc_type, exc_value, exc_tb)
2015-02-11 02:48:35.598 | 2015-02-11 02:48:29.905 |   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 467, 
in __exit__
2015-02-11 02:48:35.599 | 2015-02-11 02:48:29.906 | self.commit()
2015-02-11 02:48:35.600 | 2015-02-11 02:48:29.907 |   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 377, 
in commit
2015-02-11 02:48:35.601 | 2015-02-11 02:48:29.909 | self._prepare_impl()
2015-02-11 02:48:35.601 | 2015-02-11 02:48:29.910 |   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 357, 
in _prepare_impl
2015-02-11 02:48:35.602 | 2015-02-11 02:48:29.911 | self.session.flush()
2015-02-11 02:48:35.603 | 2015-02-11 02:48:29.913 |   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1919, 
in flush
2015-02-11 02:48:35.603 | 2015-02-11 02:48:29.914 | self._flush(objects)
2015-02-11 02:48:35.604 | 2015-02-11 02:48:29.915 |   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2037, 
in _flush
2015-02-11 02:48:35.605 | 2015-02-11 02:48:29.917 | 
transaction.rollback(_capture_exception=True)
2015-02-11 02:48:35.605 | 2015-02-11 02:48:29.918 |   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 
60, in __exit__
2015-02-11 02:48:35.606 | 2015-02-11 02:48:29.919 | 
compat.reraise(exc_type, exc_value, exc_tb)
2015-02-11 02:48:35.607 | 2015-02-11 02:48:29.921 |   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2001, 
in _flush
2015-02-11 02:48:35.608 | 2015-02-11 02:48:29.922 | 
flush_context.execute()
2015-02-11 02:48:35.608 | 2015-02-11 02:48:29.923 |   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 
372, in execute
2015-02-11 02:48:35.609 | 2015-02-11 02:48:29.925 | rec.execute(self)
2015-02-11 02:48:35.610 | 2015-02-11 02:48:29.926 |   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 
526, in execute
2015-02-11 02:48:35.610 | 2015-02-11 02:48:29.927 | uow
2015-02-11 02:48:35.611 | 2015-02-11 02:48:29.929 |   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 
46, in save_obj
2015-02-11 02:48:35.612 | 2015-02-11 02:48:29.930 | uowtransaction)
2015-02-11 02:48:35.612 | 2015-02-11 02:48:29.931 |   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 
171, in _organize_states_for_save
2015-02-11 02:48:35.613 | 2015-02-11 02:48:29.933 | 
state_str(existing)))
2015-02-11 02:48:35.614 | 2015-02-11 02:48:29.934 | FlushError: New 
instance  with identity key (, (u'10.10.10.2', u'test_sub_id', 
'test_net_id')) conflicts with persistent instance 


See for example 
http://logs.openstack.org/35/149735/4/gate/gate-neutron-dsvm-functional/fc960fe/console.html

Logstack query:
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmx1c2hFcnJvclwiICBBTkQgdGFnczpcImNvbnNvbGUuaHRtbFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDIzNzMzNTU0NDc5LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421132

Title:
  test_rebuild_availability_range is failing from time to time

[Yahoo-eng-team] [Bug 1421128] [NEW] improve the router create form

2015-02-12 Thread Masco Kaliyamoorthy
Public bug reported:

The router create form is not complete.
It is missing the items below:
- optional admin state and external network parameters
- help text

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1421128

Title:
  improve the router create form

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The router create form is not complete.
  It is missing the items below:
  - optional admin state and external network parameters
  - help text

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1421128/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420597] Re: metadata missed after aggregate's az updated

2015-02-12 Thread ugvddm
@Eric, I can't reproduce it on my devstack with the latest code; everything
is OK. Please indicate which version you are running. :)


** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1420597

Title:
  metadata missed after aggregate's az updated

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  After changing aggregate's availability_zone to another, the other
  metadatas of this aggregate missed.

  Reproduce:
  1. create one aggregate which belong to "nova";
  # nova aggregate-create hagg-test nova
  +-+---+---+---+--+
  | Id  | Name  | Availability Zone | Hosts | Metadata |
  +-+---+---+---+--+
  | 134 | hagg-test | nova  |   | 'availability_zone=nova' |
  +-+---+---+---+--+
  2. set metadata: foo=bar
  # nova aggregate-set-metadata hagg-test foo=bar
  Metadata has been successfully updated for aggregate 134.
  
+-+---+---+---+-+
  | Id  | Name  | Availability Zone | Hosts | Metadata  
  |
  
+-+---+---+---+-+
  | 134 | hagg-test | nova  |   | 'availability_zone=nova', 
'foo=bar' |
  
+-+---+---+---+-+
  3. change the availability_zone
  # nova aggregate-update 134 hagg-test az-test
  Aggregate 136 has been successfully updated.
  +-+---+---+---+-+
  | Id  | Name  | Availability Zone | Hosts | Metadata|
  +-+---+---+---+-+
  | 136 | hagg-test | az-test   |   | 'availability_zone=az-test' |
  +-+---+---+---+-+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1420597/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421105] [NEW] L2 population sometimes failed with multiple neutron-server

2015-02-12 Thread shihanzhang
Public bug reported:

In my environment with two neutron-servers, 'mechanism_drivers' is openvswitch
and l2 population is enabled.
When I delete a VM which is network-A's last VM on compute node-A, I find a
KeyError in the compute node-B openvswitch-agent log, thrown by 'del_fdb_flow':

def del_fdb_flow(self, br, port_info, remote_ip, lvm, ofport):
    if port_info == q_const.FLOODING_ENTRY:
        lvm.tun_ofports.remove(ofport)
        if len(lvm.tun_ofports) > 0:
            ofports = _ofport_set_to_str(lvm.tun_ofports)
            br.mod_flow(table=constants.FLOOD_TO_TUN,
                        dl_vlan=lvm.vlan,
                        actions="strip_vlan,set_tunnel:%s,output:%s" %
                        (lvm.segmentation_id, ofports))

the reason is that the openvswitch-agent receives the 'fdb_remove' RPC
request twice. Why does it receive it twice? I think the reason is:

there are two neutron-servers (neutron-serverA and neutron-serverB) and one
compute node-A:
1. nova deletes the VM on compute node-A. It first deletes the TAP device;
the OVS agent then detects that the port is deleted and sends the RPC request
'update_device_down' to neutron-serverA. When neutron-serverA receives this
request, l2 population sends the first 'fdb_remove'.
2. after nova deletes the TAP device, it sends the REST API request
'delete_port' to neutron-serverB, and l2 population sends a second
'fdb_remove' RPC request.
When the OVS agent receives the second 'fdb_remove' and calls del_fdb_flow,
'lvm.tun_ofports.remove(ofport)' throws a KeyError because the ofport was
already removed by the first request.
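A sketch of one defensive fix (an assumption on my part, not a proposed neutron patch): make the flow removal idempotent so a duplicate 'fdb_remove' cannot raise.

```python
# Sketch: tun_ofports is a set, and set.remove() raises KeyError when the
# element is already gone; set.discard() makes the second, duplicate
# 'fdb_remove' a no-op. Simplified stand-in, not the neutron code path.
tun_ofports = {"ofport-1", "ofport-2"}

def del_fdb_flow_idempotent(ofport):
    tun_ofports.discard(ofport)   # no KeyError on a repeated call
    return len(tun_ofports)

remaining_after_first = del_fdb_flow_idempotent("ofport-1")   # first RPC
remaining_after_second = del_fdb_flow_idempotent("ofport-1")  # duplicate RPC
```

This only hides the symptom, of course; the duplicate RPC from the second neutron-server would still need to be addressed.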

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421105

Title:
  L2 population sometimes failed with multiple neutron-server

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In my environment with two neutron-servers, 'mechanism_drivers' is
  openvswitch and l2 population is enabled.
  When I delete a VM which is the last VM of network-A on compute node-A, I
  find a KeyError in the openvswitch-agent log on compute node-B, thrown by
  'del_fdb_flow':

  def del_fdb_flow(self, br, port_info, remote_ip, lvm, ofport):
      if port_info == q_const.FLOODING_ENTRY:
          lvm.tun_ofports.remove(ofport)
          if len(lvm.tun_ofports) > 0:
              ofports = _ofport_set_to_str(lvm.tun_ofports)
              br.mod_flow(table=constants.FLOOD_TO_TUN,
                          dl_vlan=lvm.vlan,
                          actions="strip_vlan,set_tunnel:%s,output:%s" %
                          (lvm.segmentation_id, ofports))

  the reason is that the openvswitch-agent receives the 'fdb_remove' RPC
  request twice. Why does it receive it twice? I think the reason is:

  there are two neutron-servers (neutron-serverA and neutron-serverB) and one
  compute node-A:
  1. nova deletes the VM on compute node-A. It first deletes the TAP device;
  the OVS agent then detects that the port is deleted and sends the RPC
  request 'update_device_down' to neutron-serverA. When neutron-serverA
  receives this request, l2 population sends the first 'fdb_remove'.
  2. after nova deletes the TAP device, it sends the REST API request
  'delete_port' to neutron-serverB, and l2 population sends a second
  'fdb_remove' RPC request.
  When the OVS agent receives the second 'fdb_remove' and calls del_fdb_flow,
  'lvm.tun_ofports.remove(ofport)' throws a KeyError because the ofport was
  already removed by the first request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421105/+subscriptions



[Yahoo-eng-team] [Bug 1421098] [NEW] ofagent: test_update_instance_port_admin_state failure

2015-02-12 Thread YAMAMOTO Takashi
Public bug reported:

The recently introduced tempest test case
test_update_instance_port_admin_state uncovered a bug in ofagent.

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress


** Tags: openflowagent

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421098

Title:
  ofagent: test_update_instance_port_admin_state failure

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The recently introduced tempest test case
  test_update_instance_port_admin_state uncovered a bug in ofagent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421098/+subscriptions
