[Yahoo-eng-team] [Bug 1391827] Re: nova-manage service list should not be allowed for a tenant

2014-11-17 Thread Itzik Brown
Agree.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1391827

Title:
  nova-manage service list should not be allowed for a tenant

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  nova-manage service list is an administration command and a tenant should
  not be able to run it.
  When running as a tenant user (role _member_), 'nova-manage service list'
  shows the same output as when running as 'admin'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1391827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393329] [NEW] Trailing whitespaces pass IP address validation

2014-11-17 Thread Hironori Shiina
Public bug reported:

Trailing whitespace in an IP address is not detected by the validation.
This whitespace causes trouble later.

In the following case, '\r' in the IP address is not detected.
# neutron subnet-show ----
+------------------+------------------------------------------------+
| Field            | Value                                          |
+------------------+------------------------------------------------+
| allocation_pools | {"start": "10.1.1.240\r", "end": "10.1.1.250"} |
 :
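
For illustration only (a sketch, not Neutron's actual validator), a strict
check could reject surrounding whitespace before parsing, e.g. with the
netaddr library that Neutron already uses:

    import netaddr

    def validate_ip_address(value):
        # Hypothetical helper: reject values like '10.1.1.240\r' or
        # ' 10.1.1.240' outright instead of silently accepting them.
        if not isinstance(value, basestring) or value != value.strip():
            raise ValueError("IP address contains whitespace: %r" % (value,))
        try:
            netaddr.IPAddress(value)
        except netaddr.AddrFormatError:
            raise ValueError("Invalid IP address: %r" % (value,))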

** Affects: neutron
 Importance: Undecided
 Assignee: Hironori Shiina (shiina-hironori)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hironori Shiina (shiina-hironori)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393329

Title:
  Trailing whitespaces pass IP address validation

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Trailing whitespace in an IP address is not detected by the validation.
  This whitespace causes trouble later.

  In the following case, '\r' in the IP address is not detected.
  # neutron subnet-show ----
  +------------------+------------------------------------------------+
  | Field            | Value                                          |
  +------------------+------------------------------------------------+
  | allocation_pools | {"start": "10.1.1.240\r", "end": "10.1.1.250"} |
   :

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1393329/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1224737] Re: alembic alter_column name arg deprecated

2014-11-17 Thread Ann Kamyshnikova
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1224737

Title:
  alembic alter_column name arg deprecated

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  /usr/local/lib/python2.7/dist-packages/alembic/util.py:272: UserWarning: 
Argument 'name' is now named 'new_column_name' for function 'alter_column'
(oldname, newname, fn.__name__))
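
For reference, the non-deprecated spelling looks like this (a sketch with
placeholder table/column names, not the actual neutron migration):

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        # 'my_table', 'old_col' and 'new_col' are illustrative names only.
        op.alter_column('my_table', 'old_col',
                        new_column_name='new_col',
                        existing_type=sa.String(length=255))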

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1224737/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393345] [NEW] How to auto-adjust the dimensions of the noVNC window image in horizon

2014-11-17 Thread Shiju
Public bug reported:

The noVNC console of an instance in OpenStack Icehouse gives a scrollable
web interface. How can it be auto-resized to fit the browser window? The
browser is Firefox 31.x.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1393345

Title:
  How to auto-adjust the dimensions of the noVNC window image in horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The noVNC console of an instance in OpenStack Icehouse gives a scrollable
  web interface. How can it be auto-resized to fit the browser window? The
  browser is Firefox 31.x.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1393345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393362] [NEW] linuxbridge agent is using too much memory

2014-11-17 Thread Darragh O'Reilly
Public bug reported:

When vxlan is configured:

$ ps aux | grep linuxbridge
vagrant  21051  3.2 28.9 504764 433644 pts/3   S+   09:08   0:02 python 
/usr/local/bin/neutron-linuxbridge-agent --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini


A list with over 16 million numbers is created here:

 for segmentation_id in range(1, constants.MAX_VXLAN_VNI + 1):

https://github.com/openstack/neutron/blob/b5859998bc662569fee4b34fa079b4c37744de2c/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py#L526

and does not seem to be garbage collected for some reason.

Using xrange instead:

$ ps -aux | grep linuxb
vagrant   7397  0.1  0.9 106412 33236 pts/10   S+   09:19   0:05 python 
/usr/local/bin/neutron-linuxbridge-agent --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
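
The fix suggested above is a one-line change (Python 2 sketch):

    # xrange() yields the VNIs lazily instead of materializing a list of
    # ~16 million integers up front.
    for segmentation_id in xrange(1, constants.MAX_VXLAN_VNI + 1):
        pass  # rest of the loop body unchanged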

** Affects: neutron
 Importance: Undecided
 Assignee: Darragh O'Reilly (darragh-oreilly)
 Status: In Progress


** Tags: lb

** Changed in: neutron
 Assignee: (unassigned) => Darragh O'Reilly (darragh-oreilly)

** Changed in: neutron
   Status: New => In Progress

** Tags added: lb

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393362

Title:
  linuxbridge agent is using too much memory

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  When vxlan is configured:

  $ ps aux | grep linuxbridge
  vagrant  21051  3.2 28.9 504764 433644 pts/3   S+   09:08   0:02 python 
/usr/local/bin/neutron-linuxbridge-agent --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini

  
  A list with over 16 million numbers is created here:

   for segmentation_id in range(1, constants.MAX_VXLAN_VNI + 1):

  
https://github.com/openstack/neutron/blob/b5859998bc662569fee4b34fa079b4c37744de2c/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py#L526

  and does not seem to be garbage collected for some reason.

  Using xrange instead:

  $ ps -aux | grep linuxb
  vagrant   7397  0.1  0.9 106412 33236 pts/10   S+   09:19   0:05 python 
/usr/local/bin/neutron-linuxbridge-agent --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1393362/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393365] [NEW] cross-manager use of config values for backward compatibility should have deprecation warnings

2014-11-17 Thread Henry Nash
Public bug reported:

There are a few cases where, for backward compatibility, we honor older
config values to ensure that installations don't break on upgrade
between releases.  A good example of this is the 'driver' config setting
from when we split up the original identity manager/backend - as well as
the config values around the new split of assignment.

We should issue deprecation warnings when the new config values have not
been set and the old ones are still set (in which case we use the old
values).  However, the current versionutils.deprecated class doesn't
really support the logging of arbitrary objects (it supports just
classes and functions).  This should be enhanced, and then places where
we do provide this backward compatibility for config values should be so
marked (The __init__ method in the manager class for resource/core.py
and assignment/core.py are good places to start).
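
A minimal sketch of the kind of warning meant here, assuming illustrative
option names rather than keystone's exact ones:

    import logging

    LOG = logging.getLogger(__name__)

    def resolve_driver(conf):
        # If the new option is unset but the old one is set, fall back to
        # the old value and warn that this behaviour is deprecated.
        if conf.resource.driver is None and conf.assignment.driver is not None:
            LOG.warning("Using [assignment] driver for the resource backend "
                        "is deprecated; set [resource] driver instead.")
            return conf.assignment.driver
        return conf.resource.driver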

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1393365

Title:
  cross-manager use of config values for backward compatibility should
  have deprecation warnings

Status in OpenStack Identity (Keystone):
  New

Bug description:
  There are a few cases where, for backward compatibility, we honor
  older config values to ensure that installations don't break on
  upgrade between releases.  A good example of this is the 'driver'
  config setting from when we split up the original identity
  manager/backend - as well as the config values around the new split of
  assignment.

  We should issue deprecation warnings when the new config values have
  not been set and the old ones are still set (in which case we use the
  old values).  However, the current versionutils.deprecated class
  doesn't really support the logging of arbitrary objects (it supports
  just classes and functions).  This should be enhanced, and then places
  where we do provide this backward compatibility for config values
  should be so marked (The __init__ method in the manager class for
  resource/core.py and assignment/core.py are good places to start).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1393365/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1206773] Re: XENAPI_PLUGIN_FAILURE', 'download_vhd', 'KeyError', 'args'

2014-11-17 Thread Louis Taylor
This looks like a problem with xenserver, not glance.

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1206773

Title:
  XENAPI_PLUGIN_FAILURE', 'download_vhd', 'KeyError', 'args'

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  I hit the blocking issue below when starting a VM with XenServer 6.2 and
  the OpenStack E (Essex) version.
  Could you please figure out the root cause of this issue? Your support is
  much appreciated.


  My test steps are as follows:
  1. Install one openstack node (all in one with Ubuntu 12.04) on a PV
  instance of XenServer 6.2
  2. Deploy the xenapi plugin and make other settings on Dom0
  3. Install python-xenapi on DomU
  4. Integrate the compute service of openstack with xenserver (see the
  attached nova.conf file)
  5. Convert cirros-0.3.0-x86_64-disk.img to VHD format and upload it to
  glance like this:
      glance add name=cirros-0.3.0-x86_64 is_public=true container_format=ovf \
        disk_format=vhd < tarred.tgz
  6. Boot a VM with the above image; it then fails. The error log is below.

  
  In nova-compute.log of domU:

  9ab972f729f1e1fc] Making asynchronous call on network ... from (pid=1246) multicall /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:326
  2013-07-31 03:19:04 DEBUG nova.rpc.amqp [req-2639c13b-1cc7-4cc4-b35d-a7190d1fd814 802fe742f0cc45a592ace109696ce597 65da626d19a842ea9ab972f729f1e1fc] MSG_ID is 83c62dadf8814d13bdfac73149dbd3a4 from (pid=1246) multicall /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:329
  2013-07-31 03:19:05 ERROR nova.rpc.amqp [req-2639c13b-1cc7-4cc4-b35d-a7190d1fd814 802fe742f0cc45a592ace109696ce597 65da626d19a842ea9ab972f729f1e1fc] Exception during message handling
  2013-07-31 03:19:05 TRACE nova.rpc.amqp Traceback (most recent call last):
  2013-07-31 03:19:05 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 253, in _process_data
  2013-07-31 03:19:05 TRACE nova.rpc.amqp     rval = node_func(context=ctxt, **node_args)
  2013-07-31 03:19:05 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
  2013-07-31 03:19:05 TRACE nova.rpc.amqp     return f(*args, **kw)
  2013-07-31 03:19:05 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 183, in decorated_function
  2013-07-31 03:19:05 TRACE nova.rpc.amqp     sys.exc_info())
  2013-07-31 03:19:05 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2013-07-31 03:19:05 TRACE nova.rpc.amqp     self.gen.next()
  2013-07-31 03:19:05 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 177, in decorated_function
  2013-07-31 03:19:05 TRACE nova.rpc.amqp     return function(self, context, instance_uuid, *args, **kwargs)
  2013-07-31 03:19:05 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 676, in run_instance
  2013-07-31 03:19:05 TRACE nova.rpc.amqp     do_run_instance()
  2013-07-31 03:19:05 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 990, in inner
  2013-07-31 03:19:05 TRACE nova.rpc.amqp     retval = f(*args, **kwargs)
  2013-07-31 03:19:05 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 675, in do_run_instance
  2013-07-31 03:19:05 TRACE nova.rpc.amqp     self._run_instance(context, instance_uuid, **kwargs)
  2013-07-31 03:19:05 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 476, in _run_instance
  2013-07-31 03:19:05 TRACE nova.rpc.amqp     self._set_instance_error_state(context, instance_uuid)
  2013-07-31 03:19:05 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2013-07-31 03:19:05 TRACE nova.rpc.amqp     self.gen.next()
  2013-07-31 03:19:05 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 457, in _run_instance
  2013-07-31 03:19:05 TRACE nova.rpc.amqp     self._deallocate_network(context, instance)
  2013-07-31 03:19:05 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2013-07-31 03:19:05 TRACE nova.rpc.amqp     self.gen.next()
  2013-07-31 03:19:05 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 454, in _run_instance
  2013-07-31 03:19:05 TRACE nova.rpc.amqp     injected_files, admin_password)
  2013-07-31 03:19:05 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 617, in _spawn
  2013-07-31 03:19:05 TRACE nova.rpc.amqp     self._legacy_nw_info(network_info), block_device_info)
  2013-07-31 03:19:05 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi_conn.py", line 184, in
[Yahoo-eng-team] [Bug 1393408] [NEW] hardcoded block device path for NBD server

2014-11-17 Thread Denis M.
Public bug reported:

Nova uses a hardcoded block device path for the NBD server.
See:
https://github.com/openstack/nova/blob/master/nova/virt/disk/mount/nbd.py#L63

For example, as a deployer I want to pick my own path to the NBD storage,
but it's hardcoded. It is also not clear (there is no documentation) where
the NBD storage should be.

The proper solution is to propose a new configuration option:
Name: nbd_storage_path
Type: String
Default: /sys/block/nbd1
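
A minimal sketch of such an option with oslo.config (registration details
are an assumption, not the actual nova patch):

    from oslo.config import cfg   # oslo_config in later releases

    nbd_opts = [
        cfg.StrOpt('nbd_storage_path',
                   default='/sys/block/nbd1',
                   help='Path used to locate NBD block devices.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(nbd_opts)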

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1393408

Title:
  hardcoded block device path for NBD server

Status in OpenStack Compute (Nova):
  New

Bug description:
  Nova uses a hardcoded block device path for the NBD server.
  See:
  https://github.com/openstack/nova/blob/master/nova/virt/disk/mount/nbd.py#L63

  For example, as a deployer I want to pick my own path to the NBD storage,
  but it's hardcoded. It is also not clear (there is no documentation) where
  the NBD storage should be.

  The proper solution is to propose a new configuration option:
  Name: nbd_storage_path
  Type: String
  Default: /sys/block/nbd1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1393408/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393409] [NEW] In class CiscoCsrIPsecVpnAgentApi used unknown method from L3RouterPlugin

2014-11-17 Thread aalekov
Public bug reported:

While testing the VPNaaS feature I found a problem with incorrect calls
from one class to another.  The class CiscoCsrIPsecVpnAgentApi tries to
call the unknown method get_host_for_router on the class L3RouterPlugin.

Tempest test is
tempest.api.network.test_vpnaas_extensions.VPNaaSTestJSON.test_create_update_delete_vpn_service[gate,smoke].

 Tempest log:
2014-11-16 20:03:45,861 7747 DEBUG [tempest.common.rest_client] Request (VPNaaSTestJSON:test_create_update_delete_vpn_service): 201 POST http://192.168.0.84:9696/v2.0/vpn/vpnservices 0.066s
    Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
        Body: {"vpnservice": {"subnet_id": "20da093a-3c97-4757-859c-537a4206d967", "router_id": "0e383b58-33c3-4d55-b72a-ae19e76b2f6f", "name": "vpn-service-1842998599", "admin_state_up": true}}
    Response - Headers: {'status': '201', 'content-length': '322', 'connection': 'close', 'date': 'Sun, 16 Nov 2014 20:03:45 GMT', 'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-a747adae-5586-4f52-b876-3d072ec28ce9'}
        Body: {"vpnservice": {"router_id": "0e383b58-33c3-4d55-b72a-ae19e76b2f6f", "status": "PENDING_CREATE", "name": "vpn-service-1842998599", "admin_state_up": true, "subnet_id": "20da093a-3c97-4757-859c-537a4206d967", "tenant_id": "d6e97fa90ddc4ebab91ab57019826728", "id": "a521e5b9-3edf-4236-a0b0-3df3f9dce602", "description": ""}}
2014-11-16 20:03:45,873 7747 DEBUG [tempest.common.rest_client] Request (VPNaaSTestJSON:test_create_update_delete_vpn_service): 200 GET http://192.168.0.84:9696/v2.0/vpn/vpnservices 0.011s
    Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
        Body: None
    Response - Headers: {'status': '200', 'content-length': '632', 'content-location': 'http://192.168.0.84:9696/v2.0/vpn/vpnservices', 'connection': 'close', 'date': 'Sun, 16 Nov 2014 20:03:45 GMT', 'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-a1fb0f92-3ab0-4c19-9de1-c17b603afd41'}
        Body: {"vpnservices": [{"router_id": "0e383b58-33c3-4d55-b72a-ae19e76b2f6f", "status": "PENDING_CREATE", "name": "vpnservice--691534310", "admin_state_up": true, "subnet_id": "20da093a-3c97-4757-859c-537a4206d967", "tenant_id": "d6e97fa90ddc4ebab91ab57019826728", "id": "5a1e5e70-069b-462a-bc13-6c4b0945228f", "description": ""}, {"router_id": "0e383b58-33c3-4d55-b72a-ae19e76b2f6f", "status": "PENDING_CREATE", "name": "vpn-service-1842998599", "admin_state_up": true, "subnet_id": "20da093a-3c97-4757-859c-537a4206d967", "tenant_id": "d6e97fa90ddc4ebab91ab57019826728", "id": "a521e5b9-3edf-4236-a0b0-3df3f9dce602", "description": ""}]}
2014-11-16 20:03:45,916 7747 DEBUG [tempest.common.rest_client] Request (VPNaaSTestJSON:_run_cleanups): 500 DELETE http://192.168.0.84:9696/v2.0/vpn/vpnservices/a521e5b9-3edf-4236-a0b0-3df3f9dce602 0.038s
    Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
        Body: None
    Response - Headers: {'status': '500', 'content-length': '150', 'connection': 'close', 'date': 'Sun, 16 Nov 2014 20:03:45 GMT', 'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-1ab34864-f9bf-45ec-a870-439f88b4b974'}
        Body: {"NeutronError": {"message": "Request Failed: internal server error while processing your request.", "type": "HTTPInternalServerError", "detail": ""}}
}}}

Traceback (most recent call last):
  File "tempest/api/network/test_vpnaas_extensions.py", line 86, in _delete_vpn_service
    self.client.delete_vpnservice(vpn_service_id)
  File "tempest/services/network/network_client_base.py", line 124, in _delete
    resp, body = self.delete(uri)
  File "tempest/services/network/network_client_base.py", line 83, in delete
    return self.rest_client.delete(uri, headers)
  File "tempest/common/rest_client.py", line 240, in delete
    return self.request('DELETE', url, extra_headers, headers, body)
  File "tempest/common/rest_client.py", line 454, in request
    resp, resp_body)
  File "tempest/common/rest_client.py", line 550, in _error_checker
    raise exceptions.ServerFault(message)
ServerFault: Got server fault
Details: Request Failed: internal server error while processing your request.
Traceback (most recent call last):
_StringException: Empty attachments:
  stderr
  stdout


Neutron log:
2014-11-16 20:03:46.133 DEBUG neutron.policy [req-7e22a57a-977a-4a8d-8369-d4df3ecc3fcd VPNaaSTestJSON-584604556 d6e97fa90ddc4ebab91ab57019826728] Enforcing rules: ['delete_vpnservice'] from (pid=4825) _build_match_rule /opt/stack/neutron/neutron/policy.py:232
2014-11-16 20:03:46.165 ERROR neutron.api.v2.resource [req-7e22a57a-977a-4a8d-8369-d4df3ecc3fcd VPNaaSTestJSON-584604556 d6e97fa90ddc4ebab91ab57019826728] delete failed
2014-11-16 20:03:46.165 TRACE neutron.api.v2.resource Traceback

[Yahoo-eng-team] [Bug 1261631] Re: Reconnect on failure for multiple servers always connects to first server

2014-11-17 Thread Ian Cordasco
Glance has been using oslo.messaging 1.3.0 or greater since Icehouse.

** No longer affects: glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1261631

Title:
  Reconnect on failure for multiple servers always connects to first
  server

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Ceilometer havana series:
  Fix Released
Status in Cinder:
  In Progress
Status in Cinder havana series:
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in heat havana series:
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  Invalid
Status in Keystone havana series:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released
Status in The Oslo library incubator:
  Fix Released
Status in oslo-incubator havana series:
  Fix Committed
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  In attempting to reconnect to an AMQP server when a communication
  failure occurs, both the qpid and rabbit drivers target the configured
  servers in the order in which they were provided.  If a connection to
  the first server had failed, the subsequent reconnection attempt would
  be made to that same server instead of trying one that had not yet
  failed.  This could increase the time to failover to a working server.

  A plausible workaround for qpid would be to decrease the value for
  qpid_timeout, but since the problem only occurs if the failed server
  is the first configured, the results of the workaround would depend on
  the order that the failed server appears in the configuration.
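
A minimal sketch of the intended behaviour (illustrative only, not the
oslo.messaging implementation):

    import itertools

    def broker_iterator(servers):
        # Round-robin over the configured brokers so a reconnect attempt
        # does not always start from the first (possibly failed) server.
        return itertools.cycle(servers)

    # usage: brokers = broker_iterator(['amqp1:5672', 'amqp2:5672'])
    # on every (re)connect attempt: host = next(brokers)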

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1261631/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393435] [NEW] Subnet delete for IPv6 SLAAC should not require prior port disassoc

2014-11-17 Thread Dane LeBlanc
Public bug reported:

With the current Neutron implementation, a subnet cannot be deleted
until all associated IP addresses have been removed from ports (via
port update) or the associated ports/VMs have been deleted. 

In the case of SLAAC-enabled subnets, however, it's not feasible to
require removal of SLAAC-generated addresses individually from each
associated port before deleting a subnet because of the multicast
nature of RA messages. For SLAAC-enabled subnets, the processing of
subnet delete requests needs to be changed so that such a subnet can be
deleted even when ports still exist on it, with every port automatically
disassociated from its corresponding SLAAC IP address.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393435

Title:
  Subnet delete for IPv6 SLAAC should not require prior port disassoc

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  With the current Neutron implementation, a subnet cannot be deleted
  until all associated IP addresses have been removed from ports (via
  port update) or the associated ports/VMs have been deleted.   

  In the case of SLAAC-enabled subnets, however, it's not feasible to
  require removal of SLAAC-generated addresses individually from each
  associated port before deleting a subnet because of the multicast
  nature of RA messages. For SLAAC-enabled subnets, the processing of
  subnet delete requests needs to be changed so that such a subnet can be
  deleted even when ports still exist on it, with every port automatically
  disassociated from its corresponding SLAAC IP address.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1393435/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393455] [NEW] Lack of documentation to develop v3 API

2014-11-17 Thread Pasquale Porreca
Public bug reported:

There is a need for documentation/guidelines on the required steps to
develop a new v3 API.

The report of this bug was suggested in this thread:
http://lists.openstack.org/pipermail/openstack-dev/2014-November/050711.html

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: api compute documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1393455

Title:
  Lack of documentation to develop v3 API

Status in OpenStack Compute (Nova):
  New

Bug description:
  There is a need for documentation/guidelines on the required steps to
  develop a new v3 API.

  The report of this bug was suggested in this thread:
  http://lists.openstack.org/pipermail/openstack-dev/2014-November/050711.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1393455/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304208] Re: key_name and key_save fields in Stack Launch don't have tooltips

2014-11-17 Thread Nikunj Aggarwal
** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1304208

Title:
  key_name and key_save fields in Stack Launch don't have tooltips

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The last 2 fields in the Stack Launch screen (key_name and key_save) are
  missing the tooltip that the top 3 fields have.
  Placing the mouse inside these 2 fields should show the black tooltip.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1304208/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284882] Re: ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=35413): Max retries exceeded with url: /v2/images (Caused by <class 'socket.error'>: [Errno 111] ECONNREFU

2014-11-17 Thread Ian Cordasco
** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1284882

Title:
  ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=35413): Max
  retries exceeded with url: /v2/images (Caused by <class
  'socket.error'>: [Errno 111] ECONNREFUSED)

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  http://logs.openstack.org/86/74886/7/check/gate-glance-python26/5d3577f/

  2014-02-25 08:53:24.477 | running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  2014-02-25 08:53:24.477 | OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  2014-02-25 08:53:24.477 | OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
  2014-02-25 08:53:24.477 | ${PYTHON:-python} -m subunit.run discover -t ./ ./glance/tests
  2014-02-25 08:53:24.477 | ======================================================================
  2014-02-25 08:53:24.477 | FAIL: glance.tests.functional.v2.test_images.TestImages.test_image_visibility_to_different_users
  2014-02-25 08:53:24.477 | tags: worker-0
  2014-02-25 08:53:24.477 | ----------------------------------------------------------------------
  2014-02-25 08:53:24.477 | Traceback (most recent call last):
  2014-02-25 08:53:24.478 |   File "glance/tests/functional/v2/test_images.py", line 1673, in test_image_visibility_to_different_users
  2014-02-25 08:53:24.478 |     response = requests.post(path, headers=headers, data=data)
  2014-02-25 08:53:24.478 |   File "/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/requests/api.py", line 88, in post
  2014-02-25 08:53:24.478 |     return request('post', url, data=data, **kwargs)
  2014-02-25 08:53:24.478 |   File "/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/requests/api.py", line 44, in request
  2014-02-25 08:53:24.478 |     return session.request(method=method, url=url, **kwargs)
  2014-02-25 08:53:24.478 |   File "/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/requests/sessions.py", line 383, in request
  2014-02-25 08:53:24.478 |     resp = self.send(prep, **send_kwargs)
  2014-02-25 08:53:24.478 |   File "/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/requests/sessions.py", line 486, in send
  2014-02-25 08:53:24.478 |     r = adapter.send(request, **kwargs)
  2014-02-25 08:53:24.478 |   File "/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/requests/adapters.py", line 378, in send
  2014-02-25 08:53:24.478 |     raise ConnectionError(e)
  2014-02-25 08:53:24.479 | ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=35413): Max retries exceeded with url: /v2/images (Caused by <class 'socket.error'>: [Errno 111] ECONNREFUSED)
  2014-02-25 08:53:24.479 | ======================================================================
  2014-02-25 08:53:24.479 | FAIL: process-returncode
  2014-02-25 08:53:24.479 | tags: worker-0
  2014-02-25 08:53:24.479 | ----------------------------------------------------------------------
  2014-02-25 08:53:24.479 | Binary content:
  2014-02-25 08:53:24.479 |   traceback (test/plain; charset=utf8)
  2014-02-25 08:53:24.479 | Ran 2194 tests in 779.097s
  2014-02-25 08:53:24.479 | FAILED (id=0, failures=2, skips=33)
  2014-02-25 08:53:24.516 | error: testr failed (1)
  2014-02-25 08:53:24.538 | ERROR: InvocationError: '/home/jenkins/workspace/gate-glance-python26/.tox/py26/bin/python -m glance.openstack.common.lockutils python setup.py test --slowest --testr-args=--concurrency 1 '
  2014-02-25 08:53:24.538 | _______________________________ summary ________________________________
  2014-02-25 08:53:24.538 | ERROR:   py26: commands failed

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1284882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387506] Re: lb-healthmonitor-show command error

2014-11-17 Thread Eugene Nikanorov
** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
   Importance: Undecided => Low

** Changed in: python-neutronclient
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1387506

Title:
  lb-healthmonitor-show command error

Status in Python client library for Neutron:
  Confirmed

Bug description:
  Usually the command 'neutron lb-healthmonitor-show <healthmonitor-id>' is
  used to get the detailed params of a monitor.
  I checked the detail of a health monitor when I had only one
  (7cd028b1-d2b1-4347-a470-fd0f7296c30d), and it worked correctly.
  But, to my surprise, it returns the details of that default
  '7cd028b1-d2b1-4347-a470-fd0f7296c30d' health monitor when an incorrect
  id or any other string is given.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1387506/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1286550] Re: swiftclient: HTTPConnectionPool(host='127.0.0.1', port=8080): Max retries exceeded with url: /v1/AUTH_e7a5e2d8518a466b9300e56870e520ce/glance (Caused by <class 'sock

2014-11-17 Thread Ian Cordasco
*** This bug is a duplicate of bug 1284882 ***
https://bugs.launchpad.net/bugs/1284882

** This bug has been marked a duplicate of bug 1284882
   ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=35413): Max
retries exceeded with url: /v2/images (Caused by <class 'socket.error'>: [Errno
111] ECONNREFUSED)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1286550

Title:
  swiftclient:  HTTPConnectionPool(host='127.0.0.1', port=8080): Max
  retries exceeded with url:
  /v1/AUTH_e7a5e2d8518a466b9300e56870e520ce/glance (Caused by <class
  'socket.error'>: [Errno 111] ECONNREFUSED)

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  It sometimes occurs.

  http://logs.openstack.org/72/67272/22/gate/gate-tempest-dsvm-neutron-
  pg-isolated/dc713d0/logs/screen-g-api.txt.gz?level=INFO

  2014-03-01 12:09:18.956 2956 ERROR swiftclient [4d381517-593f-44ad-89d3-79417f1241f1 2839da5acf8a4af7baadad57e94ec8cb 0b1be67b35ac46b893eabc44aab06637 - - -] HTTPConnectionPool(host='127.0.0.1', port=8080): Max retries exceeded with url: /v1/AUTH_e7a5e2d8518a466b9300e56870e520ce/glance (Caused by <class 'socket.error'>: [Errno 111] ECONNREFUSED)
  2014-03-01 12:09:18.956 2956 TRACE swiftclient Traceback (most recent call last):
  2014-03-01 12:09:18.956 2956 TRACE swiftclient   File "/opt/stack/new/python-swiftclient/swiftclient/client.py", line 1189, in _retry
  2014-03-01 12:09:18.956 2956 TRACE swiftclient     rv = func(self.url, self.token, *args, **kwargs)
  2014-03-01 12:09:18.956 2956 TRACE swiftclient   File "/opt/stack/new/python-swiftclient/swiftclient/client.py", line 617, in head_container
  2014-03-01 12:09:18.956 2956 TRACE swiftclient     conn.request(method, path, '', req_headers)
  2014-03-01 12:09:18.956 2956 TRACE swiftclient   File "/opt/stack/new/python-swiftclient/swiftclient/client.py", line 188, in request
  2014-03-01 12:09:18.956 2956 TRACE swiftclient     files=files, **self.requests_args)
  2014-03-01 12:09:18.956 2956 TRACE swiftclient   File "/opt/stack/new/python-swiftclient/swiftclient/client.py", line 177, in _request
  2014-03-01 12:09:18.956 2956 TRACE swiftclient     return requests.request(*arg, **kwarg)
  2014-03-01 12:09:18.956 2956 TRACE swiftclient   File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 44, in request
  2014-03-01 12:09:18.956 2956 TRACE swiftclient     return session.request(method=method, url=url, **kwargs)
  2014-03-01 12:09:18.956 2956 TRACE swiftclient   File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 383, in request
  2014-03-01 12:09:18.956 2956 TRACE swiftclient     resp = self.send(prep, **send_kwargs)
  2014-03-01 12:09:18.956 2956 TRACE swiftclient   File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 486, in send
  2014-03-01 12:09:18.956 2956 TRACE swiftclient     r = adapter.send(request, **kwargs)
  2014-03-01 12:09:18.956 2956 TRACE swiftclient   File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 378, in send
  2014-03-01 12:09:18.956 2956 TRACE swiftclient     raise ConnectionError(e)
  2014-03-01 12:09:18.956 2956 TRACE swiftclient ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=8080): Max retries exceeded with url: /v1/AUTH_e7a5e2d8518a466b9300e56870e520ce/glance (Caused by <class 'socket.error'>: [Errno 111] ECONNREFUSED)
  2014-03-01 12:09:18.956 2956 TRACE swiftclient

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiSFRUUENvbm5lY3Rpb25Qb29sKGhvc3Q9JzEyNy4wLjAuMScsIHBvcnQ9ODA4MCk6IE1heCByZXRyaWVzIGV4Y2VlZGVkIHdpdGggdXJsXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTM2Nzg5OTg4MjB9

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1286550/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393491] [NEW] _manual_join_columns should copy columns_to_join rather than mutate it

2014-11-17 Thread Matt Riedemann
Public bug reported:

I noticed this while working on this change:

https://review.openstack.org/#/c/131490/3/

Tempest was failing on the simple tenant usage test because it couldn't
lazy-load system_metadata from the instance object to get the flavor in
the API.

This was because the DB API call was hitting this method:

http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/api.py?id=2014.2#n1808

That modifies the columns_to_join list passed in and removes the
system_metadata entry, so later when we construct the instance object
from the DB record we didn't have 'system_metadata' in expected_attrs,
so we couldn't lazy-load the system_metadata from the instance object
and hit an error.

The _manual_join_columns method should be making a copy of
columns_to_join and modifying that before returning it, it shouldn't
actually modify the columns_to_join list passed in.
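
A minimal sketch of the copy-before-mutate approach (not necessarily the
exact nova patch):

    def _manual_join_columns(columns_to_join):
        # Work on a copy so the caller's list is left untouched.
        manual_joins = []
        columns_to_join_new = list(columns_to_join)
        for column in ('metadata', 'system_metadata'):
            if column in columns_to_join_new:
                columns_to_join_new.remove(column)
                manual_joins.append(column)
        return manual_joins, columns_to_join_new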

** Affects: nova
 Importance: Undecided
 Assignee: Matt Riedemann (mriedem)
 Status: New


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1393491

Title:
  _manual_join_columns should copy columns_to_join rather than mutate it

Status in OpenStack Compute (Nova):
  New

Bug description:
  I noticed this while working on this change:

  https://review.openstack.org/#/c/131490/3/

  Tempest was failing on the simple tenant usage test because it
  couldn't lazy-load system_metadata from the instance object to get the
  flavor in the API.

  This was because the DB API call was hitting this method:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/api.py?id=2014.2#n1808

  That modifies the columns_to_join list passed in and removes the
  system_metadata entry, so later when we construct the instance object
  from the DB record we didn't have 'system_metadata' in expected_attrs,
  so we couldn't lazy-load the system_metadata from the instance object
  and hit an error.

  The _manual_join_columns method should be making a copy of
  columns_to_join and modifying that before returning it, it shouldn't
  actually modify the columns_to_join list passed in.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1393491/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1101287] Re: Keystone LDAP does not support v3 Role Grants

2014-11-17 Thread Adam Young
** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1101287

Title:
  Keystone LDAP does not support v3 Role Grants

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  Although the current keystone backend does support role grants to user
  and tenants, it is not hooked up to the new v3 role grant api.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1101287/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393518] [NEW] v3 service catalog returns services without names, but v2.0 api does not

2014-11-17 Thread David J Hu
Public bug reported:

For services without names, it appears that the v2.0 API filters these
services out of the service catalog. In contrast, the v3 API presents
these nameless services in the service catalog without filtering them
out, which I think is a bug.
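
A minimal sketch of the filtering v3 could apply to match v2.0 (assuming
the catalog is a list of service dicts as dumped below; this is not
keystone's actual formatting code):

    def filter_nameless_services(catalog):
        # Drop entries without a 'name', mirroring the v2.0 behaviour.
        return [service for service in catalog if service.get('name')]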

Here is the dump...

v3
==

$ curl -s -H "Content-Type: application/json" -d '{ "auth": {"identity": {"methods": ["password"],"password": {"user": {"name": "admin","domain": { "id": "default" },"password": "changemenow"}}}}}' http://192.0.2.27:5000/v3/auth/tokens | python -mjson.tool
{
    "token": {
        "audit_ids": [
            "ZVXOhZIzQ_2TRl3FHXs7uQ"
        ],
        "catalog": [
            {   <==  Service without a name
                "endpoints": [],
                "id": "03759dde22f0498f91ab92fe63da7e37",
                "type": "service-test"
            },
            {
                "endpoints": [
                    {
                        "id": "89689c655c194cd1bf2894b798d4fe60",
                        "interface": "public",
                        "region": "regionOne",
                        "region_id": "regionOne",
                        "url": "http://192.0.2.27:5000/v2.0"
                    },
                    {
                        "id": "927e31c648c24e75b5c28e86da249c37",
                        "interface": "internal",
                        "region": "regionOne",
                        "region_id": "regionOne",
                        "url": "http://192.0.2.27:5000/v2.0"
                    },
                    {
                        "id": "d553b1bebbf84efc96507a5fc353df36",
                        "interface": "admin",
                        "region": "regionOne",
                        "region_id": "regionOne",
                        "url": "http://192.0.2.27:35357/v2.0"
                    }
                ],
                "id": "1af4a779d5c6444d885521e1697b3cde",
                "name": "keystone",
                "type": "identity"
            },
            {
                "endpoints": [
                    {
                        "id": "75db190e2e3f486c86a948947f1256a9",
                        "interface": "admin",
                        "region": "regionOne",
                        "region_id": "regionOne",
                        "url": "http://192.0.2.27:8080/v1"
                    },
                    {
                        "id": "826f8098256d46e6b89075b63ccb9a3b",
                        "interface": "internal",
                        "region": "regionOne",
                        "region_id": "regionOne",
                        "url": "http://192.0.2.27:8080/v1/AUTH_edf891223bac4b6ea2a85a15e6ce9cd3"
                    },
                    {
                        "id": "a9bc3e3ac7c44882aa1f02b0c2940fc8",
                        "interface": "public",
                        "region": "regionOne",
                        "region_id": "regionOne",
                        "url": "http://192.0.2.27:8080/v1/AUTH_edf891223bac4b6ea2a85a15e6ce9cd3"
                    }
                ],
                "id": "301ec05f525d40aaa41be512f820d19a",
                "name": "swift",
                "type": "object-store"
            },
            {   <== Another service without a name
                "endpoints": [],
                "id": "5e11382bdd854179b5c3c19f848bf64f",
                "type": "service-test"
            },

* omitted *

v2.0
====

$ curl -s -H "Content-Type: application/json" -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "changeyourpasswordnow"}}}' http://192.0.2.27:5000/v2.0/tokens | python -mjson.tool
{
    "access": {
        "metadata": {
            "is_admin": 0,
            "roles": [
                "9fe2ff9ee4384b1894a90878d3e92bab",
                "ab13600a7b1f4beab319d0b6860db24b"
            ]
        },
        "serviceCatalog": [
            {
                "endpoints": [
                    {
                        "adminURL": "http://192.0.2.27:8774/v2/edf891223bac4b6ea2a85a15e6ce9cd3",
                        "id": "1f999cd0633d409cb477c9343cf747e6",
                        "internalURL": "http://192.0.2.27:8774/v2/edf891223bac4b6ea2a85a15e6ce9cd3",
                        "publicURL": "http://192.0.2.27:8774/v2/edf891223bac4b6ea2a85a15e6ce9cd3",
                        "region": "regionOne"
                    }
                ],
                "endpoints_links": [],
                "name": "nova",
                "type": "compute"
            },
            {
                "endpoints": [
                    {
                        "adminURL": "http://192.0.2.27:9696/",
                        "id": "9a8c06f2ffa64e49b0ee078d4cf9a1e8",
                        "internalURL": "http://192.0.2.27:9696/",
                        "publicURL": "http://192.0.2.27:9696/",
                        "region": "regionOne"
                    }
                ],
                "endpoints_links": [],
                "name": "neutron",
                "type": "network"
            },
            {
                "endpoints": [
                    {
                        "adminURL":

[Yahoo-eng-team] [Bug 1393527] [NEW] IPv6 Subnets configured to use external router should not be allowed to associate to Neutron Router.

2014-11-17 Thread Sridhar Gaddam
Public bug reported:

IPv6 subnet attributes have various possibilities as described in the following 
BP.
http://specs.openstack.org/openstack/neutron-specs/specs/juno/ipv6-radvd-ra.html#rest-api-impact

Currently Neutron allows attaching a subnet (configured to use an external
router) to a Neutron router.
Ideally Neutron should not allow this operation and should return an
appropriate error message to the user.

Please refer to the following thread for more details:
https://review.openstack.org/#/c/134530/2/neutron/db/securitygroups_rpc_base.py
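
A minimal sketch of the kind of guard meant here (attribute names follow
the referenced BP, but this is not actual Neutron code):

    def check_subnet_attachable(subnet):
        # Per the BP, ipv6_ra_mode unset with ipv6_address_mode set means
        # RAs come from an external (non-Neutron) router.
        if (subnet['ip_version'] == 6
                and subnet.get('ipv6_ra_mode') is None
                and subnet.get('ipv6_address_mode') is not None):
            raise ValueError("IPv6 subnet %s uses an external router for "
                             "RAs and should not be attached to a Neutron "
                             "router." % subnet['id'])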

** Affects: neutron
 Importance: Undecided
 Assignee: Sridhar Gaddam (sridhargaddam)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Sridhar Gaddam (sridhargaddam)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393527

Title:
  IPv6 Subnets configured to use external router should not be allowed
  to associate to Neutron Router.

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  IPv6 subnet attributes have various possibilities as described in the 
following BP.
  
http://specs.openstack.org/openstack/neutron-specs/specs/juno/ipv6-radvd-ra.html#rest-api-impact

  Currently Neutron allows attaching a subnet (configured to use an external
  router) to a Neutron router.
  Ideally Neutron should not allow this operation and should return an
  appropriate error message to the user.

  Please refer to the following thread for more details:
  
https://review.openstack.org/#/c/134530/2/neutron/db/securitygroups_rpc_base.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1393527/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1389985] Re: CLI will fail one time after restarting DB

2014-11-17 Thread Steve Baker
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1389985

Title:
  CLI will fail one time after restarting DB

Status in OpenStack Telemetry (Ceilometer):
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in OpenStack Identity (Keystone):
  Incomplete
Status in OpenStack Compute (Nova):
  Incomplete
Status in Oslo Database library:
  New

Bug description:
After restarting the database, the first command will fail. For example:
after restarting the database, wait for a few minutes.
Then run heat stack-list; the result will be like below:

  ERROR: Remote error: DBConnectionError (OperationalError) ibm_db_dbi::OperationalError: SQLNumResultCols failed: [IBM][CLI Driver] SQL30081N  A communication error has been detected. Communication protocol being used: "TCP/IP".  Communication API being used: "SOCKETS".  Location where the error was detected: "10.11.1.14".  Communication function detecting the error: "send".  Protocol specific error code(s): "2", "*", "*".  SQLSTATE=08001 SQLCODE=-30081 'SELECT stack.status_reason AS stack_status_reason, stack.created_at AS stack_created_at, stack.deleted_at AS stack_deleted_at, stack.action AS stack_action, stack.status AS stack_status, stack.id AS stack_id, stack.name AS stack_name, stack.raw_template_id AS stack_raw_template_id, stack.username AS stack_username, stack.tenant AS stack_tenant, stack.parameters AS stack_parameters, stack.user_creds_id AS stack_user_creds_id, stack.owner_id AS stack_owner_id, stack.timeout AS stack_timeout, stack.disable_rollback AS stack_disable_rollback, stack.stack_user_project_id AS stack_stack_user_project_id, stack.backup AS stack_backup, stack.updated_at AS stack_updated_at \nFROM stack \nWHERE stack.deleted_at IS NULL AND stack.owner_id IS NULL AND stack.tenant = ? ORDER BY stack.created_at DESC, stack.id DESC' ('a3a14c6f82bd4ce88273822407a0829b',)
  [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply\n    incoming.message))\n', u'  File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n', u'  File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch\n    result = getattr(endpoint, method)(ctxt, **new_args)\n', u'  File "/usr/lib/python2.6/site-packages/heat/engine/service.py", line 69, in wrapped\n    return func(self, ctx, *args, **kwargs)\n', u'  File "/usr/lib/python2.6/site-packages/heat/engine/service.py", line 490, in list_stacks\n    return [api.format_stack(stack) for stack in stacks]\n', u'  File "/usr/lib/python2.6/site-packages/heat/engine/stack.py", line 264, in load_all\n    show_deleted, show_nested) or []\n', u'  File "/usr/lib/python2.6/site-packages/heat/db/api.py", line 130, in stack_get_all\n    show_deleted, show_nested)\n', u'  File "/usr/lib/python2.6/site-packages/heat/db/sqlalchemy/api.py", line 368, in stack_get_all\n    marker, sort_dir, filters).all()\n', u'  File "/usr/lib64/python2.6/site-packages/sqlalchemy/orm/query.py", line 2241, in all\n    return list(self)\n', u'  File "/usr/lib64/python2.6/site-packages/sqlalchemy/orm/query.py", line 2353, in __iter__\n    return self._execute_and_instances(context)\n', u'  File "/usr/lib64/python2.6/site-packages/sqlalchemy/orm/query.py", line 2368, in _execute_and_instances\n    result = conn.execute(querycontext.statement, self._params)\n', u'  File "/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 662, in execute\n    params)\n', u'  File "/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 761, in _execute_clauseelement\n    compiled_sql, distilled_params\n', u'  File "/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 874, in _execute_context\n    context)\n', u'  File "/usr/lib/python2.6/site-packages/oslo/db/sqlalchemy/compat/handle_error.py", line 125, in _handle_dbapi_exception\n    six.reraise(type(newraise), newraise, sys.exc_info()[2])\n', u'  File "/usr/lib/python2.6/site-packages/oslo/db/sqlalchemy/compat/handle_error.py", line 102, in _handle_dbapi_exception\n    per_fn = fn(ctx)\n', u'  File "/usr/lib/python2.6/site-packages/oslo/db/sqlalchemy/exc_filters.py", line 323, in handler\n    context.is_disconnect)\n', u'  File "/usr/lib/python2.6/site-packages/oslo/db/sqlalchemy/exc_filters.py", line 263, in _is_db_connection_error\n    raise exception.DBConnectionError(operational_error)\n', u'DBConnectionError: (OperationalError) ibm_db_dbi::OperationalError: SQLNumResultCols failed: [IBM][CLI Driver] SQL30081N  A communication error has been detected. Communication protocol being used: "TCP/IP".  Communication API being used: "SOCKETS".  Location where

[Yahoo-eng-team] [Bug 1391691] Re: Add Metadef Tag Support

2014-11-17 Thread Ian Cordasco
Wayne, the appropriate place for blueprints is in the blueprints section
[1]. As it is, I don't see one registered for this topic.

[1]: https://blueprints.launchpad.net/glance

** Changed in: glance
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1391691

Title:
  Add Metadef Tag Support

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  This defect will introduce the first round of support for Metadef
  Tags. Both DB and API CRUD operations for the metadef_tags table will
  be supported.

  The metadef_tags table is:
  CREATE TABLE `metadef_tags` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `namespace_id` int(11) NOT NULL,
  `name` varchar(80) NOT NULL,
  `created_at` timestamp NOT NULL,
  `updated_at` timestamp
  )

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1391691/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393248] Re: Can not migrate because of strict hosts check

2014-11-17 Thread Sean Dague
I actually don't think this is a valid bug. We make some assumptions
that the nova environment is setup correctly. Ignoring the security
checks in code is a very bad option.

** Changed in: nova
   Status: In Progress => Won't Fix

** Changed in: nova
   Importance: Low => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1393248

Title:
  Can not migrate because of strict hosts check

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  When nova migrates VMs, it fails to connect to the other machine when
  prompted to accept the host's key.
  Nova should run SSH so that it ignores this prompt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1393248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393579] [NEW] subnets table failure_url doesn't exist

2014-11-17 Thread David Lyle
Public bug reported:

On the network detail page, the subnets table attempts to redirect to
self.failure_url if the neutron API call fails. The redirect doesn't make
sense, and failure_url is not defined on the subnets table.

** Affects: horizon
 Importance: Low
 Assignee: David Lyle (david-lyle)
 Status: New


** Tags: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1393579

Title:
  subnets table failure_url doesn't exist

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On the network detail page, the subnets table attempts to redirect to
  self.failure_url if the neutron API call fails. The redirect doesn't make
  sense, and failure_url is not defined on the subnets table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1393579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393581] [NEW] Horizon DDOS Sahara for job execution in TOBEKILLED state

2014-11-17 Thread Andrew Lazarev
Public bug reported:

I had a number of old job executions and decided to delete them all.
After "select all, delete" two of the job executions moved to the
TOBEKILLED state and horizon started to send GET requests about these job
executions in a loop without any delay. The job executions were deleted
after 5 minutes (the graceful cancel timeout). For this period horizon
generated a lot of traffic to Sahara.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: sahara

** Tags added: sahara

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1393581

Title:
  Horizon DDoSes Sahara for job executions in TOBEKILLED state

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I had a number of old job executions and decided to delete them all.
  After a 'select all, delete', two of the job executions moved to the
  TOBEKILLED state and horizon started to send GET requests about these
  job executions in a loop without any delay. The job executions were
  finally deleted after 5 minutes (the graceful cancel timeout). For that
  period horizon generated a lot of traffic to Sahara.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1393581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393590] [NEW] network detail tables missing headers

2014-11-17 Thread David Lyle
Public bug reported:

With the new table header suppression patch, the table headers on the
network details pages are being suppressed. They need to be explicitly
enabled.
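
A sketch of the kind of change needed, assuming the suppression patch is
controlled per-table through the Meta class (the flag name here is an
assumption; check the actual patch):

    from horizon import tables

    class SubnetsTable(tables.DataTable):
        class Meta:
            name = "subnets"
            verbose_name = "Subnets"
            # Explicitly re-enable the header that the new suppression
            # behavior hides by default (attribute name assumed).
            hidden_title = False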

** Affects: horizon
 Importance: Medium
 Assignee: David Lyle (david-lyle)
 Status: New


** Tags: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1393590

Title:
  network detail tables missing headers

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  With the new table header suppression patch, the table headers on the
  network details pages are being suppressed. They need to be explicitly
  enabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1393590/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393589] [NEW] Attaching or detaching an interface to a router causes all VPNaaS daemons to be restarted.

2014-11-17 Thread David Clarke
Public bug reported:

'sync' in services/vpn/device_drivers/ipsec.py is called any time an
interface is attached or detached from a router.  This occurs whether or
not the edited router hosts a VPNaaS instance.

'sync' loops through the results of 'get_vpn_services_on_host' and
stops/starts all IPsec daemons on the network node that hosts the router
being edited, regardless of whether they're on the router being edited,
or even in the same tenant.

An authorized user can trivially loop through the attach/detach API
calls, causing the IPsec daemons for every tenant to be continuously
restarted.
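
A hedged sketch of a narrower sync, restarting only the services that sit
on a router that actually changed (the restart helper is illustrative,
not the merged fix):

    def sync(self, context, routers):
        # Restart only IPsec processes whose vpnservice lives on one of
        # the updated routers; unrelated tenants' daemons are untouched.
        updated_router_ids = set(router['id'] for router in routers)
        vpnservices = self.agent_rpc.get_vpn_services_on_host(
            context, self.host)
        for vpnservice in vpnservices:
            if vpnservice['router_id'] in updated_router_ids:
                self.restart_vpn_service(vpnservice)  # illustrative helper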

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393589

Title:
  Attaching or detaching an interface to a router causes all VPNaaS
  daemons to be restarted.

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  'sync' in services/vpn/device_drivers/ipsec.py is called any time an
  interface is attached or detached from a router.  This occurs whether
  or not the edited router hosts a VPNaaS instance.

  'sync' loops through the results of 'get_vpn_services_on_host' and
  stops/starts all IPsec daemons on the network node that hosts the
  router being edited, regardless of whether they're on the router being
  edited, or even in the same tenant.

  An authorized user can trivially loop through the attach/detach API
  calls, causing the IPsec daemons for every tenant to be continuously
  restarted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1393589/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393345] Re: How to auto-adjust the dimensions of the noVNC window image in horizon

2014-11-17 Thread Gary W. Smith
This appears to be a question on an out-of-support release (at least for
non-security problems), rather than a bug report. I don't believe that
the VNC console will stretch to fit the browser window. Please pose the
question on the relevant IRC channel
(https://wiki.openstack.org/wiki/IRC) to get an answer.

** Changed in: horizon
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1393345

Title:
  How to auto-adjust the dimensions of the noVNC window image in horizon

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The noVNC console of an instance in OpenStack Icehouse gives a
  scrollable web interface. How can it be auto-resized to fit the browser
  window? The browser is Firefox 31.x.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1393345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393165] Re: horizon install apache config debian jessie default site

2014-11-17 Thread Gary W. Smith
This is not an issue with Horizon itself, but in the Debian packaging
and/or installation process. Please work with the Debian team on this.
See https://wiki.debian.org/OpenStack for more.

** Changed in: horizon
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1393165

Title:
  horizon install apache config debian jessie default site

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When the installation of horizon on Debian jessie finishes, via

  #apt-get install openstack-dashboard

  although during the installation it asks to remove the 000-default
  apache2 site from /etc/apache2/sites-enabled, after installation the
  site is still enabled, and each time I have to run:

  #a2dissite 000-default
  or
  #rm /etc/apache2/sites-enabled/000-default.conf

  my Debian information:
  #uname -a
  Linux controller 3.16-2-amd64 #1 SMP Debian 3.16.3-2 (2014-09-20) x86_64 
GNU/Linux
  # cat /etc/*version
  jessie/sid

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1393165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393408] Re: hardcoded block device path for NBD server

2014-11-17 Thread Abel Lopez
I don't think this is a bug, nor a hardcoded OpenStack-specific value. This
is how Linux implements sysfs. If the nbd module is loaded, a sysfs entry
will exist in /sys/block/nbd0
https://www.kernel.org/pub/linux/kernel/people/mochel/doc/papers/ols-2005/mochel.pdf

** Changed in: nova
   Status: New = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1393408

Title:
  hardcoded block device path for NBD server

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Nova uses a hardcoded block device path for the NBD server.
  See: 
https://github.com/openstack/nova/blob/master/nova/virt/disk/mount/nbd.py#L63

  For example, as a deployer I want to pick my own path to NBD storage,
  but it's hardcoded. It is also not clear (there is no documentation)
  where the NBD storage should be.

  The proper solution is to propose a new configuration opt:
  Name: nbd_storage_path
  Type: String
  Default: /sys/block/nbd1
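
  A sketch of the proposed option using oslo config; the default shown
  here and the registration point are assumptions, only the shape of the
  opt comes from the proposal above:

      from oslo.config import cfg

      nbd_opts = [
          cfg.StrOpt('nbd_storage_path',
                     # Default is an assumption; the proposal suggests a
                     # path under /sys/block.
                     default='/sys/block',
                     help='Base sysfs path under which nbd devices '
                          '(nbd0, nbd1, ...) are looked up.'),
      ]

      cfg.CONF.register_opts(nbd_opts)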

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1393408/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393623] [NEW] test_postgresql_opportunistically fails in gate-nova-python27

2014-11-17 Thread Davanum Srinivas (DIMS)
Public bug reported:

log stash query:
There is 1 other session using the database

log stash url:
http://logstash.openstack.org/#eyJzZWFyY2giOiJcIlRoZXJlIGlzIDEgb3RoZXIgc2Vzc2lvbiB1c2luZyB0aGUgZGF0YWJhc2VcIiIsImZpZWxkcyI6WyJidWlsZF9jaGFuZ2UiXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDE2MjcyMzU5NzkyfQ==

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1393623

Title:
  test_postgresql_opportunistically fails in gate-nova-python27

Status in OpenStack Compute (Nova):
  New

Bug description:
  log stash query:
  There is 1 other session using the database

  log stash url:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcIlRoZXJlIGlzIDEgb3RoZXIgc2Vzc2lvbiB1c2luZyB0aGUgZGF0YWJhc2VcIiIsImZpZWxkcyI6WyJidWlsZF9jaGFuZ2UiXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDE2MjcyMzU5NzkyfQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1393623/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393623] Re: test_postgresql_opportunistically fails in gate-nova-python27

2014-11-17 Thread Davanum Srinivas (DIMS)
Looks like it could be from oslo.db, comparing pip freeze output from a
good run and a bad run from https://review.openstack.org/#/c/134332/

good run has oslo.db==1.0.2 and bad run has oslo.db==1.1.0

good run - 
http://logs.openstack.org/32/134332/10/check/gate-nova-python27/25fa889/console.html
bad run - 
http://logs.openstack.org/32/134332/17/check/gate-nova-python27/8e1118c/console.html

** Also affects: oslo.db
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1393623

Title:
  test_postgresql_opportunistically fails in gate-nova-python27

Status in OpenStack Compute (Nova):
  New
Status in Oslo Database library:
  New

Bug description:
  log stash query:
  There is 1 other session using the database

  log stash url:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcIlRoZXJlIGlzIDEgb3RoZXIgc2Vzc2lvbiB1c2luZyB0aGUgZGF0YWJhc2VcIiIsImZpZWxkcyI6WyJidWlsZF9jaGFuZ2UiXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDE2MjcyMzU5NzkyfQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1393623/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393633] [NEW] test_postgresql_opportunistically fails with database openstack_citest is being accessed by other users

2014-11-17 Thread Matt Riedemann
Public bug reported:

Looks like this was previously fixed under bug 1328997 but this is back:

http://logs.openstack.org/72/135072/1/check/gate-nova-
python27/ba44ca9/console.html#_2014-11-17_22_51_24_244

2014-11-17 22:51:24.244 | Captured traceback:
2014-11-17 22:51:24.244 | ~~~
2014-11-17 22:51:24.244 | Traceback (most recent call last):
2014-11-17 22:51:24.244 |   File nova/tests/unit/db/test_migrations.py, 
line 138, in test_postgresql_opportunistically
2014-11-17 22:51:24.245 | self._test_postgresql_opportunistically()
2014-11-17 22:51:24.245 |   File nova/tests/unit/db/test_migrations.py, 
line 429, in _test_postgresql_opportunistically
2014-11-17 22:51:24.245 | self._reset_database(database)
2014-11-17 22:51:24.245 |   File nova/tests/unit/db/test_migrations.py, 
line 336, in _reset_database
2014-11-17 22:51:24.245 | self._reset_pg(conn_pieces)
2014-11-17 22:51:24.245 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo/concurrency/lockutils.py,
 line 311, in inner
2014-11-17 22:51:24.245 | return f(*args, **kwargs)
2014-11-17 22:51:24.245 |   File nova/tests/unit/db/test_migrations.py, 
line 245, in _reset_pg
2014-11-17 22:51:24.245 | self.execute_cmd(droptable)
2014-11-17 22:51:24.245 |   File nova/tests/unit/db/test_migrations.py, 
line 228, in execute_cmd
2014-11-17 22:51:24.245 | Failed to run: %s\n%s % (cmd, output))
2014-11-17 22:51:24.246 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 348, in assertEqual
2014-11-17 22:51:24.246 | self.assertThat(observed, matcher, message)
2014-11-17 22:51:24.246 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 433, in assertThat
2014-11-17 22:51:24.246 | raise mismatch_error
2014-11-17 22:51:24.246 | MismatchError: !=:
2014-11-17 22:51:24.246 | reference = ''
2014-11-17 22:51:24.246 | actual= u'''\
2014-11-17 22:51:24.246 | Unexpected error while running command.
2014-11-17 22:51:24.246 | Command: psql -w -U openstack_citest -h localhost 
-c 'drop database if exists openstack_citest;' -d postgres
2014-11-17 22:51:24.246 | Exit code: 1
2014-11-17 22:51:24.246 | Stdout: u''
2014-11-17 22:51:24.247 | Stderr: u'ERROR:  database openstack_citest is 
being accessed by other users\\nDETAIL:  There is 1 other session using the 
database.\\n\
2014-11-17 22:51:24.247 | : Failed to run: psql -w -U openstack_citest -h 
localhost -c 'drop database if exists openstack_citest;' -d postgres
2014-11-17 22:51:24.247 | Unexpected error while running command.
2014-11-17 22:51:24.247 | Command: psql -w -U openstack_citest -h localhost 
-c 'drop database if exists openstack_citest;' -d postgres
2014-11-17 22:51:24.247 | Exit code: 1
2014-11-17 22:51:24.247 | Stdout: u''
2014-11-17 22:51:24.247 | Stderr: u'ERROR:  database openstack_citest is 
being accessed by other users\nDETAIL:  There is 1 other session using the 
database.\n'
2014-11-17 22:51:24.247 | Traceback (most recent call last):
2014-11-17 22:51:24.247 | _StringException: Empty attachments:
2014-11-17 22:51:24.247 |   pythonlogging:''
2014-11-17 22:51:24.247 |   stderr
2014-11-17 22:51:24.248 |   stdout


http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQ29tbWFuZDogcHNxbCAtdyAtVSBvcGVuc3RhY2tfY2l0ZXN0IC1oIGxvY2FsaG9zdCAtYyAnZHJvcCBkYXRhYmFzZSBpZiBleGlzdHMgb3BlbnN0YWNrX2NpdGVzdDsnIC1kIHBvc3RncmVzXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIgQU5EIGJ1aWxkX25hbWU6XCJnYXRlLW5vdmEtcHl0aG9uMjdcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQxNjI3NTg1MDI4MSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

516 hits in 7 days, check and gate, all failures.
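
A common mitigation (a sketch only, not the committed fix) is to kick any
leftover sessions before the drop; on PostgreSQL >= 9.2 the
pg_stat_activity column is pid (procpid on older releases):

    import subprocess

    TERMINATE_SQL = (
        "SELECT pg_terminate_backend(pid) FROM pg_stat_activity "
        "WHERE datname = 'openstack_citest' AND pid <> pg_backend_pid();")

    base_cmd = ['psql', '-w', '-U', 'openstack_citest', '-h', 'localhost',
                '-d', 'postgres', '-c']
    # Close the stray session(s) first, then the drop can succeed.
    subprocess.check_call(base_cmd + [TERMINATE_SQL])
    subprocess.check_call(
        base_cmd + ['drop database if exists openstack_citest;'])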

** Affects: nova
 Importance: High
 Status: Confirmed

** Affects: oslo.db
 Importance: Undecided
 Status: New


** Tags: db postgresql testing

** Changed in: nova
   Importance: Undecided = High

** Changed in: nova
   Status: New = Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1393633

Title:
  test_postgresql_opportunistically fails with database
  openstack_citest is being accessed by other users

Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo Database library:
  New

Bug description:
  Looks like this was previously fixed under bug 1328997 but this is
  back:

  http://logs.openstack.org/72/135072/1/check/gate-nova-
  python27/ba44ca9/console.html#_2014-11-17_22_51_24_244

  2014-11-17 22:51:24.244 | Captured traceback:
  2014-11-17 22:51:24.244 | ~~~
  2014-11-17 22:51:24.244 | Traceback 

[Yahoo-eng-team] [Bug 1393633] Re: test_postgresql_opportunistically fails with database openstack_citest is being accessed by other users

2014-11-17 Thread Matt Riedemann
This showed up again on 11/17 and oslo.db 1.1.0 was just released today:

https://pypi.python.org/pypi/oslo.db/1.1.0

** Also affects: oslo.db
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1393633

Title:
  test_postgresql_opportunistically fails with database
  openstack_citest is being accessed by other users

Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo Database library:
  New

Bug description:
  Looks like this was previously fixed under bug 1328997 but this is
  back:

  http://logs.openstack.org/72/135072/1/check/gate-nova-
  python27/ba44ca9/console.html#_2014-11-17_22_51_24_244

  2014-11-17 22:51:24.244 | Captured traceback:
  2014-11-17 22:51:24.244 | ~~~
  2014-11-17 22:51:24.244 | Traceback (most recent call last):
  2014-11-17 22:51:24.244 |   File nova/tests/unit/db/test_migrations.py, 
line 138, in test_postgresql_opportunistically
  2014-11-17 22:51:24.245 | self._test_postgresql_opportunistically()
  2014-11-17 22:51:24.245 |   File nova/tests/unit/db/test_migrations.py, 
line 429, in _test_postgresql_opportunistically
  2014-11-17 22:51:24.245 | self._reset_database(database)
  2014-11-17 22:51:24.245 |   File nova/tests/unit/db/test_migrations.py, 
line 336, in _reset_database
  2014-11-17 22:51:24.245 | self._reset_pg(conn_pieces)
  2014-11-17 22:51:24.245 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo/concurrency/lockutils.py,
 line 311, in inner
  2014-11-17 22:51:24.245 | return f(*args, **kwargs)
  2014-11-17 22:51:24.245 |   File nova/tests/unit/db/test_migrations.py, 
line 245, in _reset_pg
  2014-11-17 22:51:24.245 | self.execute_cmd(droptable)
  2014-11-17 22:51:24.245 |   File nova/tests/unit/db/test_migrations.py, 
line 228, in execute_cmd
  2014-11-17 22:51:24.245 | Failed to run: %s\n%s % (cmd, output))
  2014-11-17 22:51:24.246 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 348, in assertEqual
  2014-11-17 22:51:24.246 | self.assertThat(observed, matcher, message)
  2014-11-17 22:51:24.246 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 433, in assertThat
  2014-11-17 22:51:24.246 | raise mismatch_error
  2014-11-17 22:51:24.246 | MismatchError: !=:
  2014-11-17 22:51:24.246 | reference = ''
  2014-11-17 22:51:24.246 | actual= u'''\
  2014-11-17 22:51:24.246 | Unexpected error while running command.
  2014-11-17 22:51:24.246 | Command: psql -w -U openstack_citest -h 
localhost -c 'drop database if exists openstack_citest;' -d postgres
  2014-11-17 22:51:24.246 | Exit code: 1
  2014-11-17 22:51:24.246 | Stdout: u''
  2014-11-17 22:51:24.247 | Stderr: u'ERROR:  database openstack_citest 
is being accessed by other users\\nDETAIL:  There is 1 other session using the 
database.\\n\
  2014-11-17 22:51:24.247 | : Failed to run: psql -w -U openstack_citest -h 
localhost -c 'drop database if exists openstack_citest;' -d postgres
  2014-11-17 22:51:24.247 | Unexpected error while running command.
  2014-11-17 22:51:24.247 | Command: psql -w -U openstack_citest -h 
localhost -c 'drop database if exists openstack_citest;' -d postgres
  2014-11-17 22:51:24.247 | Exit code: 1
  2014-11-17 22:51:24.247 | Stdout: u''
  2014-11-17 22:51:24.247 | Stderr: u'ERROR:  database openstack_citest 
is being accessed by other users\nDETAIL:  There is 1 other session using the 
database.\n'
  2014-11-17 22:51:24.247 | Traceback (most recent call last):
  2014-11-17 22:51:24.247 | _StringException: Empty attachments:
  2014-11-17 22:51:24.247 |   pythonlogging:''
  2014-11-17 22:51:24.247 |   stderr
  2014-11-17 22:51:24.248 |   stdout

  
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQ29tbWFuZDogcHNxbCAtdyAtVSBvcGVuc3RhY2tfY2l0ZXN0IC1oIGxvY2FsaG9zdCAtYyAnZHJvcCBkYXRhYmFzZSBpZiBleGlzdHMgb3BlbnN0YWNrX2NpdGVzdDsnIC1kIHBvc3RncmVzXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIgQU5EIGJ1aWxkX25hbWU6XCJnYXRlLW5vdmEtcHl0aG9uMjdcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQxNjI3NTg1MDI4MSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  516 hits in 7 days, check and gate, all failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1393633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352808] Re: when enable_snat is updated from false to true in a DVR vm fails to reach external network using snat

2014-11-17 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete = Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1352808

Title:
  when enable_snat is updated from false to true in a DVR vm fails to
  reach external network using snat

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  1. create network, subnet, dvr; perform router-interface-add
  2. boot a vm
  3. attach the router gateway with disable-snat
  4. now update the router with enable-snat=true

  The update happens, however we see that an extra sg port is created in
  the snat namespace for the interface added on router update, which
  causes external network unreachability for the vm (here iptables rules
  are added for the vm on router update)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1352808/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370521] Re: allow creating your own user role from horizon

2014-11-17 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete = Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1370521

Title:
  allow creating your own user role from horizon

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  I think it would be helpful to allow creating a new role with
  personalized permissions (storage access or network access and so on).

  We have the keystone role-create command, but if we could create a role
  from the UI with specific permissions, allowing a user to create their
  own users, it would be a great addition to Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1370521/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393655] [NEW] An unknown exception "Unable to connect to Neutron" occurred when clicking the Instances panel in the OpenStack dashboard

2014-11-17 Thread onelab
Public bug reported:

An unknown exception "Unable to connect to Neutron" occurred when
clicking the Instances panel in the OpenStack dashboard.

We created multiple instances, each taking several ports; then, on
clicking the Instances panel in the OpenStack dashboard, the exception
"Unable to connect to Neutron" occurred.

Please see the log below:

2014-11-18 01:52:26,895 49960 ERROR neutronclient.v2_0.client tonyaw 
self.httpclient.endpoint_url = http://135.252.135.193:9696/ 
action=/v2.0/floatingips.json?port_id=02657989-8557-453c-9c46-1ac31997e885port_id=03796381-2327-442f-bd03-3cdb010ea2fcport_id=040f7540-d69a-4321-88d5-fb8a395384dd
[the query string continues with roughly a hundred more port_id
parameters; the rest of the message is truncated in the archive]
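
The request fails to scale because every port_id is packed into a single
query string; a hedged sketch of a client-side workaround that batches the
filter (the client object, port_ids list, and batch size are assumptions):

    def chunks(seq, size):
        for i in range(0, len(seq), size):
            yield seq[i:i + size]

    floating_ips = []
    # 'neutron' is a neutronclient.v2_0.client.Client instance and
    # 'port_ids' the full list; 50 per request is an assumed safe size.
    for batch in chunks(port_ids, 50):
        floating_ips.extend(
            neutron.list_floatingips(port_id=batch)['floatingips'])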
 

[Yahoo-eng-team] [Bug 1393659] [NEW] Project list not updated after new project created

2014-11-17 Thread Thomas Bechtold
Public bug reported:

When I create a new project via the dashboard, the new project is not
shown in the project list in the header bar. I have to logout/login to
get the new project listed.

Steps to reproduce:

1) Create new project under http://dashboard/identity/create
2) Try to select the project in the headerbar - project not listed
3) logout/login
4) Try to select the project in the headerbar - project listed now

This happens with the Juno version.

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

- When I create a new project via the dashboard, the new project is not shown in 
the project list in the header bar. I have to logout/login to get the new 
project listed.
+ When I create a new project via the dashboard, the new project is not
+ shown in the project list in the header bar. I have to logout/login to
+ get the new project listed.
+ 
+ Steps to reproduce:
+ 
+ 1) Create new project under http://dashboard/identity/create
+ 2) Try to select the project in the headerbar - project not listed
+ 3) logout/login
+ 4) Try to select the project in the headerbar - project listed now
+ 
  This happens with the Juno version.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1393659

Title:
  Project list not updated after new project created

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I create a new project via the dashboard, the new project is not
  shown in the project list in the header bar. I have to logout/login to
  get the new project listed.

  Steps to reproduce:

  1) Create new project under http://dashboard/identity/create
  2) Try to select the project in the headerbar - project not listed
  3) logout/login
  4) Try to select the project in the headerbar - project listed now

  This happens with the Juno version.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1393659/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393026] Re: Synchronized decorator is not reentrant

2014-11-17 Thread Eugene Nikanorov
I don't think we need to make the lock reentrant. It's better to avoid
cases where such a capability is needed.

** Changed in: neutron
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393026

Title:
  Synchronized decorator is not reentrant

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  It's impossible to call a synchronized method from another synchronized
  method with the same lock name, or to use any kind of recursion. The
  following test case hangs forever:

  class TestLockUtils(base.BaseTestCase):
  
  def test_synchronized(self):
  
  @utils.synchronized('ab_test')
  def a():
  b()
  
  @utils.synchronized('ab_test')
  def b():
  pass
  
  @utils.synchronized('c_test')
  def c(n):
  if n == 0:
  return
  c(n -1)

  a()
  c(5)

  The same logic works fine in Java:

  public class JavaSynch {
  public synchronized void a() {
  b();
  }

  public synchronized void b() {
  }

  public synchronized void c(int n) {
  if (n == 0)
  return;
  c(n - 1);  // n-- would pass n unchanged and recurse forever
  }

  public static void main(String[] args) {
  JavaSynch js = new JavaSynch();
  js.a();
  }
  }

  The synchronized decorator from neutron.openstack.common.lockutils uses
  a semaphore, which can't track the current owner of the lock. It would
  be good to use RLock, but I understand that eventlet greenthreads must
  be taken into account.
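
  For comparison, a reentrant variant is straightforward with
  threading.RLock when only real threads are involved (a sketch only; it
  ignores the eventlet concerns the lockutils semaphore exists for):

      import functools
      import threading

      _locks = {}

      def synchronized(name):
          lock = _locks.setdefault(name, threading.RLock())

          def decorator(f):
              @functools.wraps(f)
              def inner(*args, **kwargs):
                  # RLock lets the owning thread re-acquire, so a() can
                  # call b() under the same lock name without deadlock.
                  with lock:
                      return f(*args, **kwargs)
              return inner
          return decorator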

  Currently this behavior is blocking the fix for
  https://bugs.launchpad.net/neutron/+bug/1197176, because delete_network
  should call delete_subnet, but these methods are synchronized using the
  same lock in the BigSwitch plugin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1393026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337216] Re: Table 'agents' is missing for bigswitch plugin

2014-11-17 Thread Ann Kamyshnikova
** Changed in: neutron
   Status: In Progress = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1337216

Title:
  Table 'agents' is missing for bigswitch plugin

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Running migrations for the Bigswitch plugin got an error:
  http://paste.openstack.org/show/85380/. Creating the table
  'networkdhcpagentbindings' requires the table 'agents' to exist, but
  the Bigswitch plugin was not added to the migration_for_plugins list in
  migration 511471cc46b_agent_ext_model_supp.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1337216/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393691] [NEW] test_migration needs some refactor due to oslo.db 1.1.0 release

2014-11-17 Thread Ann Kamyshnikova
Public bug reported:

test_migration contains some methods, such as the overridden
compare_server_default and compare_foreign_keys, that were added directly
to the ModelsMigrationsSync class in oslo.db only in 1.1.0.

Also, as the migrations refactoring is now fully finished, there is no
need to have separate classes for each plugin.
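
A rough sketch of the direction, leaning on the oslo.db 1.1.0 base class
instead of local copies (method bodies are placeholders, not Neutron's
actual test code):

    import sqlalchemy as sa

    from oslo.db.sqlalchemy import test_migrations

    class TestModelsMigrations(test_migrations.ModelsMigrationsSync):
        # compare_server_default and compare_foreign_keys are provided by
        # ModelsMigrationsSync itself as of oslo.db 1.1.0, so the local
        # overrides can simply be dropped.

        def get_engine(self):
            return self.engine  # engine from the opportunistic fixture

        def get_metadata(self):
            # Neutron would return its models' MetaData; an empty one
            # keeps the sketch self-contained.
            return sa.MetaData()

        def db_sync(self, engine):
            # Neutron runs its alembic migrations against 'engine' here.
            pass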

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: New


** Tags: unittest

** Changed in: neutron
 Assignee: (unassigned) = Ann Kamyshnikova (akamyshnikova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393691

Title:
  test_migration needs some refactor due to oslo.db 1.1.0 release

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  test_migration contains some methods, such as the overridden
  compare_server_default and compare_foreign_keys, that were added
  directly to the ModelsMigrationsSync class in oslo.db only in 1.1.0.

  Also, as the migrations refactoring is now fully finished, there is no
  need to have separate classes for each plugin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1393691/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp