[Yahoo-eng-team] [Bug 1545584] Re: OVN devstack: Network creation fails when a VM with provider and private network interface is activated

2016-02-14 Thread Martin Hickey
** Project changed: neutron => networking-ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1545584

Title:
  OVN devstack: Network creation fails when a VM with provider and
  private network interface is activated

Status in networking-ovn:
  New

Bug description:
  We have a 5-node OVN devstack installation. We created networks,
  subnets, and routers, and activated VMs on the private network. We then
  added a provider network and activated VMs with both private and
  provider network interfaces. In this devstack deployment we also
  started two ovsdb servers, one on port 6640 and another on 6641. OVSDB
  6641 connects to the OVN controller plug-in.

  When a VM with both a private and a provider interface is activated, I
  see an internal server error; the neutron server log shows the
  connection was lost in the middle of a MySQL operation.

  The Rally benchmark is enhanced to activate a VM with both network
  interfaces.

  Rally errors:
  2016-02-12 13:46:36.403 28528 DEBUG neutronclient.client [-] RESP: 500 {'Date': 'Fri, 12 Feb 2016 19:46:36 GMT', 'Connection': 'keep-alive', 'Content-Type': 'application/json; charset=UTF-8', 'Content-Length': '150', 'X-Openstack-Request-Id': 'req-a5d49508-8501-4802-a46b-674d36a46d23'} {"NeutronError": {"message": "Request Failed: internal server error while processing your request.", "type": "HTTPInternalServerError", "detail": ""}} http_log_resp /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:146
  2016-02-12 13:46:36.403 28528 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"message": "Request Failed: internal server error while processing your request.", "type": "HTTPInternalServerError", "detail": ""}} _handle_fault_response /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:176
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner [-] Request Failed: internal server error while processing your request.
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner Traceback (most recent call last):
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/rally/task/runner.py", line 64, in _run_scenario_once
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     method_name)(**kwargs) or scenario_output
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/home/stack/sahil/OVN/rally_runs/cnps_ovn.py", line 100, in boot_server_overlay_network
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     self.wait_for_dhcp_port_up()
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/home/stack/sahil/OVN/rally_runs/cnps_ovn.py", line 200, in wait_for_dhcp_port_up
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     dhcp_port_id = self._get_dhcp_port(network_id, poll_count=poll_count)["id"]
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/rally/cnp/cnp_base_scenario.py", line 510, in _get_dhcp_port
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     device_owner=device_owner)
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 102, in with_params
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     ret = self.function(instance, *args, **kwargs)
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 547, in list_ports
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     **_params)
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 307, in list
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     for r in self._pagination(collection, path, **params):
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 320, in _pagination
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     res = self.get(path, params=params)
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 293, in get
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     headers=headers, params=params)
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 270, in retry_request
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     headers=headers, params=params)
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     res = self.get(path, params=params)
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 293, in get
  2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     headers=headers, params=params)

[Yahoo-eng-team] [Bug 1544522] Re: Don't use Mock.called_once_with that does not exist

2016-02-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/279134
Committed: 
https://git.openstack.org/cgit/openstack/cinder/commit/?id=22cb2e81a6d5dfad022fae5fca7601e2ae9aab88
Submitter: Jenkins
Branch: master

commit 22cb2e81a6d5dfad022fae5fca7601e2ae9aab88
Author: Javeme 
Date:   Thu Feb 11 20:09:41 2016 +0800

Don't use Mock.called_once_with that does not exist

class mock.Mock has no method called_once_with; only
assert_called_once_with exists. There are still some places where we
use the called_once_with method; this patch corrects them.

NOTE: called_once_with() does nothing because it's a mock object.

Closes-Bug: #1544522
Change-Id: Iac7c029a1cc66439f43d441bc6d0832686536961


** Changed in: cinder
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544522

Title:
  Don't use Mock.called_once_with that does not exist

Status in Cinder:
  Fix Released
Status in neutron:
  Confirmed
Status in Sahara:
  Fix Released

Bug description:
  class mock.Mock has no method "called_once_with"; it only has
  "assert_called_once_with". There are still some places where we use the
  called_once_with method; we should correct them.

  NOTE: called_once_with() does nothing because Mock auto-creates it as
  just another mock attribute.
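
  A quick demonstration of the hazard with plain unittest.mock (nothing
  project-specific): the typo'd name silently "succeeds" because Mock
  fabricates any attribute on access.

```python
from unittest import mock

m = mock.Mock()
m.do_work(42)

# Typo'd name: Mock auto-creates 'called_once_with' as a child mock, so
# this "check" silently passes regardless of the real call arguments.
result = m.called_once_with(99)
assert isinstance(result, mock.Mock)

# The real assertion method does verify arguments and raises on mismatch.
try:
    m.do_work.assert_called_once_with(99)
    mismatch_detected = False
except AssertionError:
    mismatch_detected = True
assert mismatch_detected

m.do_work.assert_called_once_with(42)  # correct usage passes
```

  This is why the typo slipped through test runs: the broken "assertion"
  can never fail.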

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1544522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1545584] [NEW] OVN devstack: Network creation fails when a VM with provider and private network interface is activated

2016-02-14 Thread Mala Anand
Public bug reported:

We have a 5-node OVN devstack installation. We created networks,
subnets, and routers, and activated VMs on the private network. We then
added a provider network and activated VMs with both private and
provider network interfaces. In this devstack deployment we also started
two ovsdb servers, one on port 6640 and another on 6641. OVSDB 6641
connects to the OVN controller plug-in.

When a VM with both a private and a provider interface is activated, I
see an internal server error; the neutron server log shows the
connection was lost in the middle of a MySQL operation.

The Rally benchmark is enhanced to activate a VM with both network
interfaces.

Rally errors:
2016-02-12 13:46:36.403 28528 DEBUG neutronclient.client [-] RESP: 500 {'Date': 'Fri, 12 Feb 2016 19:46:36 GMT', 'Connection': 'keep-alive', 'Content-Type': 'application/json; charset=UTF-8', 'Content-Length': '150', 'X-Openstack-Request-Id': 'req-a5d49508-8501-4802-a46b-674d36a46d23'} {"NeutronError": {"message": "Request Failed: internal server error while processing your request.", "type": "HTTPInternalServerError", "detail": ""}} http_log_resp /usr/lib/python2.7/site-packages/neutronclient/common/utils.py:146
2016-02-12 13:46:36.403 28528 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"message": "Request Failed: internal server error while processing your request.", "type": "HTTPInternalServerError", "detail": ""}} _handle_fault_response /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py:176
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner [-] Request Failed: internal server error while processing your request.
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner Traceback (most recent call last):
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/rally/task/runner.py", line 64, in _run_scenario_once
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     method_name)(**kwargs) or scenario_output
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/home/stack/sahil/OVN/rally_runs/cnps_ovn.py", line 100, in boot_server_overlay_network
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     self.wait_for_dhcp_port_up()
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/home/stack/sahil/OVN/rally_runs/cnps_ovn.py", line 200, in wait_for_dhcp_port_up
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     dhcp_port_id = self._get_dhcp_port(network_id, poll_count=poll_count)["id"]
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/rally/cnp/cnp_base_scenario.py", line 510, in _get_dhcp_port
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     device_owner=device_owner)
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 102, in with_params
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     ret = self.function(instance, *args, **kwargs)
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 547, in list_ports
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     **_params)
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 307, in list
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     for r in self._pagination(collection, path, **params):
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 320, in _pagination
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     res = self.get(path, params=params)
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 293, in get
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     headers=headers, params=params)
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 270, in retry_request
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     headers=headers, params=params)
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     res = self.get(path, params=params)
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 293, in get
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     headers=headers, params=params)
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 270, in retry_request
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner     headers=headers, params=params)
2016-02-12 13:46:36.405 28528 ERROR rally.task.runner   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 211, in do_request
2016-02-12 13:46:36.405 28528 ERROR
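
The Rally helper named in the traceback lives in the reporter's private
cnps_ovn.py and is not shown in the bug. A hypothetical reconstruction
(function names taken from the frames; the behavior is an assumption)
shows why each poll issues the GET /ports request that hit the 500:

```python
import time

# Hypothetical sketch of wait_for_dhcp_port_up / _get_dhcp_port from the
# traceback; list_ports stands in for the neutronclient call that
# returned HTTP 500 in this bug.
def wait_for_dhcp_port_up(list_ports, network_id, poll_count=10, interval=0.0):
    for _ in range(poll_count):
        # Each poll is a GET /v2.0/ports request to neutron-server.
        ports = list_ports(network_id=network_id, device_owner='network:dhcp')
        if ports:
            return ports[0]['id']
        time.sleep(interval)
    raise RuntimeError('DHCP port for %s never appeared' % network_id)

# Fake client for illustration: the DHCP port shows up on the third poll.
calls = {'n': 0}
def fake_list_ports(**kwargs):
    calls['n'] += 1
    return [{'id': 'dhcp-port'}] if calls['n'] >= 3 else []

assert wait_for_dhcp_port_up(fake_list_ports, 'net-1') == 'dhcp-port'
assert calls['n'] == 3
```

Under this shape, every retry lands on the server again, which is why
the 500 (and the underlying lost MySQL connection) surfaces through
list_ports in the trace.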

[Yahoo-eng-team] [Bug 1401728] Re: Routing updates lost when multiple IPs attached to router

2016-02-14 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401728

Title:
  Routing updates lost when multiple IPs attached to router

Status in neutron:
  Expired

Bug description:
  When attempting to run dual stacked networking at the gate
  (https://review.openstack.org/#/c/140128/), IPv4 networking breaks,
  with Tempest scenarios reporting no route to host errors for the
  floating IPs that tempest attempts to SSH into.

  The following errors are reported in the l3 agent log:

  2014-12-11 23:19:58.393 25977 ERROR neutron.agent.l3.agent [-] Ignoring multiple IPs on router port db0953d3-4bd1-4106-9efc-c16cd9a3e922
  2014-12-11 23:19:58.393 25977 ERROR neutron.agent.l3.agent [-] 'subnet'
  2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent Traceback (most recent call last):
  2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent   File "/opt/stack/new/neutron/neutron/common/utils.py", line 341, in call
  2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent     return func(*args, **kwargs)
  2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent   File "/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 646, in process_router
  2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent     self._set_subnet_info(ex_gw_port)
  2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent   File "/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 562, in _set_subnet_info
  2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent     prefixlen = netaddr.IPNetwork(port['subnet']['cidr']).prefixlen
  2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent KeyError: 'subnet'
  2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/eventlet/greenpool.py", line 82, in _spawn_n_impl
      func(*args, **kwargs)
    File "/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 1537, in _process_router_update
      self._process_router_if_compatible(router)
    File "/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 1512, in _process_router_if_compatible
      self.process_router(ri)
    File "/opt/stack/new/neutron/neutron/common/utils.py", line 344, in call
      self.logger(e)
    File "/usr/local/lib/python2.7/dist-packages/oslo/utils/excutils.py", line 82, in __exit__
      six.reraise(self.type_, self.value, self.tb)
    File "/opt/stack/new/neutron/neutron/common/utils.py", line 341, in call
      return func(*args, **kwargs)
    File "/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 646, in process_router
      self._set_subnet_info(ex_gw_port)
    File "/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 562, in _set_subnet_info
      prefixlen = netaddr.IPNetwork(port['subnet']['cidr']).prefixlen
  KeyError: 'subnet'

  http://logs.openstack.org/28/140128/4/check/check-tempest-dsvm-neutron-full/440ec4e/logs/screen-q-l3.txt.gz
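
  The failing line assumes exactly one subnet per router port. A minimal
  sketch of the failure mode (stdlib ipaddress stands in for netaddr, and
  the port-dict shapes are illustrative assumptions about the agent's
  payload, not the exact Neutron RPC format):

```python
import ipaddress

# Mirrors the failing l3-agent line: _set_subnet_info expects the port
# payload to carry a single top-level 'subnet' dict.
def subnet_prefixlen(port):
    return ipaddress.ip_network(port['subnet']['cidr']).prefixlen

# Single-subnet port: the lookup works as the agent expects.
v4_port = {'subnet': {'cidr': '172.24.4.0/24'}}
assert subnet_prefixlen(v4_port) == 24

# Dual-stacked gateway port: two fixed IPs across two subnets and no
# single 'subnet' key, so the same lookup raises the KeyError above.
dual_stack_port = {
    'fixed_ips': [
        {'subnet_id': 'v4-subnet', 'ip_address': '172.24.4.82'},
        {'subnet_id': 'v6-subnet', 'ip_address': '2001:db8::2'},
    ],
}
try:
    subnet_prefixlen(dual_stack_port)
    raised = False
except KeyError:
    raised = True
assert raised  # same KeyError: 'subnet' as in the agent traceback
```

  The "Ignoring multiple IPs" warning followed by KeyError: 'subnet' is
  consistent with this shape: the multi-IP port is skipped when building
  the subnet info, then the gateway-port code dereferences a key that was
  never set.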

  Tempest reports no route to host:

  2014-12-11 22:57:04.385 30680 WARNING tempest.common.ssh [-] Failed to establish authenticated ssh connection to cirros@172.24.4.82 ([Errno 113] No route to host). Number attempts: 1. Retry after 2 seconds.

  http://logs.openstack.org/28/140128/4/check/check-tempest-dsvm-neutron-full/440ec4e/logs/tempest.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401728/+subscriptions



[Yahoo-eng-team] [Bug 1515239] Re: Block live migration fails when vm is being used

2016-02-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1515239

Title:
  Block live migration fails when vm is being used

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Block live migration unexpectedly fails when the vm to be migrated has
  some memory usage.

  This error occurred to me with a Sahara cluster. The steps are:

  1 - Create a cluster
  2 - Migrating one VM of this idle cluster is OK
  3 - Launch a wordcount job on the cluster
  4 - Past a certain point in the job (when the RAM is dirtier), trying to migrate the same VM fails.

  My problem occurred with a specific job, but I think it may occur with
  any memory-bound process.
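
  Why dirty RAM matters: pre-copy live migration only converges while
  pages are transferred faster than the running guest re-dirties them. A
  toy round-based model (illustrative only, not nova or libvirt code;
  rates and round counts are made up) makes the effect concrete:

```python
# Toy pre-copy model: each round copies up to `transfer` pages of the
# remaining dirty set; if anything is left, the still-running guest
# dirties another `dirty` pages before the next round.
def rounds_to_converge(total, transfer, dirty, max_rounds=100):
    remaining = total
    for r in range(1, max_rounds + 1):
        remaining = max(0, remaining - transfer)
        if remaining == 0:
            return r          # dirty set drained: migration can finish
        remaining += dirty    # guest keeps writing memory
    return None               # never converges: migration aborts / times out

# Idle-ish guest: dirty rate well below transfer rate -> converges.
assert rounds_to_converge(1000, transfer=500, dirty=100) == 3
# Memory-bound job: dirty rate exceeds transfer rate -> never converges.
assert rounds_to_converge(1000, transfer=500, dirty=600) is None
```

  This matches the reported symptom: the idle cluster migrates fine, and
  the same VM fails once the wordcount job keeps the RAM dirty.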

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1515239/+subscriptions



[Yahoo-eng-team] [Bug 1537667] Re: Unable to retrieve details for network when it has subnets created by admin

2016-02-14 Thread Bao Fangyan
Hi @Itxaka Serrano, I can't reproduce it now either; maybe it was a
problem with my devstack environment. Thanks for your comments; I'll
change the status to Invalid.

** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1537667

Title:
  Unable to retrieve details for network when it has subnets created by
  admin

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Unable to retrieve details for a network, or edit the network, when the
  network has subnets created by admin; the same operations work from the
  CLI.

  Steps to reproduce:
  1. demo tenant creates a network net1
  2. demo tenant creates a subnet sn1 in net1
  3. admin creates a subnet sn2 in net1
  4. Go to demo -> Networks
  5. Click on network net1, or click on the edit network button for net1.

  expected: details for net1 can be retrieved and net1 can be updated.
  observed: ERROR message “Error: Unable to retrieve details for network "71de5613-1238-4de6-8cbb-063e6f698bd7".” or "Error: Unable to retrieve network details."

  However, it can be done with neutron net-show net1 or neutron
  net-update net1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1537667/+subscriptions



[Yahoo-eng-team] [Bug 1544522] Re: Don't use Mock.called_once_with that does not exist

2016-02-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/279126
Committed: 
https://git.openstack.org/cgit/openstack/sahara/commit/?id=7d01fabc5e16de0363afeb55c07fde889eb38a32
Submitter: Jenkins
Branch: master

commit 7d01fabc5e16de0363afeb55c07fde889eb38a32
Author: Javeme 
Date:   Thu Feb 11 19:19:12 2016 +0800

Don't use Mock.called_once_with that does not exist

class mock.Mock has no method called_once_with; only
assert_called_once_with exists. There are still some places where we
use the called_once_with method; this patch corrects them.

NOTE: called_once_with() does nothing because it's a mock object.

Closes-Bug: #1544522
Change-Id: I5698a724bc030b838faa06330a0d3dc77cc0d07a


** Changed in: sahara
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544522

Title:
  Don't use Mock.called_once_with that does not exist

Status in Cinder:
  In Progress
Status in neutron:
  Confirmed
Status in Sahara:
  Fix Released

Bug description:
  class mock.Mock has no method "called_once_with"; it only has
  "assert_called_once_with". There are still some places where we use the
  called_once_with method; we should correct them.

  NOTE: called_once_with() does nothing because Mock auto-creates it as
  just another mock attribute.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1544522/+subscriptions



[Yahoo-eng-team] [Bug 1545455] [NEW] devstack issue: OSError: [Errno 2] No such file or directory: '/opt/stack/horizon/openstack_dashboard/themes/webroot'

2016-02-14 Thread kaishen
Public bug reported:

Hi team,

I am new to OpenStack and am trying Devstack. I am following the
guidelines to set up devstack in a VirtualBox machine (Ubuntu
14.04.3-server-amd64). When I run ./stack.sh, I always get an error on
the console like the following:
--
2016-02-14 13:41:15.957 | Starting Horizon
+ /opt/devstack/lib/horizon:init_horizon:L141:   sudo rm -f '/var/log/apache2/horizon_*'
+ /opt/devstack/lib/horizon:init_horizon:L144:   local django_admin
+ /opt/devstack/lib/horizon:init_horizon:L145:   type -p django-admin
+ /opt/devstack/lib/horizon:init_horizon:L146:   django_admin=django-admin
+ /opt/devstack/lib/horizon:init_horizon:L152:   DJANGO_SETTINGS_MODULE=openstack_dashboard.settings
+ /opt/devstack/lib/horizon:init_horizon:L152:   django-admin collectstatic --noinput
Traceback (most recent call last):
  File "/usr/local/bin/django-admin", line 11, in <module>
    sys.exit(execute_from_command_line())
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 354, in execute_from_command_line
    utility.execute()
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 346, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 394, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 445, in execute
    output = self.handle(*args, **options)
  File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 168, in handle
    collected = self.collect()
  File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 98, in collect
    for path, storage in finder.list(self.ignore_patterns):
  File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/finders.py", line 112, in list
    for path in utils.get_files(storage, ignore_patterns):
  File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/utils.py", line 28, in get_files
    directories, files = storage.listdir(location)
  File "/usr/local/lib/python2.7/dist-packages/django/core/files/storage.py", line 299, in listdir
    for entry in os.listdir(path):
OSError: [Errno 2] No such file or directory: '/opt/stack/horizon/openstack_dashboard/themes/webroot'
+ /opt/devstack/lib/horizon:init_horizon:L1:   exit_trap
+ ./stack.sh:exit_trap:L474:   local r=1
++ ./stack.sh:exit_trap:L475:   jobs -p
+ ./stack.sh:exit_trap:L475:   jobs=
+ ./stack.sh:exit_trap:L478:   [[ -n '' ]]
+ ./stack.sh:exit_trap:L484:   kill_spinner
+ ./stack.sh:kill_spinner:L370:   '[' '!' -z '' ']'
+ ./stack.sh:exit_trap:L486:   [[ 1 -ne 0 ]]
+ ./stack.sh:exit_trap:L487:   echo 'Error on exit'
Error on exit
+ ./stack.sh:exit_trap:L488:   generate-subunit 1455456924 353 fail
+ ./stack.sh:exit_trap:L489:   [[ -z /opt/stack/logs ]]
+ ./stack.sh:exit_trap:L492:   /opt/devstack/tools/worlddump.py -d /opt/stack/logs
World dumping... see /opt/stack/logs/worlddump-2016-02-14-134118.txt for details
+ ./stack.sh:exit_trap:L498:   exit 1
-
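
The failing step is easy to reproduce in isolation: Django's static-file
finder eventually calls os.listdir() on each configured static/theme
directory, and a path that was never created raises OSError with errno
ENOENT, as seen for themes/webroot above. A minimal stdlib sketch:

```python
import errno
import os
import tempfile

# Stand-in for a configured theme directory that was never created on
# disk (like openstack_dashboard/themes/webroot in the log above).
base = tempfile.mkdtemp()
missing = os.path.join(base, 'themes', 'webroot')  # deliberately absent

try:
    os.listdir(missing)
    raised = False
except OSError as exc:
    # [Errno 2] No such file or directory -- same error collectstatic hit.
    raised = (exc.errno == errno.ENOENT)

assert raised
```

So the collectstatic failure points at a missing directory in the
horizon checkout or its configured theme paths rather than at Django
itself.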


Even so, when I try to log in to the dashboard page,
http://127.0.0.1:80/dashboard/auth/login/?next=/dashboard/, it always
gives an error page like the one below:
--
OfflineGenerationError at /auth/login/
You have offline compression enabled but key "6ce96d869b0874d9ca5eb30463383d77" is missing from offline manifest. You may need to run "python manage.py compress".
Request Method: GET
Request URL: http://127.0.0.1:9080/dashboard/auth/login/?next=/dashboard/
Django Version: 1.8.9
Exception Type: OfflineGenerationError
Exception Value:
You have offline compression enabled but key "6ce96d869b0874d9ca5eb30463383d77" is missing from offline manifest. You may need to run "python manage.py compress".
Exception Location: /usr/local/lib/python2.7/dist-packages/compressor/templatetags/compress.py in render_offline, line 71
Python Executable:  /usr/bin/python
Python Version: 2.7.6
Python Path:
['/opt/stack/horizon/openstack_dashboard/wsgi/../..',
 '/opt/stack/keystone',
 '/opt/stack/glance',
 '/opt/stack/cinder',
 '/opt/stack/nova',
 '/opt/stack/horizon',
 '/usr/lib/python2.7',
 '/usr/lib/python2.7/plat-x86_64-linux-gnu',
 '/usr/lib/python2.7/lib-tk',
 '/usr/lib/python2.7/lib-old',
 '/usr/lib/python2.7/lib-dynload',
 '/usr/local/lib/python2.7/dist-packages',
 '/usr/lib/python2.7/dist-packages',
 '/opt/stack/horizon/openstack_dashboard']
Server time: Sunday, 14 February 2016 13:55:11 +
Error during template rendering

In template 

[Yahoo-eng-team] [Bug 1545370] Re: pycryptodome breaks nova/barbican/glance/kite

2016-02-14 Thread Davanum Srinivas (DIMS)
Here's the evidence:
Barbican - https://review.openstack.org/#/c/279977/
Glance - https://review.openstack.org/#/c/279979/
Kite - https://review.openstack.org/#/c/279984/
Kite client - https://review.openstack.org/#/c/279985/


** Summary changed:

- pycryptodome breaks nova 
+ pycryptodome breaks nova/barbican/glance/kite

** Also affects: barbican
   Importance: Undecided
   Status: New

** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1545370

Title:
  pycryptodome breaks nova/barbican/glance/kite

Status in Barbican:
  New
Status in Glance:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  pysaml2===4.0.3 drags in pycryptodome===3.4, which breaks Nova in both
  unit tests and grenade.

  nova.tests.unit.test_crypto.KeyPairTest.test_generate_key_pair_1024_bits

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "nova/tests/unit/test_crypto.py", line 352, in test_generate_key_pair_1024_bits
      (private_key, public_key, fingerprint) = crypto.generate_key_pair(bits)
    File "nova/crypto.py", line 165, in generate_key_pair
      key = paramiko.RSAKey.generate(bits)
    File "/Users/dims/openstack/openstack/nova/.tox/py27/lib/python2.7/site-packages/paramiko/rsakey.py", line 146, in generate
      rsa = RSA.generate(bits, os.urandom, progress_func)
    File "/Users/dims/openstack/openstack/nova/.tox/py27/lib/python2.7/site-packages/Crypto/PublicKey/RSA.py", line 436, in generate
      if e % 2 == 0 or e < 3:
  TypeError: unsupported operand type(s) for %: 'NoneType' and 'int'
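
  The traceback is consistent with an argument-signature drift: paramiko
  passes a third positional argument (progress_func, here None) to
  RSA.generate, and in the drop-in module that pycryptodome installs that
  positional slot is no longer a progress callback, so None lands in the
  public exponent e and `e % 2` raises TypeError. A stand-in sketch
  (illustrative functions with assumed signatures, not the real library
  code):

```python
# Old-style signature: third positional argument is a progress callback.
def generate_old(bits, randfunc=None, progress_func=None, e=65537):
    return ('old', e)

# New-style signature: the progress slot is gone, so a caller's third
# positional argument now lands in the public exponent parameter.
def generate_new(bits, randfunc=None, e=65537):
    if e % 2 == 0 or e < 3:  # raises TypeError when e is None
        raise ValueError('RSA exponent must be a positive odd integer > 2')
    return ('new', e)

progress = None  # what paramiko passed positionally in the traceback

assert generate_old(1024, None, progress) == ('old', 65537)
try:
    generate_new(1024, None, progress)  # progress lands in e=None
    failed = False
except TypeError:  # unsupported operand type(s) for %: 'NoneType' and 'int'
    failed = True
assert failed
```

  The same call site is valid against one signature and broken against
  the other, which is why simply installing pycryptodome under the old
  Crypto namespace breaks these consumers.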

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1545370/+subscriptions



[Yahoo-eng-team] [Bug 1514728] Re: insufficient service name for external process

2016-02-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/243448
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=c000ae3c1e1c9f0e096fd1bc5bd44305a752ade2
Submitter: Jenkins
Branch: master

commit c000ae3c1e1c9f0e096fd1bc5bd44305a752ade2
Author: LIU Yulong 
Date:   Tue Nov 10 16:27:04 2015 +0800

Correct insufficient name for external process in manager log

Let's just make the log more precise.

But note the inconsistent use of ProcessManager:
the following external processes' ProcessManagers
did not set the 'service' param:
1. HA router IP monitor
2. DHCP dnsmasq
3. keepalived (vrrp)
4. metadata-proxy

The following did set it:
1. dibbler
2. radvd

Change-Id: I93b742ff1e52f15e5541ef3e7c8e844c70e8dd6c
Closes-Bug: #1514728


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1514728

Title:
  insufficient service name for external process

Status in neutron:
  Fix Released

Bug description:
  The following external process monitors have an insufficient name in
  the external process manager:
  1. HA router IP
  2. DHCP dnsmasq
  3. keepalived
  4. metadata-proxy

  These monitors produce uninformative log lines like:
  'default-service' for router with uuid xxx-xxx-xxx-xxx-xxx not found. The process should not have died
  respawning 'default-service' for uuid xxx-xxx-xxx-xxx-xxx

  The name 'default-service' is unhelpful to a cloud administrator.
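
  An illustrative sketch of the mechanism (not the actual neutron class;
  names are stand-ins): when the optional 'service' parameter is left
  unset, every monitored process falls back to the same placeholder name
  in the respawn log.

```python
DEFAULT_SERVICE = 'default-service'

class ProcessManager(object):
    """Toy model of a process manager with an optional service label."""

    def __init__(self, uuid, service=None):
        self.uuid = uuid
        # Monitors that skip the param all collapse to the same label.
        self.service = service or DEFAULT_SERVICE

    def respawn_message(self):
        return "respawning '%s' for uuid %s" % (self.service, self.uuid)

dnsmasq = ProcessManager('xxx-xxx-xxx-xxx-xxx')           # param not set
radvd = ProcessManager('yyy-yyy-yyy-yyy-yyy', 'radvd')    # param set

assert 'default-service' in dnsmasq.respawn_message()
assert 'radvd' in radvd.respawn_message()
```

  Passing a real service name per monitor, as dibbler and radvd already
  do, is what makes the log line actionable for an operator.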

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1514728/+subscriptions
