[Yahoo-eng-team] [Bug 1491276] Re: During evacuate cinder-api tries to call cinder-volume of halted node

2015-09-11 Thread Eduard Biceri-Matei
Ok, so I should try to run cinder-volume on a different node.
In our case, though, that might bring other problems: the backend is a
local file system (distributed using an external component) on each
compute node, and we need cinder-volume to run commands locally.
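
For reference, the usual way to avoid pinning volumes to the
cinder-volume service of one node on shared storage is to give every
node the same host identifier in cinder.conf. A sketch, where
"cinder-cluster-1" is an arbitrary placeholder rather than a value from
this setup:

[DEFAULT]
# same value on every node running cinder-volume, so any surviving
# node can service requests for volumes created by a halted peer
host = cinder-cluster-1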

So this ticket is invalid.

Thanks.

** No longer affects: cinder

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491276

Title:
  During evacuate cinder-api tries to call cinder-volume of halted node

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Three-node devstack Juno (2014.2.4) with shared storage.

  One instance running on the second node of the cluster.
  Halted the host; the instance still shows status "Running".
  Tried to evacuate the host but got the following errors:
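
  (The evacuation was presumably triggered with something like the
  following; this is an illustrative invocation, since the exact
  command used is not recorded in this report:

    nova evacuate <server> <target-host> --on-shared-storage)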

  (n-cpu log)

  2015-09-01 19:19:50.291 DEBUG nova.openstack.common.lockutils 
[req-56b30762-dad3-410d-a50d-a74318c43198 admin admin] Semaphore / lock 
released "update_usage" from (pid=16720) inner 
/opt/stack/nova/nova/openstack/common/lockutils.py:275
  2015-09-01 19:19:50.296 ERROR oslo.messaging.rpc.dispatcher 
[req-56b30762-dad3-410d-a50d-a74318c43198 admin admin] Exception during message 
handling: The server has either erred or is incapable of performing the 
requested operation. (HTTP 500) (Request-ID: 
req-dcf02ffd-0916-45b1-b2a3-4623e71d340c)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 430, in decorated_function
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 
139, in inner
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher return 
func(*args, **kwargs)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 88, in wrapped
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher payload)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 71, in wrapped
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 314, in decorated_function
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher 
LOG.warning(msg, e, instance_uuid=instance_uuid)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 286, in decorated_function
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 364, in decorated_function
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 342, in decorated_function
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2015-09-01 19:19:50.296 TRACE 

[Yahoo-eng-team] [Bug 1491276] Re: During evacuate cinder-api tries to call cinder-volume of halted node

2015-09-10 Thread Eduard Biceri-Matei
(bump)

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491276

Title:
  During evacuate cinder-api tries to call cinder-volume of halted node

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Three-node devstack Juno (2014.2.4) with shared storage.

  One instance running on the second node of the cluster.
  Halted the host; the instance still shows status "Running".
  Tried to evacuate the host but got the following errors:

  (n-cpu log)

  2015-09-01 19:19:50.291 DEBUG nova.openstack.common.lockutils 
[req-56b30762-dad3-410d-a50d-a74318c43198 admin admin] Semaphore / lock 
released "update_usage" from (pid=16720) inner 
/opt/stack/nova/nova/openstack/common/lockutils.py:275
  2015-09-01 19:19:50.296 ERROR oslo.messaging.rpc.dispatcher 
[req-56b30762-dad3-410d-a50d-a74318c43198 admin admin] Exception during message 
handling: The server has either erred or is incapable of performing the 
requested operation. (HTTP 500) (Request-ID: 
req-dcf02ffd-0916-45b1-b2a3-4623e71d340c)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 430, in decorated_function
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 
139, in inner
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher return 
func(*args, **kwargs)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 88, in wrapped
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher payload)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 71, in wrapped
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 314, in decorated_function
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher 
LOG.warning(msg, e, instance_uuid=instance_uuid)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 286, in decorated_function
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 364, in decorated_function
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 342, in decorated_function
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-09-01 19:19:50.296 TRACE oslo.messaging.rpc.dispatcher   File 

[Yahoo-eng-team] [Bug 1483535] [NEW] Cannot create image: NotAuthenticated

2015-08-11 Thread Eduard Biceri-Matei
Public bug reported:

Devstack Juno (2014.2.4) on Ubuntu 14.04.
Local.conf:

[[local|localrc]]
LOGFILE=/opt/stack/logs/stack.sh.log
LOGDIR=/opt/stack/logs
HOST_IP=192.168.10.214
FLAT_INTERFACE=eth0
FIXED_RANGE=172.22.10.0/24
FIXED_NETWORK_SIZE=255
FLOATING_RANGE=192.168.10.0/24
MULTI_HOST=1
ADMIN_PASSWORD=PASSW
MYSQL_PASSWORD=PASSW
RABBIT_PASSWORD=PASSW
SERVICE_PASSWORD=PASSW
SERVICE_TOKEN=PASSW
KEYSTONE_BRANCH=stable/juno
NOVA_BRANCH=stable/juno
NEUTRON_BRANCH=stable/juno
SWIFT_BRANCH=stable/juno
GLANCE_BRANCH=stable/juno
CINDER_BRANCH=stable/juno
HEAT_BRANCH=stable/juno
TROVE_BRANCH=stable/juno
HORIZON_BRANCH=stable/juno

Exported vars:
export OS_USERNAME=admin
export OS_PASSWORD=PASSW # password set on first node:
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.10.214:35357/v2.0

Glance uses local storage (directory): /opt/stack/data/glance/images
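
For a NotAuthenticated error, the first things to check are usually the
paste pipeline and the keystone_authtoken section of glance-api.conf.
A sketch of the relevant settings; the values are illustrative, derived
from the localrc above rather than taken from the failing node:

[paste_deploy]
flavor = keystone

[keystone_authtoken]
identity_uri = http://192.168.10.214:35357
admin_tenant_name = service
admin_user = glance
admin_password = PASSW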

Conf:
- glance-api:
[DEFAULT]
workers = 2
filesystem_store_datadir = /opt/stack/data/glance/images/
rabbit_hosts = 192.168.10.214
rpc_backend = glance.openstack.common.rpc.impl_kombu
notification_driver = messaging
use_syslog = False
sql_connection = mysql://root:rooter@127.0.0.1/glance?charset=utf8
debug = True
# Show more verbose log output (sets INFO log level output)
#verbose = False

# Show debugging output in logs (sets DEBUG log level output)
#debug = False

# Which backend scheme should Glance use by default if not specified
# in a request to add a new image to Glance? Known schemes are determined
# by the known_stores option below.
# Default: 'file'
default_store = file

# Maximum image size (in bytes) that may be uploaded through the
# Glance API server. Defaults to 1 TB.
# WARNING: this value should only be increased after careful consideration
# and must be set to a value under 8 EB (9223372036854775808).
#image_size_cap = 1099511627776

# Address to bind the API server
bind_host = 0.0.0.0

# Port to bind the API server to
bind_port = 9292

# Log to this file. Make sure you do not set the same log file for both the API
# and registry servers!
#
# If `log_file` is omitted and `use_syslog` is false, then log messages are
# sent to stdout as a fallback.
#log_file = /var/log/glance/api.log

# Backlog requests when creating socket
backlog = 4096

# TCP_KEEPIDLE value in seconds when creating socket.
# Not supported on OS X.
#tcp_keepidle = 600

# API to use for accessing data. Default value points to sqlalchemy
# package, it is also possible to use: glance.db.registry.api
# data_api = glance.db.sqlalchemy.api

# The number of child process workers that will be
# created to service API requests. The default will be
# equal to the number of CPUs available. (integer value)
#workers = 4

# Maximum line size of message headers to be accepted.
# max_header_line may need to be increased when using large tokens
# (typically those generated by the Keystone v3 API with big service
# catalogs)
# max_header_line = 16384

# Role used to identify an authenticated user as administrator
#admin_role = admin

# Allow unauthenticated users to access the API with read-only
# privileges. This only applies when using ContextMiddleware.
#allow_anonymous_access = False

# Allow access to version 1 of glance api
#enable_v1_api = True

# Allow access to version 2 of glance api
#enable_v2_api = True

# Return the URL that references where the data is stored on
# the backend storage system.  For example, if using the
# file system store a URL of 'file:///path/to/image' will
# be returned to the user in the 'direct_url' meta-data field.
# The default value is false.
#show_image_direct_url = False

# Send headers containing user and tenant information when making requests to
# the v1 glance registry. This allows the registry to function as if a user is
# authenticated without the need to authenticate a user itself using the
# auth_token middleware.
# The default value is false.
#send_identity_headers = False

# Supported values for the 'container_format' image attribute
#container_formats=ami,ari,aki,bare,ovf,ova

# Supported values for the 'disk_format' image attribute
#disk_formats=ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso

# Directory to use for lock files. Default to a temp directory
# (string value). This setting needs to be the same for both
# glance-scrubber and glance-api.
#lock_path=None

# Property Protections config file
# This file contains the rules for property protections and the roles/policies
# associated with it.
# If this config value is not specified, by default, property protections
# won't be enforced.
# If a value is specified and the file is not found, then the glance-api
# service will not start.
#property_protection_file =

# Specify whether 'roles' or 'policies' are used in the
# property_protection_file.
# The default value for property_protection_rule_format is 'roles'.
#property_protection_rule_format = roles

# This value sets what strategy will be used to determine the image location
# order. Currently two strategies are packaged with 

[Yahoo-eng-team] [Bug 1439689] [NEW] Cannot see update notifications from nova.api

2015-04-02 Thread Eduard Biceri-Matei
Public bug reported:

Running devstack K, trying to get notifications from nova.api,
especially the instance update.

Enabled in config:
[DEFAULT]
notification_driver=nova.openstack.common.notifier.rpc_notifier
notification_topics=notifications,monitor
notify_on_state_change=vm_and_task_state
notify_on_any_change=True
instance_usage_audit=True
instance_usage_audit_period=hour

I can see some notifications coming (rabbitmqctl list_queues | grep
notifications.info), but when I rename an instance (which calls into
servers.py -> update) I don't see the message count increasing, meaning
the message is not being sent to rabbitmq.
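
To rule out the consuming side, the queue can be drained directly. A
minimal sketch using kombu; the exchange name, queue name, broker URL
and credentials are assumptions based on common devstack defaults, not
values taken from this report:

import json
from kombu import Connection, Exchange, Queue

# 'nova' is the default control_exchange; notifications land in
# <topic>.<priority> queues such as notifications.info
exchange = Exchange('nova', type='topic', durable=False)
queue = Queue('notifications.info', exchange,
              routing_key='notifications.info', durable=False)

def on_message(body, message):
    # oslo envelopes wrap the payload as a JSON string under 'oslo.message'
    msg = json.loads(body['oslo.message']) if 'oslo.message' in body else body
    print(msg.get('event_type'))
    message.ack()

with Connection('amqp://guest:PASSW@192.168.10.214//') as conn:
    with conn.Consumer(queue, callbacks=[on_message]):
        while True:
            conn.drain_events()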

I can see the action being called:
2015-04-02 15:12:23.738 DEBUG nova.api.openstack.wsgi
[req-b8fc8e5e-c339-4329-9f01-3d6e57d2f14b admin admin] Action: 'update',
calling method: <bound method Controller.update of
<nova.api.openstack.compute.servers.Controller object at 0x7f9e9c382190>>,
body: {"server": {"name": "instance3"}}, but I don't see the notification.

Related: https://ask.openstack.org/en/question/62331/how-to-receive-nova-notification-computeinstanceupdate/

/opt/stack/nova# git log -1
commit fc5e6315afb7fc90c6f80bd1dfed0babfa979f2f
Merge: 86c8611 af4ce3e
Author: Jenkins <jenk...@review.openstack.org>
Date:   Fri Mar 27 04:10:19 2015 +0000

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439689

Title:
  Cannot see update notifications from nova.api

Status in OpenStack Compute (Nova):
  New

Bug description:
  Running devstack K, trying to get notifications from nova.api,
  especially the instance update.

  Enabled in config:
  [DEFAULT]
  notification_driver=nova.openstack.common.notifier.rpc_notifier
  notification_topics=notifications,monitor
  notify_on_state_change=vm_and_task_state
  notify_on_any_change=True
  instance_usage_audit=True
  instance_usage_audit_period=hour

  I can see some notifications coming (rabbitmqctl list_queues | grep
  notifications.info), but when I rename an instance (which calls into
  servers.py -> update) I don't see the message count increasing, meaning
  the message is not being sent to rabbitmq.

  I can see the action being called:
  2015-04-02 15:12:23.738 DEBUG nova.api.openstack.wsgi
[req-b8fc8e5e-c339-4329-9f01-3d6e57d2f14b admin admin] Action: 'update',
calling method: <bound method Controller.update of
<nova.api.openstack.compute.servers.Controller object at 0x7f9e9c382190>>,
body: {"server": {"name": "instance3"}}, but I don't see the notification.

  Related: https://ask.openstack.org/en/question/62331/how-to-receive-nova-notification-computeinstanceupdate/

  /opt/stack/nova# git log -1
  commit fc5e6315afb7fc90c6f80bd1dfed0babfa979f2f
  Merge: 86c8611 af4ce3e
  Author: Jenkins <jenk...@review.openstack.org>
  Date:   Fri Mar 27 04:10:19 2015 +0000

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439689/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424532] [NEW] setup() takes at least 2 arguments (1 given)

2015-02-22 Thread Eduard Biceri-Matei
Public bug reported:

Trying to install devstack K on a CI slave.

Local conf:
[[local|localrc]]
HOST_IP=10.140.0.3
FLAT_INTERFACE=eth0
FIXED_RANGE=10.150.0.0/16
FIXED_NETWORK_SIZE=255
FLOATING_RANGE=10.140.0.0/16 
PUBLIC_NETWORK_GATEWAY=10.140.0.3
NETWORK_GATEWAY=10.150.0.1
MULTI_HOST=0
SYSLOG=False
SCREEN_LOGDIR=/opt/stack/logs/screen-logs
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=*
MYSQL_PASSWORD=*
RABBIT_PASSWORD=*
SERVICE_PASSWORD=*
SERVICE_TOKEN=*
CINDER_REPO=https://review.openstack.org/openstack/cinder
CINDER_BRANCH=refs/changes/01/152401/21

Error during install:
2015-02-23 06:30:11.138 | ++ ls /opt/stack/status/stack/n-novnc.failure
2015-02-23 06:30:11.141 | + failures=/opt/stack/status/stack/n-novnc.failure
2015-02-23 06:30:11.141 | + for service in '$failures'
2015-02-23 06:30:11.142 | ++ basename /opt/stack/status/stack/n-novnc.failure
2015-02-23 06:30:11.143 | + service=n-novnc.failure
2015-02-23 06:30:11.143 | + service=n-novnc
2015-02-23 06:30:11.143 | + echo 'Error: Service n-novnc is not running'
2015-02-23 06:30:11.143 | Error: Service n-novnc is not running
2015-02-23 06:30:11.143 | + '[' -n /opt/stack/status/stack/n-novnc.failure ']'
2015-02-23 06:30:11.143 | + die 1494 'More details about the above errors can 
be found with screen, with ./rejoin-stack.sh'
2015-02-23 06:30:11.143 | + local exitcode=0
2015-02-23 06:30:11.143 | [Call Trace]
2015-02-23 06:30:11.143 | /opt/devstack/stack.sh:1297:service_check
2015-02-23 06:30:11.143 | /opt/devstack/functions-common:1494:die
2015-02-23 06:30:11.147 | [ERROR] /opt/devstack/functions-common:1494 More 
details about the above errors can be found with screen, with ./rejoin-stack.sh
2015-02-23 06:30:12.151 | Error on exit

Novnc screen:
stack@d-p-c-local-01-995:/opt/devstack$ /usr/local/bin/nova-novncproxy
--config-file /etc/nova/nova.conf --web /opt/stack/noVNC & echo $!
>/opt/stack/status/stack/n-novnc.pid; fg || echo "n-novnc failed to
start" | tee /opt/stack/status/stack/n-novnc.failure
[1] 10200
/usr/local/bin/nova-novncproxy --config-file /etc/nova/nova.conf --web 
/opt/stack/noVNC
Traceback (most recent call last):
  File "/usr/local/bin/nova-novncproxy", line 9, in <module>
    load_entry_point('nova==2015.1.dev387', 'console_scripts',
'nova-novncproxy')()
  File "/opt/stack/nova/nova/cmd/novncproxy.py", line 45, in main
    port=CONF.novncproxy_port)
  File "/opt/stack/nova/nova/cmd/baseproxy.py", line 57, in proxy
    logging.setup("nova")
TypeError: setup() takes at least 2 arguments (1 given)
n-novnc failed to start
stack@d-p-c-local-01-995:/opt/devstack$
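
For what it's worth, the traceback points at the oslo.log migration:
setup() now takes the configuration object as its first argument. A
sketch of the corrected call in nova/cmd/baseproxy.py, inferred from
the traceback rather than taken from the actual fix commit:

from oslo_config import cfg
from oslo_log import log as logging

# oslo.log signature: setup(conf, product_name, version=None)
logging.setup(cfg.CONF, "nova")   # was: logging.setup("nova")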

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1424532

Title:
  setup() takes at least 2 arguments (1 given)

Status in OpenStack Compute (Nova):
  New

Bug description:
  Trying to install devstack K on CI slave.

  Local conf:
  [[local|localrc]]
  HOST_IP=10.140.0.3
  FLAT_INTERFACE=eth0
  FIXED_RANGE=10.150.0.0/16
  FIXED_NETWORK_SIZE=255
  FLOATING_RANGE=10.140.0.0/16 
  PUBLIC_NETWORK_GATEWAY=10.140.0.3
  NETWORK_GATEWAY=10.150.0.1
  MULTI_HOST=0
  SYSLOG=False
  SCREEN_LOGDIR=/opt/stack/logs/screen-logs
  LOGFILE=/opt/stack/logs/stack.sh.log
  ADMIN_PASSWORD=*
  MYSQL_PASSWORD=*
  RABBIT_PASSWORD=*
  SERVICE_PASSWORD=*
  SERVICE_TOKEN=*
  CINDER_REPO=https://review.openstack.org/openstack/cinder
  CINDER_BRANCH=refs/changes/01/152401/21

  Error during install:
  2015-02-23 06:30:11.138 | ++ ls /opt/stack/status/stack/n-novnc.failure
  2015-02-23 06:30:11.141 | + failures=/opt/stack/status/stack/n-novnc.failure
  2015-02-23 06:30:11.141 | + for service in '$failures'
  2015-02-23 06:30:11.142 | ++ basename /opt/stack/status/stack/n-novnc.failure
  2015-02-23 06:30:11.143 | + service=n-novnc.failure
  2015-02-23 06:30:11.143 | + service=n-novnc
  2015-02-23 06:30:11.143 | + echo 'Error: Service n-novnc is not running'
  2015-02-23 06:30:11.143 | Error: Service n-novnc is not running
  2015-02-23 06:30:11.143 | + '[' -n /opt/stack/status/stack/n-novnc.failure ']'
  2015-02-23 06:30:11.143 | + die 1494 'More details about the above errors can 
be found with screen, with ./rejoin-stack.sh'
  2015-02-23 06:30:11.143 | + local exitcode=0
  2015-02-23 06:30:11.143 | [Call Trace]
  2015-02-23 06:30:11.143 | /opt/devstack/stack.sh:1297:service_check
  2015-02-23 06:30:11.143 | /opt/devstack/functions-common:1494:die
  2015-02-23 06:30:11.147 | [ERROR] /opt/devstack/functions-common:1494 More 
details about the above errors can be found with screen, with ./rejoin-stack.sh
  2015-02-23 06:30:12.151 | Error on exit

  Novnc screen:
  stack@d-p-c-local-01-995:/opt/devstack$ /usr/local/bin/nova-novncproxy
  --config-file /etc/nova/nova.conf --web /opt/stack/noVNC & echo $!
  >/opt/stack/status/stack/n-novnc.pid; fg || echo "n-novnc failed to
  start" | tee /opt/stack/status/st
  

[Yahoo-eng-team] [Bug 1404817] [NEW] On a /16 range floating ip is assigned /32 - cannot connect

2014-12-22 Thread Eduard Biceri-Matei
Public bug reported:

Devstack 2015.1, single box.

Trying to get floating IPs to work across two /24 blocks (on a /16 subnet)
Localrc:

HOST_IP=10.100.130.8 # public ip of host, 10.100.0.0/16 subnet
FLAT_INTERFACE=eth0
FIXED_RANGE=10.140.129.0/24 # private 
FIXED_NETWORK_SIZE=255
FLOATING_RANGE=10.100.129.0/16 #publicly reachable network for vms, also in 
10.100.0.0/16, but only 10.100.129.X block
MULTI_HOST=0
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=*
MYSQL_PASSWORD=*
RABBIT_PASSWORD=*
SERVICE_PASSWORD=*
SERVICE_TOKEN=*

Creating a guest:
assigned ips private=10.140.129.3, 10.100.129.2

On the host itself (ip a l)
inet 10.100.129.2/32 scope global br100
   valid_lft forever preferred_lft forever


Address is not accessible from the outside (because it's a /32, expected a /16)

Initial pool was created with range 10.100.0.0 - 10.100.255.255 and then
started assigning IPs from 10.100.0.1, conflicting with the router; the
IPs were also assigned as /32.

I deleted the pool and created a new one (10.100.129.0 - 10.100.129.255),
but the IPs are still /32.
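
For reference, the pool recreation was presumably done with
nova-manage; an illustrative invocation (flag spelling varies a little
between releases):

  nova-manage floating delete 10.100.0.0/16
  nova-manage floating create --ip_range 10.100.129.0/24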

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404817

Title:
  On a /16 range floating ip is assigned /32 - cannot connect

Status in OpenStack Compute (Nova):
  New

Bug description:
  Devstack 2015.1, single box.

  Trying to get floating IPs to work across two /24 blocks (on a /16 subnet)
  Localrc:

  HOST_IP=10.100.130.8 # public ip of host, 10.100.0.0/16 subnet
  FLAT_INTERFACE=eth0
  FIXED_RANGE=10.140.129.0/24 # private 
  FIXED_NETWORK_SIZE=255
  FLOATING_RANGE=10.100.129.0/16 #publicly reachable network for vms, also in 
10.100.0.0/16, but only 10.100.129.X block
  MULTI_HOST=0
  LOGFILE=/opt/stack/logs/stack.sh.log
  ADMIN_PASSWORD=*
  MYSQL_PASSWORD=*
  RABBIT_PASSWORD=*
  SERVICE_PASSWORD=*
  SERVICE_TOKEN=*

  Creating a guest:
  assigned ips private=10.140.129.3, 10.100.129.2

  On the host itself (ip a l)
  inet 10.100.129.2/32 scope global br100
 valid_lft forever preferred_lft forever

  
  Address is not accessible from the outside (because it's a /32, expected a 
/16)

  Initial pool was created with range 10.100.0.0 - 10.100.255.255 and
  then started assigning IPs from 10.100.0.1, conflicting with the
  router; the IPs were also assigned as /32.

  I deleted the pool and created a new one (10.100.129.0 -
  10.100.129.255), but the IPs are still /32.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404817] Re: On a /16 range floating ip is assigned /32 - cannot connect

2014-12-22 Thread Eduard Biceri-Matei
The issue was with the security group, not with addressing.
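
I.e. the default security group was blocking inbound traffic; the
usual fix is along these lines (shown for illustration, not taken from
the report):

  nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
  nova secgroup-add-rule default tcp 22 22 0.0.0.0/0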
 

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404817

Title:
  On a /16 range floating ip is assigned /32 - cannot connect

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Devstack 2015.1, single box.

  Trying to get floating IPs to work across two /24 blocks (on a /16 subnet)
  Localrc:

  HOST_IP=10.100.130.8 # public ip of host, 10.100.0.0/16 subnet
  FLAT_INTERFACE=eth0
  FIXED_RANGE=10.140.129.0/24 # private 
  FIXED_NETWORK_SIZE=255
  FLOATING_RANGE=10.100.129.0/16 #publicly reachable network for vms, also in 
10.100.0.0/16, but only 10.100.129.X block
  MULTI_HOST=0
  LOGFILE=/opt/stack/logs/stack.sh.log
  ADMIN_PASSWORD=*
  MYSQL_PASSWORD=*
  RABBIT_PASSWORD=*
  SERVICE_PASSWORD=*
  SERVICE_TOKEN=*

  Creating a guest:
  assigned ips private=10.140.129.3, 10.100.129.2

  On the host itself (ip a l)
  inet 10.100.129.2/32 scope global br100
 valid_lft forever preferred_lft forever

  
  Address is not accessible from the outside (because it's a /32, expected a 
/16)

  Initial pool was created with range 10.100.0.0 - 10.100.255.255 and
  then started assigning IPs from 10.100.0.1, conflicting with the
  router; the IPs were also assigned as /32.

  I deleted the pool and created a new one (10.100.129.0 -
  10.100.129.255), but the IPs are still /32.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394569] [NEW] KeyError extra_specs in _cold_migrate

2014-11-20 Thread Eduard Biceri-Matei
Public bug reported:

Trying a cold migrate:

stack@o11n200:/root$ nova list
+--------------------------------------+-------------------+--------+------------+-------------+--------------------+
| ID                                   | Name              | Status | Task State | Power State | Networks           |
+--------------------------------------+-------------------+--------+------------+-------------+--------------------+
| 476324ba-644c-472b-9de7-e4434d8211bc | fedora_instance_1 | ACTIVE | -          | Running     | private=10.140.0.2 |
| 3dae14aa-9168-4b5c-bb7f-3f38315dc791 | fedora_instance_2 | ACTIVE | -          | Running     | private=10.140.0.6 |
| e80b5863-f228-49ac-b35f-8d10a0d6e4eb | fedora_instance_3 | ACTIVE | -          | Running     | private=10.140.0.8 |
| 05b2ea29-ebcd-4a25-9582-6fbd223fba73 | fedora_instance_4 | ACTIVE | -          | Running     | private=10.140.0.7 |
+--------------------------------------+-------------------+--------+------------+-------------+--------------------+
stack@o11n200:/root$ nova migrate fedora_instance_3
ERROR (BadRequest): The server could not comply with the request since it is 
either malformed or otherwise incorrect. (HTTP 400) (Request-ID: 
req-86801856-94cf-43c1-b21e-cb084abf8aac)
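
The n-api trace below bottoms out in the conductor at
request_spec['instance_type'].pop('extra_specs'). A minimal sketch of
the kind of guard that avoids the KeyError, not necessarily the actual
upstream fix:

# nova/conductor/manager.py, _cold_migrate (sketch): pop with a
# default so an instance_type lacking extra_specs cannot raise
request_spec['instance_type'].pop('extra_specs', None)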

Screen (n-api)

2014-11-20 14:01:48.426 ERROR nova.api.openstack.compute.contrib.admin_actions
[req-86801856-94cf-43c1-b21e-cb084abf8aac admin admin] Error in migrate
u'\'extra_specs\'\nTraceback (most recent call last):\n\n  File
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py",
line 134, in _dispatch_and_reply\n    incoming.message))\n\n  File
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py",
line 177, in _dispatch\n    return self._do_dispatch(endpoint, method, ctxt,
args)\n\n  File
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py",
line 123, in _do_dispatch\n    result = getattr(endpoint, method)(ctxt,
**new_args)\n\n  File
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py",
line 139, in inner\n    return func(*args, **kwargs)\n\n  File
"/opt/stack/nova/nova/conductor/manager.py", line 490, in migrate_server\n
reservations)\n\n  File "/opt/stack/nova/nova/conductor/manager.py",
line 550, in _cold_migrate\n    quotas.rollback()\n\n  File
"/usr/local/lib/python2.7/dist-packages/oslo/utils/excutils.py", line 82,
in __exit__\n    six.reraise(self.type_, self.value, self.tb)\n\n  File
"/opt/stack/nova/nova/conductor/manager.py", line 536, in _cold_migrate\n
request_spec[\'instance_type\'].pop(\'extra_specs\')\n\nKeyError:
\'extra_specs\'\n'
2014-11-20 14:01:48.426 TRACE nova.api.openstack.compute.contrib.admin_actions Traceback (most recent call last):
2014-11-20 14:01:48.426 TRACE nova.api.openstack.compute.contrib.admin_actions   File "/opt/stack/nova/nova/api/openstack/compute/contrib/admin_actions.py", line 151, in _migrate
2014-11-20 14:01:48.426 TRACE nova.api.openstack.compute.contrib.admin_actions     self.compute_api.resize(req.environ['nova.context'], instance)
2014-11-20 14:01:48.426 TRACE nova.api.openstack.compute.contrib.admin_actions   File "/opt/stack/nova/nova/compute/api.py", line 221, in wrapped
2014-11-20 14:01:48.426 TRACE nova.api.openstack.compute.contrib.admin_actions     return func(self, context, target, *args, **kwargs)
2014-11-20 14:01:48.426 TRACE nova.api.openstack.compute.contrib.admin_actions   File "/opt/stack/nova/nova/compute/api.py", line 211, in inner
2014-11-20 14:01:48.426 TRACE nova.api.openstack.compute.contrib.admin_actions     return function(self, context, instance, *args, **kwargs)
2014-11-20 14:01:48.426 TRACE nova.api.openstack.compute.contrib.admin_actions   File "/opt/stack/nova/nova/compute/api.py", line 238, in _wrapped
2014-11-20 14:01:48.426 TRACE nova.api.openstack.compute.contrib.admin_actions     return fn(self, context, instance, *args, **kwargs)
2014-11-20 14:01:48.426 TRACE nova.api.openstack.compute.contrib.admin_actions   File "/opt/stack/nova/nova/compute/api.py", line 192, in inner
2014-11-20 14:01:48.426 TRACE nova.api.openstack.compute.contrib.admin_actions     return f(self, context, instance, *args, **kw)
2014-11-20 14:01:48.426 TRACE nova.api.openstack.compute.contrib.admin_actions   File "/opt/stack/nova/nova/compute/api.py", line 2598, in resize
2014-11-20 14:01:48.426 TRACE nova.api.openstack.compute.contrib.admin_actions     reservations=quotas.reservations or [])
2014-11-20 14:01:48.426 TRACE nova.api.openstack.compute.contrib.admin_actions   File "/opt/stack/nova/nova/conductor/api.py", line 345, in resize_instance
2014-11-20 14:01:48.426 TRACE nova.api.openstack.compute.contrib.admin_actions     None, None, reservations)
2014-11-20 14:01:48.426 TRACE nova.api.openstack.compute.contrib.admin_actions   File "/opt/stack/nova/nova/conductor/rpcapi.py", line 414, in migrate_server
2014-11-20 14:01:48.426 TRACE nova.api.openstack.compute.contrib.admin_actions     reservations=reservations)

[Yahoo-eng-team] [Bug 1394571] [NEW] AttributeError at /admin/instances/UUID/detail

2014-11-20 Thread Eduard Biceri-Matei
Public bug reported:

Created an instance, clicked on it to get to details:

AttributeError at /admin/instances/05b2ea29-ebcd-4a25-9582-6fbd223fba73/detail
display_name
Request Method: GET
Request URL:
http://10.130.11.200/admin/instances/05b2ea29-ebcd-4a25-9582-6fbd223fba73/detail
Django Version: 1.6.8
Exception Type: AttributeError
Exception Value:
display_name
Exception Location: /usr/local/lib/python2.7/dist-packages/cinderclient/openstack/common/apiclient/base.py in __getattr__, line 463
Python Executable:  /usr/bin/python
Python Version: 2.7.6
Python Path:
['/opt/stack/horizon/openstack_dashboard/wsgi/../..',
 '/opt/stack/keystone',
 '/opt/stack/glance',
 '/opt/stack/cinder',
 '/opt/stack/nova',
 '/opt/stack/horizon',
 '/opt/stack/heat',
 '/usr/lib/python2.7',
 '/usr/lib/python2.7/plat-x86_64-linux-gnu',
 '/usr/lib/python2.7/lib-tk',
 '/usr/lib/python2.7/lib-old',
 '/usr/lib/python2.7/lib-dynload',
 '/usr/local/lib/python2.7/dist-packages',
 '/usr/lib/python2.7/dist-packages',
 '/opt/stack/horizon/openstack_dashboard']
Server time: Thu, 20 Nov 2014 13:06:09 +0000
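
The AttributeError is raised by cinderclient's Resource.__getattr__:
the instance detail view reads display_name off each attached volume,
but Cinder's v2 API exposes that field as name. A sketch of the kind of
defensive lookup that papers over the mismatch, where 'volume' stands
for a cinderclient Volume object:

name = getattr(volume, 'display_name', None) or getattr(volume, 'name', '')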

Environment:


Request Method: GET
Request URL: 
http://IP/admin/instances/05b2ea29-ebcd-4a25-9582-6fbd223fba73/detail

Django Version: 1.6.8
Python Version: 2.7.6
Installed Applications:
['openstack_dashboard.dashboards.project',
 'openstack_dashboard.dashboards.admin',
 'openstack_dashboard.dashboards.identity',
 'openstack_dashboard.dashboards.settings',
 'openstack_dashboard',
 'django.contrib.contenttypes',
 'django.contrib.auth',
 'django.contrib.sessions',
 'django.contrib.messages',
 'django.contrib.staticfiles',
 'django.contrib.humanize',
 'django_pyscss',
 'openstack_dashboard.django_pyscss_fix',
 'compressor',
 'horizon',
 'openstack_auth']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
 'django.middleware.csrf.CsrfViewMiddleware',
 'django.contrib.sessions.middleware.SessionMiddleware',
 'django.contrib.auth.middleware.AuthenticationMiddleware',
 'django.contrib.messages.middleware.MessageMiddleware',
 'horizon.middleware.HorizonMiddleware',
 'django.middleware.doc.XViewMiddleware',
 'django.middleware.locale.LocaleMiddleware',
 'django.middleware.clickjacking.XFrameOptionsMiddleware')


Traceback:
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py" in get_response
  112. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/decorators.py" in dec
  36. return view_func(request, *args, **kwargs)
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/decorators.py" in dec
  84. return view_func(request, *args, **kwargs)
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/decorators.py" in dec
  52. return view_func(request, *args, **kwargs)
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/decorators.py" in dec
  36. return view_func(request, *args, **kwargs)
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/decorators.py" in dec
  84. return view_func(request, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py" in view
  69. return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py" in dispatch
  87. return handler(request, *args, **kwargs)
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tabs/views.py" in get
  71. context = self.get_context_data(**kwargs)
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/views.py" in get_context_data
  255. context = super(DetailView, self).get_context_data(**kwargs)
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tabs/views.py" in get_context_data
  56. exceptions.handle(self.request)
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/exceptions.py" in handle
  334. six.reraise(exc_type, exc_value, exc_traceback)
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tabs/views.py" in get_context_data
  51. tab_group = self.get_tabs(self.request, **kwargs)
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/views.py" in get_tabs
  303. instance = self.get_data()
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/utils/memoized.py" in wrapped
  90. value = cache[key] = func(*args, **kwargs)
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/views.py" in get_data
  289. redirect=redirect)
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/exceptions.py" in handle
  334. six.reraise(exc_type, exc_value, exc_traceback)
File