[Yahoo-eng-team] [Bug 1676486] Re: Update row in glance images doesn't work

2017-07-03 Thread Vitalii Gridnev
** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1676486

Title:
  Update row in glance images doesn't work

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Updating a row in the glance images table doesn't work; the row hangs in
  the 'Saving' state.

  Steps to reproduce:

  1. Create an image.
  2. Don't refresh the table. The image will stay in the 'Saving' or 'Queued' state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1676486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483254] [NEW] Swift fails to authenticate user by token

2015-08-10 Thread Vitalii
Public bug reported:

There are two issues with authentication.

1. Consider the following code.

client = swift_api_client.Connection(
    user=keystone.user.username,
    preauthurl=url,
    preauthtoken=keystone.user.token.id,
    tenant_name=keystone.user.tenant_name,
    insecure=insecure,
    cacert=cacert)


Since no ``auth_version`` is specified, version 1 is used by default.
In this case swiftclient will try to use ``get_auth_1_0``:

storage_url, token = get_auth_1_0(auth_url,
                                  user,
                                  key,
                                  kwargs.get('snet'),
                                  insecure=insecure)


As you can see, no keystone token is passed to that function, so authentication fails.
Furthermore, swiftclient fails miserably with an exception:

Traceback (most recent call last):
  File "./test_keystone.py", line 22, in <module>
    t.run('blah', user_id=483)
  File "/home/testproject/src/testproject_dev/stack/swift_task.py", line 75, in run
    return self.test(swift_client)
  File "/home/testproject/src/testproject_dev/stack/swift_task.py", line 45, in test
    swift_client.get_capabilities()
  File "/home/testproject/.virtualenvs/testproject/local/lib/python2.7/site-packages/swiftclient/client.py", line 1386, in get_capabilities
    url, _ = self.get_auth()
  File "/home/testproject/.virtualenvs/testproject/local/lib/python2.7/site-packages/swiftclient/client.py", line 1210, in get_auth
    insecure=self.insecure)
  File "/home/testproject/.virtualenvs/testproject/local/lib/python2.7/site-packages/swiftclient/client.py", line 377, in get_auth
    insecure=insecure)
  File "/home/testproject/.virtualenvs/testproject/local/lib/python2.7/site-packages/swiftclient/client.py", line 255, in get_auth_1_0
    parsed, conn = http_connection(url, insecure=insecure)
  File "/home/testproject/.virtualenvs/testproject/local/lib/python2.7/site-packages/swiftclient/client.py", line 249, in http_connection
    conn = HTTPConnection(*arg, **kwarg)
  File "/home/testproject/.virtualenvs/testproject/local/lib/python2.7/site-packages/swiftclient/client.py", line 156, in __init__
    self.parsed_url = urlparse(url)
  File "/usr/lib/python2.7/urlparse.py", line 143, in urlparse
    tuple = urlsplit(url, scheme, allow_fragments)
  File "/usr/lib/python2.7/urlparse.py", line 182, in urlsplit
    i = url.find(':')
AttributeError: 'NoneType' object has no attribute 'find'



2. If you specify auth_version = 2, the following code will be executed.

elif auth_version in AUTH_VERSIONS_V2 + AUTH_VERSIONS_V3:
    # We are allowing to specify a token/storage-url to re-use
    # without having to re-authenticate.
    if (os_options.get('object_storage_url') and
            os_options.get('auth_token')):
        return (os_options.get('object_storage_url'),
                os_options.get('auth_token'))


It checks whether the ``object_storage_url`` and ``auth_token`` options were provided.
Of course they are absent, since the initial value is ``os_options or {}``.

So in order to get it working, you have to specify those options manually:

client.os_options = {
    'object_storage_url': url,
    'auth_token': keystone.user.token.id,
}


Conclusion: the only way to use the swift client with existing tokens is the
following:

def get_swift_client(self, keystone):
    insecure = getattr(settings, 'OPENSTACK_SSL_NO_VERIFY', False)
    cacert = getattr(settings, 'OPENSTACK_SSL_CACERT', None)
    url = keystone.url_for('object-store')

    client = swift_api_client.Connection(
        user=keystone.user.username,
        preauthurl=url,
        preauthtoken=keystone.user.token.id,
        tenant_name=keystone.user.tenant_name,
        insecure=insecure,
        cacert=cacert,
        auth_version=2)

    client.os_options = {
        'object_storage_url': url,
        'auth_token': keystone.user.token.id,
    }
    return client


I think you should either fix the version handling or document the way it
actually works.
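
For reference, the same workaround can be expressed in a single call (a sketch,
assuming this swiftclient version simply stores the ``os_options`` dict passed
to the constructor, which is what the ``get_auth`` code shown above reads back):

client = swift_api_client.Connection(
    user=keystone.user.username,
    preauthurl=url,
    preauthtoken=keystone.user.token.id,
    tenant_name=keystone.user.tenant_name,
    auth_version=2,
    insecure=insecure,
    cacert=cacert,
    # passed explicitly, because preauthurl/preauthtoken alone are not
    # copied into os_options by the code shown above
    os_options={
        'object_storage_url': url,
        'auth_token': keystone.user.token.id,
    })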

** Affects: python-swiftclient
 Importance: Undecided
 Status: New

** Project changed: nova => python-swiftclient

** Description changed:

  Ther'a 2 issues with authentication.
  
  1. Consider the following code.
  
- client = swift_api_client.Connection(
- user=keystone.user.username,
- preauthurl=url,
- preauthtoken=keystone.user.token.id,
- tenant_name=keystone.user.tenant_name,
- insecure=insecure,
- cacert=cacert)
+ client = swift_api_client.Connection(
+ user=keystone.user.username,
+ preauthurl=url,
+ preauthtoken=keystone.user.token.id,
+ tenant_name=keystone.user.tenant_name,
+ 

[Yahoo-eng-team] [Bug 1478033] [NEW] neutron allows to create invalid floating IP

2015-07-24 Thread Vitalii
Public bug reported:

% neutron floatingip-create ISP_NET --floating-ip-address 31.28.168.167

But this is a broadcast IP address.

% ping 31.28.168.167
Do you want to ping broadcast? Then -b

** Affects: neutron
 Importance: Undecided
 Assignee: Vitalii (vb-d)
 Status: Invalid

** Project changed: cinder => neutron

** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron
 Assignee: (unassigned) => Vitalii (vb-d)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478033

Title:
  neutron allows to create invalid floating IP

Status in neutron:
  Invalid

Bug description:
  % neutron floatingip-create ISP_NET --floating-ip-address
  31.28.168.167

  But this is a broadcast IP address.

  % ping 31.28.168.167
  Do you want to ping broadcast? Then -b

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478033/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478080] [NEW] neutron allows to create invalid floating IP

2015-07-24 Thread Vitalii
Public bug reported:

Neutron allows creating the floating IP 31.28.168.167, which is a broadcast
address.

I think it should raise an exception.
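
A minimal sketch of the validation I have in mind, using netaddr (the subnet
CIDR is not part of this report, so the /29 below is only an assumption for
illustration):

import netaddr

def is_broadcast_address(ip, cidr):
    # True when `ip` is the broadcast address of the subnet `cidr`
    return netaddr.IPAddress(ip) == netaddr.IPNetwork(cidr).broadcast

# 31.28.168.167 is the broadcast address of e.g. 31.28.168.160/29
print(is_broadcast_address('31.28.168.167', '31.28.168.160/29'))  # True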

** Affects: neutron
 Importance: Undecided
 Status: New

** Project changed: nova => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1478080

Title:
  neutron allows to create invalid floating IP

Status in neutron:
  New

Bug description:
  Neutron allows creating the floating IP 31.28.168.167, which is a broadcast
  address.

  I think it should raise an exception.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478080/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470502] [NEW] nova uses outdated userdata

2015-07-01 Thread Vitalii
Public bug reported:

I stopped neutron, dropped its database, created a new one, and created a new
external network and router.

The new router has a different IP address.

I restarted all nova services.

I created a new instance. During boot I can see that cloud-init received the
old network settings from nova.


P.S. I've dumped the nova database, and the only table where I could see old
IP addresses was instance_info_caches.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470502

Title:
  nova uses outdated userdata

Status in OpenStack Compute (Nova):
  New

Bug description:
  I stopped neutron, dropped its database, created a new one, and created a
  new external network and router.

  The new router has a different IP address.

  I restarted all nova services.

  I created a new instance. During boot I can see that cloud-init received
  the old network settings from nova.


  P.S. I've dumped the nova database, and the only table where I could see
  old IP addresses was instance_info_caches.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470502/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470502] Re: nova uses outdated userdata

2015-07-01 Thread Vitalii
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470502

Title:
  nova uses outdated userdata

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I stopped neutron, dropped its database, created a new one, and created a
  new external network and router.

  The new router has a different IP address.

  I restarted all nova services.

  I created a new instance. During boot I can see that cloud-init received
  the old network settings from nova.


  P.S. I've dumped the nova database, and the only table where I could see
  old IP addresses was instance_info_caches.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470502/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446741] [NEW] global value for shutdown_timeout

2015-04-21 Thread Vitalii
Public bug reported:

There's a shutdown_timeout defined in nova.conf, which implies that all
virtual machines in the cloud should manage to shut down within that time, no
matter what OS they have installed. That is not logical from my point of view.

You have the `image_os_shutdown_timeout` property, which is used instead of
shutdown_timeout, but it is stored in system_metadata and cannot be
overridden.
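
For context, this is roughly how the per-image value is set (a sketch using
the glance v2 client; the endpoint, token and image id are placeholders). The
point above is that once copied into the instance's system_metadata it cannot
be changed any more:

from glanceclient import Client

glance = Client('2', endpoint='http://controller:9292', token='ADMIN_TOKEN')
# the image property is copied into the instance's system_metadata
# under the image_ prefix, i.e. image_os_shutdown_timeout
glance.images.update('IMAGE_ID', os_shutdown_timeout='10')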

As a result, the user only has ACPI shutdown and no access to libvirt's
destroy method.

This is what annoys me the most about AWS: you are unable to forcefully power
off a VPS when it hangs!


I use Juno and I think this is a bug.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1446741

Title:
  global value for shutdown_timeout

Status in OpenStack Compute (Nova):
  New

Bug description:
  There's a shutdown_timeout defined in nova.conf, which implies that all
  virtual machines in the cloud should manage to shut down within that time,
  no matter what OS they have installed. That is not logical from my point of
  view.

  You have the `image_os_shutdown_timeout` property, which is used instead of
  shutdown_timeout, but it is stored in system_metadata and cannot be
  overridden.

  As a result, the user only has ACPI shutdown and no access to libvirt's
  destroy method.

  This is what annoys me the most about AWS: you are unable to forcefully
  power off a VPS when it hangs!

  
  I use Juno and I think this is a bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1446741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431927] [NEW] neutron client parses arguments incorrectly

2015-03-13 Thread Vitalii
Public bug reported:

The following command worked in Icehouse. It does not work in Juno
anymore.

neutron net-create --tenant-id 7f41e236d56c4e9fa074a9185528cad2
--provider:network_type=flat --provider:physical_network=default
--router:external=True GATEWAY_NET

It returns an error:

neutron net-create: error: argument --router:external: ignored explicit argument u'True'


If you use spaces instead of '=':
neutron net-create --tenant-id e4e6d468a3ce4e8c8d6de73aa394e395 --provider:network_type flat --provider:physical_network default --router:external true GATEWAY_NET

It raises the following:

Invalid values_specs GATEWAY_NET


It only works if you use --name GATEWAY_NET. But the documentation
(neutron help net-create) says that the name is a positional argument!

positional arguments:
  NAME  Name of network to create.
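
As a workaround, the same network can be created through the Python client,
which bypasses the CLI argument parsing entirely (a sketch; credentials and
auth_url are placeholders, the other values are taken from the command above):

from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(username='admin',
                                password='secret',
                                tenant_name='admin',
                                auth_url='http://controller:5000/v2.0')
neutron.create_network({'network': {
    'name': 'GATEWAY_NET',
    'tenant_id': '7f41e236d56c4e9fa074a9185528cad2',
    'provider:network_type': 'flat',
    'provider:physical_network': 'default',
    'router:external': True,
}})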


** Affects: neutron
 Importance: Undecided
 Status: New

** Project changed: nova => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431927

Title:
  neutron client parses arguments incorrectly

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The following command worked in Icehouse. It does not work in Juno
  anymore.

  neutron net-create --tenant-id 7f41e236d56c4e9fa074a9185528cad2
  --provider:network_type=flat --provider:physical_network=default
  --router:external=True GATEWAY_NET

  It returns an error:

  neutron net-create: error: argument --router:external: ignored explicit argument u'True'
  

  If you use spaces instead of '=':
  neutron net-create --tenant-id e4e6d468a3ce4e8c8d6de73aa394e395 --provider:network_type flat --provider:physical_network default --router:external true GATEWAY_NET

  It raises the following:
  
  Invalid values_specs GATEWAY_NET
  

  It only works if you use --name GATEWAY_NET. But the documentation
  (neutron help net-create) says that the name is a positional argument!
  
  positional arguments:
NAME  Name of network to create.
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1431927/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426093] [NEW] nova logging error

2015-02-26 Thread Vitalii
Public bug reported:

I receive a weird exception when I try to perform any action with nova
(nova list, for example).
I checked; the error that should have been logged is 'Authorization failed
for token'.


Logged from file auth_token.py, line 825
Traceback (most recent call last):
  File /usr/lib/python2.7/logging/__init__.py, line 859, in emit
msg = self.format(record)
  File 
/home/xentime/.virtualenvs/xentime/local/lib/python2.7/site-packages/nova/openstack/common/log.py,
 line 706, in format
return logging.StreamHandler.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 732, in format
return fmt.format(record)
  File 
/home/xentime/.virtualenvs/xentime/local/lib/python2.7/site-packages/nova/openstack/common/log.py,
 line 670, in format
return logging.Formatter.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 471, in format
record.message = record.getMessage()
  File /usr/lib/python2.7/logging/__init__.py, line 335, in getMessage
msg = msg % self.args
TypeError: __str__ returned non-string (type Error)
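
The failure is easy to reproduce outside of nova (a minimal sketch): logging
formats the message lazily, and when an argument's __str__ returns a
non-string, getMessage() raises the TypeError above and the real error text is
lost:

import logging

class Error(Exception):
    def __str__(self):
        return self  # buggy: __str__ must return a string

logging.basicConfig()
# the handler swallows the TypeError and reports a logging error to stderr
# instead of emitting the intended message
logging.getLogger('auth_token').error('Authorization failed for token: %s',
                                      Error('bad token'))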


** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1426093

Title:
  nova logging error

Status in OpenStack Compute (Nova):
  New

Bug description:
  I receive a weird exception when I try to perform any action with nova
  (nova list, for example).
  I checked; the error that should have been logged is 'Authorization failed
  for token'.

  
  Logged from file auth_token.py, line 825
  Traceback (most recent call last):
File /usr/lib/python2.7/logging/__init__.py, line 859, in emit
  msg = self.format(record)
File 
/home/xentime/.virtualenvs/xentime/local/lib/python2.7/site-packages/nova/openstack/common/log.py,
 line 706, in format
  return logging.StreamHandler.format(self, record)
File /usr/lib/python2.7/logging/__init__.py, line 732, in format
  return fmt.format(record)
File 
/home/xentime/.virtualenvs/xentime/local/lib/python2.7/site-packages/nova/openstack/common/log.py,
 line 670, in format
  return logging.Formatter.format(self, record)
File /usr/lib/python2.7/logging/__init__.py, line 471, in format
  record.message = record.getMessage()
File /usr/lib/python2.7/logging/__init__.py, line 335, in getMessage
  msg = msg % self.args
  TypeError: __str__ returned non-string (type Error)
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1426093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396664] [NEW] nova raises InvalidBDMBootSequence when boot index 0

2014-11-26 Thread Vitalii
Public bug reported:

I tried to boot a server from a volume using the API, with the following
'block_device_mapping_v2' argument:
{
    "boot_index": 0,
    "uuid": image_obj,
    "source_type": "image",
    "volume_size": flavor_obj.disk,
    "destination_type": "volume",
    "delete_on_termination": False
}

Nova raised an InvalidBDMBootSequence exception. I checked the nova sources
and found that _subsequent_list returns None when I use boot index 0:


def _validate_bdm(self, context, instance, instance_type, all_mappings):
    def _subsequent_list(l):
        return all(el + 1 == l[i + 1] for i, el in enumerate(l[:-1]))

    # Make sure that the boot indexes make sense
    boot_indexes = sorted([bdm['boot_index']
                           for bdm in all_mappings
                           if bdm.get('boot_index') is not None
                           and bdm.get('boot_index') >= 0])

    if 0 not in boot_indexes or not _subsequent_list(boot_indexes):
        raise exception.InvalidBDMBootSequence()
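
For reference, a quick standalone run of the helper (with the comparison
restored to >= 0, which the mail formatting mangled) may help narrow down
where the rejection comes from:

def _subsequent_list(l):
    return all(el + 1 == l[i + 1] for i, el in enumerate(l[:-1]))

print(_subsequent_list([0]))        # True - a lone boot_index 0 is consecutive
print(_subsequent_list([0, 1, 2]))  # True
print(_subsequent_list([0, 2]))     # False - a gap fails the check
print(_subsequent_list([1]))        # True, but 0 is missing, so the
                                    # "0 not in boot_indexes" branch raises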


Maybe I am missing the use case, but when I use boot_index 1 the server
boots, but two block devices are attached instead of one: /dev/vda and
/dev/vdb. Both point to the same device.


P.S. I use OpenStack Juno and cinder with LVM.

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

- I tried to boot server with using API. The following 
'block_device_mapping_v2' argument
+ I tried to boot server from volume using API. The following 
'block_device_mapping_v2' argument
  {
-   boot_index: 0,
-   uuid: image_obj,
-   source_type: image,
-   volume_size: flavor_obj.disk,
-   destination_type: volume,
-   delete_on_termination: False
+   boot_index: 0,
+   uuid: image_obj,
+   source_type: image,
+   volume_size: flavor_obj.disk,
+   destination_type: volume,
+   delete_on_termination: False
  }
  
  Nova raised InvalidBDMBootSequence exception. I checked  nova sources and 
found that _subsequent_list returns None,
  when I use boot index 0:
  
  
- def _validate_bdm(self, context, instance, instance_type, all_mappings):
- def _subsequent_list(l):
- return all(el + 1 == l[i + 1] for i, el in enumerate(l[:-1]))
- 
- # Make sure that the boot indexes make sense
- boot_indexes = sorted([bdm['boot_index']
-for bdm in all_mappings
-if bdm.get('boot_index') is not None
-and bdm.get('boot_index') >= 0])
- 
- if 0 not in boot_indexes or not _subsequent_list(boot_indexes):
- raise exception.InvalidBDMBootSequence()
+ def _validate_bdm(self, context, instance, instance_type, all_mappings):
+ def _subsequent_list(l):
+ return all(el + 1 == l[i + 1] for i, el in enumerate(l[:-1]))
+ 
+ # Make sure that the boot indexes make sense
+ boot_indexes = sorted([bdm['boot_index']
+    for bdm in all_mappings
+    if bdm.get('boot_index') is not None
+    and bdm.get('boot_index') >= 0])
+ 
+ if 0 not in boot_indexes or not _subsequent_list(boot_indexes):
+ raise exception.InvalidBDMBootSequence()
  
  
  Maybe I don't know use case, but when I use boot_index 1, server boots,
  but two block devices attached instead of one: /dev/vda and /dev/vdb.
  Both point to the same device.
+ 
+ 
+ P.S. I use OpenStack Juno and cinder with LVM.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1396664

Title:
  nova raises InvalidBDMBootSequence when boot index 0

Status in OpenStack Compute (Nova):
  New

Bug description:
  I tried to boot a server from a volume using the API, with the following
  'block_device_mapping_v2' argument:
  {
      "boot_index": 0,
      "uuid": image_obj,
      "source_type": "image",
      "volume_size": flavor_obj.disk,
      "destination_type": "volume",
      "delete_on_termination": False
  }

  Nova raised an InvalidBDMBootSequence exception. I checked the nova sources
  and found that _subsequent_list returns None when I use boot index 0:

  
  def _validate_bdm(self, context, instance, instance_type, all_mappings):
      def _subsequent_list(l):
          return all(el + 1 == l[i + 1] for i, el in enumerate(l[:-1]))

      # Make sure that the boot indexes make sense
      boot_indexes = sorted([bdm['boot_index']
                             for bdm in all_mappings
                             if bdm.get('boot_index') is not None
                             and bdm.get('boot_index') >= 0])

      if 0 not in boot_indexes or not _subsequent_list(boot_indexes):
          raise exception.InvalidBDMBootSequence()
  

  Maybe I am missing the use case, but when I use boot_index 1 the server
  boots, but two block devices are attached instead of one: /dev/vda and
  

[Yahoo-eng-team] [Bug 1389728] [NEW] cinder do not import image from glance

2014-11-05 Thread Vitalii
Public bug reported:

Steps to reproduce:

1. Create a raw/bare glance image.
2. Create an LVM volume group.
3. Boot an instance with nova:

nova --debug boot --flavor m1.small --block-device source=image,id=<GLANCE IMAGE ID>,dest=volume,size=10,shutdown=preserve,bootindex=0 --nic net-id=<NEUTRON NET ID> test

In the cinder volume log I can see the following:

2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 134, in _dispatch_and_reply
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 177, in _dispatch
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 123, in _do_dispatch
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/osprofiler/profiler.py,
 line 105, in wrapper
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher return 
f(*args, **kwargs)
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/cinder/volume/manager.py,
 line 381, in create_volume
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher 
_run_flow()
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/cinder/volume/manager.py,
 line 374, in _run_flow
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher 
flow_engine.run()
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py,
 line 99, in run
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher for 
_state in self.run_iter():
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py,
 line 156, in run_iter
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher 
misc.Failure.reraise_if_any(failures.values())
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/taskflow/utils/misc.py,
 line 733, in reraise_if_any
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher 
failures[0].reraise()
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/taskflow/utils/misc.py,
 line 740, in reraise
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(*self._exc_info)
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py,
 line 35, in _execute_task
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher result = 
task.execute(**arguments)
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py,
 line 638, in execute
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher 
**volume_spec)
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py,
 line 590, in _create_from_image
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher image_id, 
image_location, image_service)
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py,
 line 492, in _copy_image_to_volume
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher raise 
exception.ImageCopyFailure(reason=ex.stderr)
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher 
ImageCopyFailure: Failed to copy image to volume: qemu-img: error writing 
zeroes at sector 0: Invalid argument

The reason is this code:
cinder/image/image_utils.py:86

Several lines above, in the same function, there's:

    cmd = ('qemu-img', 'convert',
           '-O', out_format, source, dest)


[Yahoo-eng-team] [Bug 1385318] [NEW] Nova fails to add fixed IP

2014-10-24 Thread Vitalii
Public bug reported:

I created an instance with one NIC attached.
Then I tried to attach another NIC:

nova add-fixed-ip  ServerId NetworkId

Nova compute raises an exception:

2014-10-24 15:40:33.925 31955 ERROR oslo.messaging.rpc.dispatcher 
[req-43570a05-937a-4ddf-a0e9-e05d42660817 ] Exception during message handling: 
Network could not be found for instance 09b6e137-37d6-475d-992c-bdcb7d3cb841.
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 134, in _dispatch_and_reply
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 177, in _dispatch
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 123, in _do_dispatch
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 414, in decorated_function
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/exception.py,
 line 88, in wrapped
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher payload)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py,
 line 82, in __exit__
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/exception.py,
 line 71, in wrapped
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 326, in decorated_function
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py,
 line 82, in __exit__
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 314, in decorated_function
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 3915, in add_fixed_ip_to_instance
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
network_id)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/network/base_api.py,
 line 61, in wrapper
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher res = 
f(self, context, *args, **kwargs)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/network/neutronv2/api.py,
 line 684, in add_fixed_ip_to_instance
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
instance_id=instance['uuid'])
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
NetworkNotFoundForInstance: Network could not be found for instance 
09b6e137-37d6-475d-992c-bdcb7d3cb841.
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 


In nova/network/neutronv2/api.py there's this line:

neutronv2.get_client(context).list_ports(**search_opts)

It cannot find the port by device_id. Probably because nova does not create
the port?
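
A quick way to verify that by hand (a sketch with python-neutronclient;
credentials and auth_url are placeholders, the device_id is the instance uuid
from the traceback above):

from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(username='admin',
                                password='secret',
                                tenant_name='admin',
                                auth_url='http://controller:5000/v2.0')
ports = neutron.list_ports(device_id='09b6e137-37d6-475d-992c-bdcb7d3cb841')
# an empty list here means no port is bound to the instance, which is
# exactly the condition that makes add_fixed_ip_to_instance raise
print(ports['ports'])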

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.

[Yahoo-eng-team] [Bug 1384670] [NEW] neutron.conf missing policy_file option

2014-10-23 Thread Vitalii
Public bug reported:

In Juno, there's no policy_file option in neutron.conf, which causes an
exception when I start the services:
2014-10-23 12:18:25.013 32561 TRACE neutron PolicyFileNotFound: Policy
configuration policy.json could not be found

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384670

Title:
  neutron.conf missing policy_file option

Status in OpenStack Compute (Nova):
  New

Bug description:
  In Juno, there's no policy_file option in neutron.conf, which causes an
  exception when I start the services:
  2014-10-23 12:18:25.013 32561 TRACE neutron PolicyFileNotFound: Policy
  configuration policy.json could not be found

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1384670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384731] [NEW] missing neutron l3 filter causes Traceback

2014-10-23 Thread Vitalii
Public bug reported:

In the Juno release, /rootwrap.d/l3.filters is missing a filter that allows
killing a process with -9.

At least neutron-l3-agent raises an exception when I restart it.

2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent 
self._destroy_namespace(ns)
2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/neutron/agent/l3_agent.py,
 line 642, in _destroy_namespace
2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent 
self._destroy_metadata_proxy(ns[len(NS_PREFIX):], ns)
2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/neutron/agent/l3_agent.py,
 line 825, in _destroy_metadata_proxy
2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent pm.disable()
2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/neutron/agent/linux/external_process.py,
 line 91, in disable
2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent 
utils.execute(cmd, self.root_helper)
2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py,
 line 84, in execute
2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent raise 
RuntimeError(m)
2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent RuntimeError:
2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent Command: ['sudo', 
'neutron-rootwrap', '/home/vb/etc/neutron/rootwrap.conf', 'kill', '-9', '2913']
2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent Exit code: 99
2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent Stdout: ''
2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent Stderr: 
'/usr/sbin/neutron-rootwrap: Unauthorized command: kill -9 2913 (no filter 
matched)\n'
2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  In Juno release /rootwrap.d/l3.filters missing filter, which enables to
  kill any process with -9
  
  At least neutron-l3-agent raises exception when I restart it.
  
- I added the following line and exception disappeared:
- kill_N: KillFilter, root, -9, -HUP
+ 2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent 
self._destroy_namespace(ns)
+ 2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/neutron/agent/l3_agent.py,
 line 642, in _destroy_namespace
+ 2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent 
self._destroy_metadata_proxy(ns[len(NS_PREFIX):], ns)
+ 2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/neutron/agent/l3_agent.py,
 line 825, in _destroy_metadata_proxy
+ 2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent pm.disable()
+ 2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/neutron/agent/linux/external_process.py,
 line 91, in disable
+ 2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent 
utils.execute(cmd, self.root_helper)
+ 2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py,
 line 84, in execute
+ 2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent raise 
RuntimeError(m)
+ 2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent RuntimeError:
+ 2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent Command: ['sudo', 
'neutron-rootwrap', '/home/vb/etc/neutron/rootwrap.conf', 'kill', '-9', '2913']
+ 2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent Exit code: 99
+ 2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent Stdout: ''
+ 2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent Stderr: 
'/usr/sbin/neutron-rootwrap: Unauthorized command: kill -9 2913 (no filter 
matched)\n'
+ 2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent

** Project changed: nova => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384731

Title:
  missing neutron l3 filter causes Traceback

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In the Juno release, /rootwrap.d/l3.filters is missing a filter that
  allows killing a process with -9.

  At least neutron-l3-agent raises an exception when I restart it.

  2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent 
self._destroy_namespace(ns)
  2014-10-23 15:01:47.664 11294 TRACE neutron.agent.l3_agent   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/neutron/agent/l3_agent.py,
 line 642, in _destroy_namespace
  

[Yahoo-eng-team] [Bug 1384231] [NEW] The number of neutron-ns-metadata-proxy processes grow uncontrollably

2014-10-22 Thread Vitalii
Public bug reported:

During testing and development I had to add and remove instances, routers, and
ports often. I also restarted all neutron services often (I use supervisor).
After about one week I noticed that I had run out of free RAM. It turned out
there were tens of neutron-ns-metadata-proxy processes hanging around. After I
killed them and restarted neutron, I got 4 GB of RAM back.


...
20537 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/a6f6aeaa-c325-42d6-95e2-d55d410fc5d9.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=a6f6aeaa-c325-42d6-95e2-d55d410fc5d9 
--state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
20816 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/a4451c09-1655-4aea-86d6-849e563f4731.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=a4451c09-1655-4aea-86d6-849e563f4731 
--state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
30098 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/b122a6ba-5614-4f1c-b0c6-95c6645dbab0.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=b122a6ba-5614-4f1c-b0c6-95c6645dbab0 
--state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
30557 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/82ebd418-b156-49bf-9633-af3121fc12f7.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=82ebd418-b156-49bf-9633-af3121fc12f7 
--state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
31072 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/d426f959-bfc5-4012-b89e-aec64cc2cf03.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=d426f959-bfc5-4012-b89e-aec64cc2cf03 
--state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
31378 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/b8dc2dd7-18cb-4a56-9690-fc79248c5532.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=b8dc2dd7-18cb-4a56-9690-fc79248c5532 
--state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
...


** Affects: neutron
 Importance: Undecided
 Status: New

** Project changed: nova => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384231

Title:
  The number of neutron-ns-metadata-proxy processes grow uncontrollably

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  During testing and development I had to add and remove instances, routers,
  and ports often. I also restarted all neutron services often (I use
  supervisor).
  After about one week I noticed that I had run out of free RAM. It turned
  out there were tens of neutron-ns-metadata-proxy processes hanging around.
  After I killed them and restarted neutron, I got 4 GB of RAM back.

  
  ...
  20537 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/a6f6aeaa-c325-42d6-95e2-d55d410fc5d9.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=a6f6aeaa-c325-42d6-95e2-d55d410fc5d9 
--state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
  20816 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/a4451c09-1655-4aea-86d6-849e563f4731.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=a4451c09-1655-4aea-86d6-849e563f4731 
--state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
  30098 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/b122a6ba-5614-4f1c-b0c6-95c6645dbab0.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=b122a6ba-5614-4f1c-b0c6-95c6645dbab0 
--state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
  30557 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/82ebd418-b156-49bf-9633-af3121fc12f7.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 

[Yahoo-eng-team] [Bug 1384235] [NEW] Nova raises exception about existing libvirt filter

2014-10-22 Thread Vitalii
Public bug reported:

Sometimes, when I start an instance, nova raises an exception saying that a
filter like nova-instance-instance-000b-52540039740a already exists.

So I have to execute `virsh nwfilter-undefine` and try to boot the instance
again.

In libvirt logs I can see the following:

2014-10-22 12:20:13.816+: 4693: error : virNWFilterObjAssignDef:3068 : 
operation failed: filter 'nova-instance-instance-000b-52540039740a' already 
exists with uuid af47118d-1934-4ca7-8a71-c6ae9a6499aa
2014-10-22 12:20:13.930+: 4688: error : virNetSocketReadWire:1523 : End of 
file while reading data: Input/output error

I use libvirt 1.2.8-3 ( Debian )

I have the following services defined:

service_plugins =
neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.firewall.fwaas_plugin.FirewallPlugin

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

- Sometimes, when I start instance, nova raises exception, that 
+ Sometimes, when I start instance, nova raises exception, that
  filter like nova-instance-instance-000b-52540039740a already exists.
  
  So I have to execute `virsh nwfilter-undefine` and try to boot instance
  again:
  
  In libvirt logs I can see the following:
  
  2014-10-22 12:20:13.816+: 4693: error : virNWFilterObjAssignDef:3068 : 
operation failed: filter 'nova-instance-instance-000b-52540039740a' already 
exists with uuid af47118d-1934-4ca7-8a71-c6ae9a6499aa
  2014-10-22 12:20:13.930+: 4688: error : virNetSocketReadWire:1523 : End 
of file while reading data: Input/output error
  
  I use libvirt 1.2.8-3 ( Debian )
+ 
+ I have the following services defined:
+ 
+ service_plugins =
+ 
neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.firewall.fwaas_plugin.FirewallPlugin

** Project changed: nova => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384235

Title:
  Nova raises exception about existing libvirt filter

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Sometimes, when I start an instance, nova raises an exception saying that
  a filter like nova-instance-instance-000b-52540039740a already exists.

  So I have to execute `virsh nwfilter-undefine` and try to boot the
  instance again.

  In libvirt logs I can see the following:

  2014-10-22 12:20:13.816+: 4693: error : virNWFilterObjAssignDef:3068 : 
operation failed: filter 'nova-instance-instance-000b-52540039740a' already 
exists with uuid af47118d-1934-4ca7-8a71-c6ae9a6499aa
  2014-10-22 12:20:13.930+: 4688: error : virNetSocketReadWire:1523 : End 
of file while reading data: Input/output error

  I use libvirt 1.2.8-3 ( Debian )

  I have the following services defined:

  service_plugins =
  
neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.firewall.fwaas_plugin.FirewallPlugin

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1384235/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384240] [NEW] It does not delete tap devices

2014-10-22 Thread Vitalii
Public bug reported:

After adding/removing routers, nets, and subnets many times for testing
purposes, I found that I had 45 interfaces:


b# ifconfig|grep encap:Ethernet
br-eth0   Link encap:Ethernet  HWaddr d0:67:e5:03:18:0f  
br-eth0:1 Link encap:Ethernet  HWaddr d0:67:e5:03:18:0f  
eth0  Link encap:Ethernet  HWaddr d0:67:e5:03:18:0f  
int-br-eth0 Link encap:Ethernet  HWaddr 82:ad:59:16:ca:da  
phy-br-eth0 Link encap:Ethernet  HWaddr da:e4:8d:cd:38:43  
qbr056025bc-49 Link encap:Ethernet  HWaddr fa:c1:6d:fe:5e:b8  
qbr3dccedf9-e8 Link encap:Ethernet  HWaddr 02:42:c0:73:d2:0b  
qbr422898c8-df Link encap:Ethernet  HWaddr 76:b0:42:f5:ad:f9  
qbr69eb1f6a-71 Link encap:Ethernet  HWaddr d6:d2:7f:ee:1a:34  
qbr750aa557-b7 Link encap:Ethernet  HWaddr 5a:2e:b3:f3:30:b1  
qbr81eb2deb-b7 Link encap:Ethernet  HWaddr f2:d0:95:19:fb:c4  
qbr971c890b-8f Link encap:Ethernet  HWaddr ce:a1:c9:f0:b6:24  
qbr9ab03868-2f Link encap:Ethernet  HWaddr 2a:d5:8a:61:0a:a1  
qbrcfd38872-d1 Link encap:Ethernet  HWaddr 16:7d:ad:1b:4b:82  
qbrde55f70b-7d Link encap:Ethernet  HWaddr 2a:8c:f5:2b:49:99  
qbrdead1da5-98 Link encap:Ethernet  HWaddr d6:95:61:73:1d:c6  
qbrf0db8340-a6 Link encap:Ethernet  HWaddr 9e:9b:36:c3:cb:d3  
qbrf3f3c43f-ff Link encap:Ethernet  HWaddr 4a:c6:5c:70:e4:b4  
qvb056025bc-49 Link encap:Ethernet  HWaddr fa:c1:6d:fe:5e:b8  
qvb3dccedf9-e8 Link encap:Ethernet  HWaddr 02:42:c0:73:d2:0b  
qvb422898c8-df Link encap:Ethernet  HWaddr 76:b0:42:f5:ad:f9  
qvb69eb1f6a-71 Link encap:Ethernet  HWaddr d6:d2:7f:ee:1a:34  
qvb750aa557-b7 Link encap:Ethernet  HWaddr 5a:2e:b3:f3:30:b1  
qvb81eb2deb-b7 Link encap:Ethernet  HWaddr f2:d0:95:19:fb:c4  
qvb971c890b-8f Link encap:Ethernet  HWaddr ce:a1:c9:f0:b6:24  
qvb9ab03868-2f Link encap:Ethernet  HWaddr 2a:d5:8a:61:0a:a1  
qvbcfd38872-d1 Link encap:Ethernet  HWaddr 16:7d:ad:1b:4b:82  
qvbde55f70b-7d Link encap:Ethernet  HWaddr 2a:8c:f5:2b:49:99  
qvbdead1da5-98 Link encap:Ethernet  HWaddr d6:95:61:73:1d:c6  
qvbf0db8340-a6 Link encap:Ethernet  HWaddr 9e:9b:36:c3:cb:d3  
qvbf3f3c43f-ff Link encap:Ethernet  HWaddr 4a:c6:5c:70:e4:b4  
qvo056025bc-49 Link encap:Ethernet  HWaddr aa:82:8b:9f:d6:a0  
qvo3dccedf9-e8 Link encap:Ethernet  HWaddr ea:8c:1f:0e:ab:92  
qvo422898c8-df Link encap:Ethernet  HWaddr 7a:9f:47:3c:3b:57  
qvo69eb1f6a-71 Link encap:Ethernet  HWaddr a6:dd:41:ce:e6:39  
qvo750aa557-b7 Link encap:Ethernet  HWaddr 32:6c:f8:ca:af:e9  
qvo81eb2deb-b7 Link encap:Ethernet  HWaddr ea:22:94:19:ac:4c  
qvo971c890b-8f Link encap:Ethernet  HWaddr 2e:f8:a7:72:1c:85  
qvo9ab03868-2f Link encap:Ethernet  HWaddr aa:3e:bb:c6:6d:d3  
qvocfd38872-d1 Link encap:Ethernet  HWaddr 16:3a:12:30:f5:71  
qvode55f70b-7d Link encap:Ethernet  HWaddr fa:ee:28:ed:52:37  
qvodead1da5-98 Link encap:Ethernet  HWaddr 5a:66:51:d9:a5:60  
qvof0db8340-a6 Link encap:Ethernet  HWaddr 66:b6:23:c9:ca:73  
qvof3f3c43f-ff Link encap:Ethernet  HWaddr 5e:5b:53:e8:11:58  
tapdead1da5-98 Link encap:Ethernet  HWaddr fe:54:00:39:74:0a  


I use Icehouse.

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  After I added/removed routers, nets and subnets many times, for testing
- purpose, I found that I have the following list of interfaces:
+ purpose, I found that I have 45 interfaces:
  
- # ifconfig 
+ 
+ b# ifconfig|grep encap:Ethernet
  br-eth0   Link encap:Ethernet  HWaddr d0:67:e5:03:18:0f  
-   inet addr:172.19.29.147  Bcast:172.19.29.255  Mask:255.255.255.128
-   inet6 addr: fe80::d267:e5ff:fe03:180f/64 Scope:Link
-   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
-   RX packets:2039726347 errors:0 dropped:17762 overruns:0 frame:0
-   TX packets:5874295 errors:0 dropped:0 overruns:0 carrier:0
-   collisions:0 txqueuelen:0 
-   RX bytes:189626288472 (176.6 GiB)  TX bytes:610051438 (581.7 MiB)
+ br-eth0:1 Link encap:Ethernet  HWaddr d0:67:e5:03:18:0f  
+ eth0  Link encap:Ethernet  HWaddr d0:67:e5:03:18:0f  
+ int-br-eth0 Link encap:Ethernet  HWaddr 82:ad:59:16:ca:da  
+ phy-br-eth0 Link encap:Ethernet  HWaddr da:e4:8d:cd:38:43  
+ qbr056025bc-49 Link encap:Ethernet  HWaddr fa:c1:6d:fe:5e:b8  
+ qbr3dccedf9-e8 Link encap:Ethernet  HWaddr 02:42:c0:73:d2:0b  
+ qbr422898c8-df Link encap:Ethernet  HWaddr 76:b0:42:f5:ad:f9  
+ qbr69eb1f6a-71 Link encap:Ethernet  HWaddr d6:d2:7f:ee:1a:34  
+ qbr750aa557-b7 Link encap:Ethernet  HWaddr 5a:2e:b3:f3:30:b1  
+ qbr81eb2deb-b7 Link encap:Ethernet  HWaddr f2:d0:95:19:fb:c4  
+ qbr971c890b-8f Link encap:Ethernet  HWaddr ce:a1:c9:f0:b6:24  
+ qbr9ab03868-2f Link encap:Ethernet  HWaddr 2a:d5:8a:61:0a:a1  
+ qbrcfd38872-d1 Link encap:Ethernet  HWaddr 16:7d:ad:1b:4b:82  
+ qbrde55f70b-7d Link encap:Ethernet  HWaddr 2a:8c:f5:2b:49:99  
+ qbrdead1da5-98 Link encap:Ethernet  HWaddr d6:95:61:73:1d:c6  
+ qbrf0db8340-a6 Link encap:Ethernet  HWaddr 9e:9b:36:c3:cb:d3  
+ qbrf3f3c43f-ff Link encap:Ethernet  HWaddr 4a:c6:5c:70:e4:b4  
+ qvb056025bc-49 Link encap:Ethernet