[Yahoo-eng-team] [Bug 1340068] [NEW] Useless option mute_weight_value

2014-07-10 Thread Alvaro Lopez
Public bug reported:

The 'mute_weight_value' option for the 'MuteChildWeigher' weigher is
useless.

This configuration option was used to artificially inflate the returned
weight for a cell that was unavailable, but it is not needed anymore and
a multiplier should be used instead. Since the normalization process is
already in place, this variable has no effect at all and a muted child
will get a weight of 1.0 regardless of the 'mute_weight_value'.
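
The effect can be sketched in a few lines. This is an illustrative
normalization, not nova's actual weigher code, showing why any constant raw
weight assigned to a muted child is erased by the normalization step:

```python
# Illustrative only: normalization rescales raw weights into [0.0, 1.0],
# so the muted child always lands at 1.0 no matter which constant is used.
# Only a configurable multiplier could change its relative ranking.

def normalize(weights):
    """Scale raw weights linearly into the [0.0, 1.0] range."""
    lo, hi = min(weights), max(weights)
    if hi == lo:
        return [1.0 for _ in weights]
    return [(w - lo) / (hi - lo) for w in weights]

for mute_weight_value in (10.0, 1000.0, 1e9):
    raw = [1.0, 2.0, 3.0, mute_weight_value]  # last entry: the muted child
    print(normalize(raw)[-1])  # always 1.0, regardless of the constant
```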

** Affects: nova
 Importance: Undecided
 Assignee: Alvaro Lopez (aloga)
 Status: In Progress

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Alvaro Lopez (aloga)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340068

Title:
  Useless option mute_weight_value

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The 'mute_weight_value' option for the 'MuteChildWeigher' weigher is
  useless.

  This configuration option was used to artificially inflate the
  returned weight for a cell that was unavailable, but it is not needed
  anymore and a multiplier should be used instead. Since the
  normalization process is already in place, this variable has no effect
  at all and a muted child will get a weight of 1.0 regardless of the
  'mute_weight_value'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340082] [NEW] Floating IP is not assigned by DHCP after soft reboot of instance

2014-07-10 Thread Cristiano
Public bug reported:

ISSUE: The floating IP is not assigned by DHCP after a soft reboot of the
instance.

I don't know whether this is a bug or whether it is related to the OS image.
But I repeated 'soft reboot instance' 20 times; there was no response from
DHCP and the floating IP was lost.

I tried a 'hard reboot instance' and rebooting the instance over SSH. Both
work.

I'm very confused about the difference between 'soft reboot' and 'hard
reboot'.

By the way, I made sure the test environment's physical network is fine.

Thanks

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: network testing

** Attachment added: "picture showing this issue."
   https://bugs.launchpad.net/bugs/1340082/+attachment/4149424/+files/IP.jpg

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340082

Title:
  Floating IP is not assigned by DHCP after soft reboot of instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  ISSUE: The floating IP is not assigned by DHCP after a soft reboot of the
  instance.

  I don't know whether this is a bug or whether it is related to the OS
  image. But I repeated 'soft reboot instance' 20 times; there was no
  response from DHCP and the floating IP was lost.

  I tried a 'hard reboot instance' and rebooting the instance over SSH. Both
  work.

  I'm very confused about the difference between 'soft reboot' and 'hard
  reboot'.

  By the way, I made sure the test environment's physical network is fine.

  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340082/+subscriptions



[Yahoo-eng-team] [Bug 1339273] Re: Sphinx documentation build failed in stable/havana: source_dir is not a directory

2014-07-10 Thread Julie Pichon
** Also affects: horizon/havana
   Importance: Undecided
   Status: New

** Changed in: horizon/havana
 Assignee: (unassigned) => Tristan Cacqueray (tristan-cacqueray)

** Changed in: horizon
   Status: New => Invalid

** Changed in: horizon/havana
   Status: New => Fix Committed

** Changed in: horizon/havana
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1339273

Title:
  Sphinx documentation build failed in stable/havana: source_dir is not
  a directory

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Committed

Bug description:
  Documentation is not building in stable/havana:

  $ tox -evenv -- python setup.py build_sphinx
  venv inst: /opt/stack/horizon/.tox/dist/horizon-2013.2.4.dev9.g19634d6.zip
  venv runtests: PYTHONHASHSEED='1422458638'
  venv runtests: commands[0] | python setup.py build_sphinx
  running build_sphinx
  error: 'source_dir' must be a directory name (got 
`/opt/stack/horizon/doc/source`)
  ERROR: InvocationError: '/opt/stack/horizon/.tox/venv/bin/python setup.py 
build_sphinx'
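
For reference, build_sphinx takes its paths from setup.cfg. A plausible
fragment (illustrative; check the repository's actual file) looks like the
following, and the error above fires when the configured source dir does not
resolve to an existing directory in the checkout:

```ini
[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
```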

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1339273/+subscriptions



[Yahoo-eng-team] [Bug 1340145] [NEW] Extract CommonDBMixin into a separate module

2014-07-10 Thread Eugene Nikanorov
Public bug reported:

Several service plugins inherit from CommonDBMixin, which has a few
utility methods.

Right now CommonDBMixin resides in db_base_plugin_v2.py, so those plugins
are required to import it.
In some cases this is undesirable and can lead to import cycles,
so CommonDBMixin needs to be extracted into a separate module.
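
The move can be sketched as follows; the module layout, class, and method
body here are illustrative, not neutron's actual code:

```python
# Illustrative layout: the mixin lives in its own leaf module (e.g. a
# hypothetical common_db_mixin.py) with no plugin imports, so service
# plugins can inherit it without importing db_base_plugin_v2.py and
# risking an import cycle.

class CommonDbMixin(object):
    """Utility helpers shared by DB-backed plugins."""

    def _fields(self, resource, fields):
        # Return only the requested fields of a resource dict.
        if fields:
            return {k: v for k, v in resource.items() if k in fields}
        return resource

class ServicePlugin(CommonDbMixin):
    """A service plugin now depends only on the small mixin module."""

print(ServicePlugin()._fields({'id': 1, 'name': 'net0'}, ['name']))
# {'name': 'net0'}
```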

** Affects: neutron
 Importance: Wishlist
 Assignee: Eugene Nikanorov (enikanorov)
 Status: In Progress


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340145

Title:
  Extract CommonDBMixin into a separate module

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Several service plugins inherit from CommonDBMixin, which has a few
  utility methods.

  Right now CommonDBMixin resides in db_base_plugin_v2.py, so those plugins
  are required to import it.
  In some cases this is undesirable and can lead to import cycles,
  so CommonDBMixin needs to be extracted into a separate module.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1340145/+subscriptions



[Yahoo-eng-team] [Bug 1340146] [NEW] Return of the cinder traces in the test output

2014-07-10 Thread Julie Pichon
Public bug reported:

More traces in the log due to improperly mocked API calls. See also: bug
1335082.

DEBUG:cinderclient.client:Connection refused: 
HTTPConnectionPool(host='public.nova.example.com', port=8776): Max retries 
exceeded with url: /v1/backups/detail (Caused by : 
[Errno -2] Name or service not known)
.DEBUG:cinderclient.client:Connection refused: 
HTTPConnectionPool(host='public.nova.example.com', port=8776): Max retries 
exceeded with url: /v1/backups/detail (Caused by : 
[Errno -2] Name or service not known)

Guilty tests, after running in verbose mode:
test_encryption_false 
(openstack_dashboard.dashboards.project.volumes.volumes.tests.VolumeViewTests) 
... 
test_encryption_true 
(openstack_dashboard.dashboards.project.volumes.volumes.tests.VolumeViewTests) 
...
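
A minimal sketch of the missing stubbing; the class and method names are
illustrative stand-ins, not Horizon's actual API wrappers or test helpers:

```python
from unittest import mock

class CinderApi(object):
    """Illustrative stand-in for the cinder API wrapper used by the tests."""

    def volume_backup_list(self, request):
        # The real call would open an HTTP connection to /v1/backups/detail.
        raise ConnectionError("network call attempted during a unit test")

# Patch the method for the duration of the test so no connection is opened
# and no "Connection refused" trace lands in the test output.
with mock.patch.object(CinderApi, 'volume_backup_list', return_value=[]):
    assert CinderApi().volume_backup_list(None) == []
print("stubbed call returned an empty backup list")
```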

** Affects: horizon
 Importance: Medium
 Assignee: Julie Pichon (jpichon)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1340146

Title:
  Return of the cinder traces in the test output

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  More traces in the log due to improperly mocked API calls. See also:
  bug 1335082.

  DEBUG:cinderclient.client:Connection refused: 
HTTPConnectionPool(host='public.nova.example.com', port=8776): Max retries 
exceeded with url: /v1/backups/detail (Caused by : 
[Errno -2] Name or service not known)
  .DEBUG:cinderclient.client:Connection refused: 
HTTPConnectionPool(host='public.nova.example.com', port=8776): Max retries 
exceeded with url: /v1/backups/detail (Caused by : 
[Errno -2] Name or service not known)

  Guilty tests, after running in verbose mode:
  test_encryption_false 
(openstack_dashboard.dashboards.project.volumes.volumes.tests.VolumeViewTests) 
... 
  test_encryption_true 
(openstack_dashboard.dashboards.project.volumes.volumes.tests.VolumeViewTests) 
...

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1340146/+subscriptions



[Yahoo-eng-team] [Bug 1340149] [NEW] EC2 tests fail with BotoServerError: 500 Internal Server Error

2014-07-10 Thread Dmitry Mescheryakov
Public bug reported:

Tempest tests fail in the check job. The full log can be found at
http://logs.openstack.org/46/104646/2/check/check-tempest-dsvm-postgres-full/a58b623/logs/screen-n-api.txt.gz

2014-07-10 01:10:53.190 | Captured traceback:
2014-07-10 01:10:53.190 | ~~~
2014-07-10 01:10:53.190 | Traceback (most recent call last):
2014-07-10 01:10:53.190 |   File 
"tempest/thirdparty/boto/test_ec2_security_groups.py", line 32, in 
test_create_authorize_security_group
2014-07-10 01:10:53.190 | group_description)
2014-07-10 01:10:53.190 |   File "tempest/services/botoclients.py", line 
82, in func
2014-07-10 01:10:53.191 | return getattr(conn, name)(*args, **kwargs)
2014-07-10 01:10:53.191 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/boto/ec2/connection.py",
 line 2976, in create_security_group
2014-07-10 01:10:53.191 | SecurityGroup, verb='POST')
2014-07-10 01:10:53.191 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/boto/connection.py",
 line 1164, in get_object
2014-07-10 01:10:53.191 | response = self.make_request(action, params, 
path, verb)
2014-07-10 01:10:53.191 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/boto/connection.py",
 line 1090, in make_request
2014-07-10 01:10:53.191 | return self._mexe(http_request)
2014-07-10 01:10:53.191 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/boto/connection.py",
 line 1003, in _mexe
2014-07-10 01:10:53.191 | raise BotoServerError(response.status, 
response.reason, body)
2014-07-10 01:10:53.191 | BotoServerError: BotoServerError: 500 Internal 
Server Error
2014-07-10 01:10:53.191 | 
2014-07-10 01:10:53.192 | 
OperationalErrorUnknown error 
occurred.req-178ab1f9-e9f6-4b9e-8a3e-56d2bd78c5c2

The failure seems to be similar to the one in
https://bugs.launchpad.net/nova/+bug/1315580, but the issue is not strictly
related to floating IPs. Here the following tests failed:
tempest.thirdparty.boto.test_ec2_volumes.EC2VolumesTest.test_create_get_delete
tempest.thirdparty.boto.test_ec2_volumes.EC2VolumesTest.test_create_volume_from_snapshot
tempest.thirdparty.boto.test_ec2_security_groups.EC2SecurityGroupTest.test_create_authorize_security_group

The following kinds of entries in the screen-n-api log seem to be related to
the failure:
2014-07-10 01:07:39.576 ERROR nova.api.ec2 
[req-9bc0089f-9257-401b-bf8e-27df3b45bb4b EC2VolumesTest-1485461725 
EC2VolumesTest-529577160] Unexpected OperationalError raised: 
(OperationalError) asynchronous connection failed None None

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340149

Title:
  EC2 tests fail with BotoServerError: 500 Internal Server Error

Status in OpenStack Compute (Nova):
  New

Bug description:
  Tempest tests fail in the check job. The full log can be found at
  http://logs.openstack.org/46/104646/2/check/check-tempest-dsvm-postgres-full/a58b623/logs/screen-n-api.txt.gz

  2014-07-10 01:10:53.190 | Captured traceback:
  2014-07-10 01:10:53.190 | ~~~
  2014-07-10 01:10:53.190 | Traceback (most recent call last):
  2014-07-10 01:10:53.190 |   File 
"tempest/thirdparty/boto/test_ec2_security_groups.py", line 32, in 
test_create_authorize_security_group
  2014-07-10 01:10:53.190 | group_description)
  2014-07-10 01:10:53.190 |   File "tempest/services/botoclients.py", line 
82, in func
  2014-07-10 01:10:53.191 | return getattr(conn, name)(*args, **kwargs)
  2014-07-10 01:10:53.191 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/boto/ec2/connection.py",
 line 2976, in create_security_group
  2014-07-10 01:10:53.191 | SecurityGroup, verb='POST')
  2014-07-10 01:10:53.191 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/boto/connection.py",
 line 1164, in get_object
  2014-07-10 01:10:53.191 | response = self.make_request(action, 
params, path, verb)
  2014-07-10 01:10:53.191 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/boto/connection.py",
 line 1090, in make_request
  2014-07-10 01:10:53.191 | return self._mexe(http_request)
  2014-07-10 01:10:53.191 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/boto/connection.py",
 line 1003, in _mexe
  2014-07-10 01:10:53.191 | raise BotoServerError(response.status, 
response.reason, body)
  2014-07-10 01:10:53.191 | BotoServerError: BotoServerError: 500 Internal 
Server Error
  2014-07-10 01:10:53.191 | 
  2014-07-10 01:10:53.192 | 
OperationalErrorUnknown error 
occurred.req-178ab1f9-e9f6-4b9e-8a3e-56d2bd78c5c2

  The failure seems to 

[Yahoo-eng-team] [Bug 1334164] Re: nova error migrating VMs with floating ips: 'FixedIP' object has no attribute '_sa_instance_state'

2014-07-10 Thread Roman Podoliaka
** Changed in: fuel/5.1.x
   Status: In Progress => Fix Committed

** Changed in: mos/5.1.x
   Status: In Progress => Fix Committed

** Changed in: mos/5.0.x
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334164

Title:
  nova error migrating VMs with floating ips: 'FixedIP' object has no
  attribute '_sa_instance_state'

Status in Fuel: OpenStack installer that works:
  Fix Committed
Status in Fuel for OpenStack 5.0.x series:
  Fix Released
Status in Fuel for OpenStack 5.1.x series:
  Fix Committed
Status in Mirantis OpenStack:
  Fix Committed
Status in Mirantis OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack 5.1.x series:
  Fix Committed
Status in Mirantis OpenStack 6.0.x series:
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Seeing this in conductor logs when migrating a VM with a floating IP
  assigned:

  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 1019, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 527, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_api.migrate_instance_start(context, instance, migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 94, in wrapped
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
func(self, context, *args, **kwargs)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 543, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_rpcapi.migrate_instance_start(context, **args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/rpcapi.py", line 350, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
floating_addresses=floating_addresses)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py", line 150, in 
call
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
wait_for_reply=True, timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/transport.py", line 90, in 
_send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
409, in send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._send(target, ctxt, message, wait_for_reply, timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
402, in _send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher raise 
result
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
AttributeError: 'FixedIP' object has no attribute '_sa_instance_state'

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1334164/+subscriptions


[Yahoo-eng-team] [Bug 1340040] Re: Cannot delete pseudo folder under container

2014-07-10 Thread Julie Pichon
I'm opening a task against Swift as well since the description seems
mostly made up of Swift commands.

Are there objects in test1? If I remember correctly, pseudo-folders are
"created" based on "/" in object names but don't actually exist. When
you delete the objects within test1/, does the pseudo-directory eventually
get removed from the list?
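
The pseudo-folder behaviour described above can be illustrated without any
Swift API at all; the object names here are purely illustrative:

```python
# A pseudo-folder is just a "/" prefix shared by object names; it has no
# existence of its own and vanishes once no object carries the prefix.
objects = ['test1/a.txt', 'test1/b.txt', 'report.pdf']

def pseudo_folders(names):
    """List the top-level pseudo-folders implied by a set of object names."""
    return sorted({n.split('/', 1)[0] + '/' for n in names if '/' in n})

print(pseudo_folders(objects))  # ['test1/']

# "Deleting the folder" really means deleting every object under the prefix:
objects = [n for n in objects if not n.startswith('test1/')]
print(pseudo_folders(objects))  # []
```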

** Also affects: swift
   Importance: Undecided
   Status: New

** Changed in: horizon
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1340040

Title:
  Cannot delete pseudo folder under container

Status in OpenStack Dashboard (Horizon):
  Incomplete
Status in OpenStack Object Storage (Swift):
  New

Bug description:
  Hi all,

  I am not able to delete pseudo folders created under a container. I
  installed OpenStack using packstack all-in-one on a CentOS machine.

  I can create multiple containers and upload objects into them without a
  problem, but when I create a pseudo folder and click on it, the dashboard
  goes to "something went wrong". I fixed that temporarily using the fix
  here https://bugs.launchpad.net/horizon/+bug/131

  Now I can create subfolders and upload objects into them, but I am not
  able to delete any of these pseudo folders.
  When I try to delete a pseudo folder (named, for example, test), I get the
  error: Error: You are not allowed to delete object:test.

  When using the CLI, I get this:
  [root@icestack ~(keystone_admin)]# swift list test
  test1/
  [root@icestack ~(keystone_admin)]#
  [root@icestack ~(keystone_admin)]#
  [root@icestack ~(keystone_admin)]# swift delete test test1
  Object 'test/test1' not found

  I think this is a known issue; is there any workaround for this problem?

  Thanks & Regards,
  Anand TS

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1340040/+subscriptions



[Yahoo-eng-team] [Bug 1340159] [NEW] Resize to zero disk flavor is not allowed

2014-07-10 Thread Shuangtai Tian
Public bug reported:

When the old flavor's root_gb is not equal to 0 and the new flavor's root_gb
is 0, resize() in nova.compute.api will raise CannotResizeDisk.

https://github.com/openstack/nova/blob/master/nova/compute/api.py#L2368

def resize(self, context, instance, flavor_id=None,
    if not flavor_id:
        LOG.debug("flavor_id is None. Assuming migration.",
                  instance=instance)
        new_instance_type = current_instance_type
    else:
        new_instance_type = flavors.get_flavor_by_flavor_id(
            flavor_id, read_deleted="no")
        if (new_instance_type.get('root_gb') == 0 and
                current_instance_type.get('root_gb') != 0):
            reason = _('Resize to zero disk flavor is not allowed.')
            raise exception.CannotResizeDisk(reason=reason)

** Affects: nova
 Importance: Undecided
 Assignee: Shuangtai Tian (shuangtai-tian)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Shuangtai Tian (shuangtai-tian)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340159

Title:
  Resize to zero disk flavor is not allowed

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When the old flavor's root_gb is not equal to 0 and the new flavor's
  root_gb is 0, resize() in nova.compute.api will raise CannotResizeDisk.

  https://github.com/openstack/nova/blob/master/nova/compute/api.py#L2368

  def resize(self, context, instance, flavor_id=None,
      if not flavor_id:
          LOG.debug("flavor_id is None. Assuming migration.",
                    instance=instance)
          new_instance_type = current_instance_type
      else:
          new_instance_type = flavors.get_flavor_by_flavor_id(
              flavor_id, read_deleted="no")
          if (new_instance_type.get('root_gb') == 0 and
                  current_instance_type.get('root_gb') != 0):
              reason = _('Resize to zero disk flavor is not allowed.')
              raise exception.CannotResizeDisk(reason=reason)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340159/+subscriptions



[Yahoo-eng-team] [Bug 1340169] [NEW] failed to attach volumes to instances after configuration change & services restart

2014-07-10 Thread Yogev Rabl
Public bug reported:

Description of problem:
The attachment of volumes failed with the errors that are available in the log 
file attached. Prior to the error I was running 8 active instances, made a 
configuration change - increased the number of workers in the Cinder, Nova & 
Glance services, then restarted the services.   

Ran the command:
# nova volume-attach 6aac6fb6-ef22-48b0-b6ac-99bc94787422 57edbc5c-8a1f-49f2-b8bf-280ab857222d auto

+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdc                             |
| id       | 57edbc5c-8a1f-49f2-b8bf-280ab857222d |
| serverId | 6aac6fb6-ef22-48b0-b6ac-99bc94787422 |
| volumeId | 57edbc5c-8a1f-49f2-b8bf-280ab857222d |
+----------+--------------------------------------+

cinder list output:

+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
|                  ID                  |   Status  |  Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
| 57edbc5c-8a1f-49f2-b8bf-280ab857222d | available |   dust-bowl   | 100  |     None    |  false   |             |
| 731a118d-7bd6-4538-a3b2-60543179281e | available | bowl-the-dust | 100  |     None    |  false   |             |
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+


Version-Release number of selected component (if applicable):
python-cinder-2014.1-7.el7ost.noarch
openstack-nova-network-2014.1-7.el7ost.noarch
python-novaclient-2.17.0-2.el7ost.noarch
openstack-cinder-2014.1-7.el7ost.noarch
openstack-nova-common-2014.1-7.el7ost.noarch
python-cinderclient-1.0.9-1.el7ost.noarch
openstack-nova-compute-2014.1-7.el7ost.noarch
openstack-nova-conductor-2014.1-7.el7ost.noarch
openstack-nova-scheduler-2014.1-7.el7ost.noarch
openstack-nova-api-2014.1-7.el7ost.noarch
openstack-nova-cert-2014.1-7.el7ost.noarch
openstack-nova-novncproxy-2014.1-7.el7ost.noarch
python-nova-2014.1-7.el7ost.noarch
openstack-nova-console-2014.1-7.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Launch instances
2. Increase the number of workers for the Cinder, Nova & Glance services
3. Create a volume
4. Attach the volume to the instance.

Actual results:
The attachment process fails.

Expected results:
The volume should be attached to the instance.

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "volume-attach-fail.log"
   
https://bugs.launchpad.net/bugs/1340169/+attachment/4149555/+files/volume-attach-fail.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340169

Title:
  failed to attach volumes to instances after configuration change &
  services restart

Status in OpenStack Compute (Nova):
  New

Bug description:
  Description of problem:
  The attachment of volumes failed with the errors that are available in the 
log file attached. Prior to the error I was running 8 active instances, made a 
configuration change - increased the number of workers in the Cinder, Nova & 
Glance services, then restarted the services.   

  Ran the command:
  # nova volume-attach 6aac6fb6-ef22-48b0-b6ac-99bc94787422 57edbc5c-8a1f-49f2-b8bf-280ab857222d auto

  +----------+--------------------------------------+
  | Property | Value                                |
  +----------+--------------------------------------+
  | device   | /dev/vdc                             |
  | id       | 57edbc5c-8a1f-49f2-b8bf-280ab857222d |
  | serverId | 6aac6fb6-ef22-48b0-b6ac-99bc94787422 |
  | volumeId | 57edbc5c-8a1f-49f2-b8bf-280ab857222d |
  +----------+--------------------------------------+

  cinder list output:

  +--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
  |                  ID                  |   Status  |  Display Name | Size | Volume Type | Bootable | Attached to |
  +--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
  | 57edbc5c-8a1f-49f2-b8bf-280ab857222d | available |   dust-bowl   | 100  |     None    |  false   |             |
  | 731a118d-7bd6-4538-a3b2-60543179281e | available | bowl-the-dust | 100  |     None    |  false   |             |
  +--------------------------------------+-----------+---------------+------+-------------+----------+-------------+

  
  Version-Release number of selected component (if applicable):
  python-cinder-2014.1-7.el7ost.noarch
  openstack-nova-network-2014.1-7.el7ost.noarch
  python-novaclient-2.17.0-2.el7ost.noarch
  openstack-cinder-2014.1-7.el7ost.noarch
  openstack-nova-common-2014.1-7.el7ost.noarch
  python-cinderc

[Yahoo-eng-team] [Bug 1340167] [NEW] VMware: Horizon reports incorrect message for PAUSE instance

2014-07-10 Thread Mayank
Public bug reported:

When I pause an instance hosted on a VMware cluster, Horizon shows
"SUCCESS: Paused Instance: ;"
and nothing happens (the instance does not go to the Paused state).

In the nova-compute log it shows: pause not supported for vmwareapi

2014-07-10 06:53:37.212 ERROR oslo.messaging.rpc.dispatcher 
[req-f8159224-a1e2-4271-84d8-eea2edeaaee1 admin demo] Exception during message 
handling: pause not supported for vmwareapi
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 88, in wrapped
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher payload)
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 71, in wrapped
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher return f(self, 
context, *args, **kw)
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 285, in decorated_function
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher pass
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 271, in decorated_function
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 335, in decorated_function
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher function(self, 
context, *args, **kwargs)
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 313, in decorated_function
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 301, in decorated_function
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 3680, in pause_instance
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher 
self.driver.pause(instance)
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 678, in pause
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher 
_vmops.pause(instance)
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 938, in pause
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher raise 
NotImplementedError(msg)
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher 
NotImplementedError: pause not supported for vmwareapi
2014-07-10 06:53:37.212 TRACE oslo.messaging.rpc.dispatcher
2014-07-10 06:53:37.214 ERROR oslo.messaging._drivers.common 
[req-f8159224-a1e2-4271-84d8-eea2edeaaee1 admin demo] Returning exception pause 
not supported for vmwareapi to caller


This information is quite misleading to the user.
The message presented should simply read "Pause not supported for vmwareapi".

** Affects: horizon
 Importance: Undecided
 Assignee: May

[Yahoo-eng-team] [Bug 1213149] Re: boot an instance will fail if use "--hint group=XXXXX"

2014-07-10 Thread Davanum Srinivas (DIMS)
*** This bug is a duplicate of bug 1303360 ***
https://bugs.launchpad.net/bugs/1303360

** This bug has been marked a duplicate of bug 1303360
   GroupAntiAffinityFilter scheduler hint still doesn't work

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1213149

Title:
  boot an instance will fail if use "--hint group=X"

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When using the command "nova boot --image  --flavor  --hint group= vm1",
booting fails.
  After debugging, I found that there is a code error in
nova/scheduler/filter_scheduler.py#174:

  173    values = request_spec['instance_properties']['system_metadata']
  174    values.update({'group': group})
  175    values = {'system_metadata': values}

  At this point `values` is not a dict, so it has no `update` method.
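A minimal reproduction of the reported failure mode (the literal value below is a stand-in for `request_spec['instance_properties']['system_metadata']`, which per the report is not a dict at this point — a list is used here purely for illustration):

```python
# Stand-in for the non-dict system_metadata value described in the report
# (the exact runtime type is an assumption; any non-dict fails the same way):
values = [{"key": "k", "value": "v"}]

try:
    values.update({"group": "mygroup"})   # the call at line 174 that fails
except AttributeError as exc:
    print(exc)   # 'list' object has no attribute 'update'
```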

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1213149/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340194] [NEW] Removed security group rules are still persistent on instances

2014-07-10 Thread chinasubbareddy
Public bug reported:

Even after removing the security group rules, operations like ssh/ping
to the VMs still succeed.

Earlier we had added rules to allow ssh and ping, and then removed
those rules.

Below is the log:

 nova list
+--------------------------------------+-------------+--------+------------+-------------+-----------------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks                    |
+--------------------------------------+-------------+--------+------------+-------------+-----------------------------+
| a1426d0a-07df-40c8-b883-3f5fb34bbec2 | testvm1-az1 | ACTIVE | None       | Running     | Net1=2.2.2.2, 10.233.53.105 |
| 329b0493-e1f9-4baa-bfc9-5ecf9c2d4687 | testvm1-az2 | ACTIVE | None       | Running     | Net1=2.2.2.4                |
+--------------------------------------+-------------+--------+------------+-------------+-----------------------------+
root@controller:~# nova show a1426d0a-07df-40c8-b883-3f5fb34bbec2
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| status                               | ACTIVE                                                   |
| updated                              | 2014-07-03T06:34:31Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | compute1                                                 |
| key_name                             | None                                                     |
| image                                | CirrOS 0.3.1 (ea93e47e-558e-4baf-bea1-777b4814ca5d)      |
| hostId                               | 64a50db012ab0b483697b85be03d02d66535ff2656170b6c8fb9a8f8 |
| Net1 network                         | 2.2.2.2, 10.233.53.105                                   |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-0018                                            |
| OS-SRV-USG:launched_at               | 2014-07-03T06:34:31.00                                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute1                                                 |
| flavor                               | myF1 (6)                                                 |
| id                                   | a1426d0a-07df-40c8-b883-3f5fb34bbec2                     |
| security_groups                      | [{u'name': u'default'}]                                  | --> using default secgroup.
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 0dc64e9cfb07442b8d6ce7d518200d06                         |
| name                                 | testvm1-az1                                              |
| created                              | 2014-07-03T06:33:54Z                                     |
| tenant_id                            | 8a5dee0f17204539a73987d6a8f255cd                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | []                                                       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | azhyd1                                                   |
| config_drive                         |                                                          |
+--------------------------------------+----------------------------------------------------------+
root@controller:~# nova secgroup-list-rules default
+-------------+-----------+---------+----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+----------+--------------+
|             |           |         |          | default      |
|             |           |         |          | default      |
+-------------+-----------+---------+----------+--------------+
root@controller:~# ip netns exec qdhcp-acf1b559-0602-461f-8b86-9e7c5a7cec80 
ping 2.2.2.2
PING 2.2.2.2 (2.2.2.2) 56(84) bytes of data.
64 bytes from 2.2.2.2: i

[Yahoo-eng-team] [Bug 1340197] [NEW] Horizon doesn't notify when fail to attach a volume

2014-07-10 Thread Yogev Rabl
Public bug reported:

Description of problem:
Horizon doesn't notify when the volume attachment process fails with errors.
The nova-compute log shows errors during the volume attachment process,
but Horizon doesn't present the failure or the error.

Version-Release number of selected component (if applicable):
python-django-horizon-2014.1-7.el7ost.noarch
openstack-nova-network-2014.1-7.el7ost.noarch
python-novaclient-2.17.0-2.el7ost.noarch
openstack-nova-common-2014.1-7.el7ost.noarch
openstack-nova-compute-2014.1-7.el7ost.noarch
openstack-nova-conductor-2014.1-7.el7ost.noarch
openstack-nova-scheduler-2014.1-7.el7ost.noarch
openstack-nova-api-2014.1-7.el7ost.noarch
openstack-nova-cert-2014.1-7.el7ost.noarch
openstack-nova-novncproxy-2014.1-7.el7ost.noarch
python-nova-2014.1-7.el7ost.noarch
openstack-nova-console-2014.1-7.el7ost.noarch


How reproducible:
100%

Steps to Reproduce:
1. Follow the step of the bug: https://bugs.launchpad.net/nova/+bug/1340169
2. In the Horizon try to attach a volume

Actual results:
The Horizon shows an info message: 
Info: Attaching volume bowl-the-dust to instance 
cougar-01-fe5510a5-c50c-46ee-9d71-6f8e41a58ecc on /dev/vdc.

The volume status changes to 'attaching', then changes back to 'available'.

Expected results:
An error should appear saying "Error: the volume attachment failed"

Additional info:
The Horizon log is attached.
The nova-compute log with the volume attachment error is available in the bug 
https://bugs.launchpad.net/nova/+bug/1340169

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Horizon log"
   
https://bugs.launchpad.net/bugs/1340197/+attachment/4149607/+files/horizon-volume-attach-fail.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1340197

Title:
  Horizon doesn't notify when fail to attach a volume

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Description of problem:
  Horizon doesn't notify when the volume attachment process fails with errors.
  The nova-compute log shows errors during the volume attachment process,
  but Horizon doesn't present the failure or the error.

  Version-Release number of selected component (if applicable):
  python-django-horizon-2014.1-7.el7ost.noarch
  openstack-nova-network-2014.1-7.el7ost.noarch
  python-novaclient-2.17.0-2.el7ost.noarch
  openstack-nova-common-2014.1-7.el7ost.noarch
  openstack-nova-compute-2014.1-7.el7ost.noarch
  openstack-nova-conductor-2014.1-7.el7ost.noarch
  openstack-nova-scheduler-2014.1-7.el7ost.noarch
  openstack-nova-api-2014.1-7.el7ost.noarch
  openstack-nova-cert-2014.1-7.el7ost.noarch
  openstack-nova-novncproxy-2014.1-7.el7ost.noarch
  python-nova-2014.1-7.el7ost.noarch
  openstack-nova-console-2014.1-7.el7ost.noarch

  
  How reproducible:
  100%

  Steps to Reproduce:
  1. Follow the step of the bug: https://bugs.launchpad.net/nova/+bug/1340169
  2. In the Horizon try to attach a volume

  Actual results:
  The Horizon shows an info message: 
  Info: Attaching volume bowl-the-dust to instance 
cougar-01-fe5510a5-c50c-46ee-9d71-6f8e41a58ecc on /dev/vdc.

  The volume status changes to 'attaching', then changes back to
  'available'.

  Expected results:
  An error should appear saying "Error: the volume attachment failed"

  Additional info:
  The Horizon log is attached.
  The nova-compute log with the volume attachment error is available in the bug 
https://bugs.launchpad.net/nova/+bug/1340169

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1340197/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340210] [NEW] NSX: _convert_to_nsx_transport_zones should not be in the plugin class

2014-07-10 Thread Salvatore Orlando
Public bug reported:

This is clearly a utility function and should therefore be moved into
the neutron.plugins.vmware.common.nsx_utils module.

** Affects: neutron
 Importance: Low
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340210

Title:
  NSX: _convert_to_nsx_transport_zones should not be in the plugin class

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This is clearly a utility function and should therefore be moved into
  the neutron.plugins.vmware.common.nsx_utils module.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1340210/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276530] Re: memorycache#get scans the whole cache for expired items on every fetch

2014-07-10 Thread Ben Nemec
The in-memory cache isn't intended for production use anyway, so we
don't want to spend a bunch of time optimizing it.

** Changed in: oslo
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276530

Title:
  memorycache#get scans the whole cache for expired items on every fetch

Status in OpenStack Compute (Nova):
  Triaged
Status in Oslo - a Library of Common OpenStack Code:
  Won't Fix

Bug description:
  Every time an item is fetched from the memory cache, the whole cache
  is scanned for expired items:

  
https://github.com/openstack/nova/blob/master/nova/openstack/common/memorycache.py#L63-L67

  This is not the right place to expire items - a large cache can become
  slow.  There should be a more sensible approach to the (difficult)
  problem of cache expiry.
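A minimal sketch of the two approaches (not the actual oslo/nova code — class names and TTL handling are illustrative): the first mirrors the scan-on-every-get pattern the report objects to; the second checks only the requested key, keeping fetches O(1):

```python
import time

class ScanOnGetCache:
    """Sketch of the reported pattern: every get() walks the whole cache
    evicting expired entries, so with a large cache every fetch is O(n)."""

    def __init__(self):
        self._cache = {}  # key -> (expiry_timestamp, value)

    def set(self, key, value, ttl=60):
        self._cache[key] = (time.time() + ttl, value)

    def get(self, key):
        now = time.time()
        # Full scan on every fetch -- the behaviour the bug objects to.
        for k, (timeout, _value) in list(self._cache.items()):
            if now >= timeout:
                del self._cache[k]
        entry = self._cache.get(key)
        return entry[1] if entry else None

class LazyExpiryCache(ScanOnGetCache):
    """Alternative sketch: check only the requested key, keeping get()
    O(1); a separate periodic sweep would reclaim untouched stale keys."""

    def get(self, key):
        entry = self._cache.get(key)
        if entry is None:
            return None
        timeout, value = entry
        if time.time() >= timeout:
            del self._cache[key]
            return None
        return value
```

The trade-off, as the description hints, is that lazy expiry alone never reclaims memory for keys that are no longer fetched — which is part of why cache expiry is a genuinely difficult problem.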

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1276530/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340040] Re: Can't able to delete pseudo folder under container

2014-07-10 Thread Samuel Merritt
When you create pseudo-directory objects in Swift, they are stored as
plain old objects whose names happen to contain slashes. You're trying
to delete the object "test1/" (with a slash) by running "swift delete
test test1" (without a slash), and those are two different names, so
Swift correctly returns a 404.

In the second comment, you've forgotten the container name in the "swift
delete" command, so it's not working.

I think Swift is performing as intended here.
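A tiny illustration of the naming point, with a plain dict standing in for the container's object namespace:

```python
# Swift object names are opaque strings, so the pseudo-directory marker
# "test1/" and the name "test1" are two different keys.
container = {"test1/": b""}        # marker object created by the dashboard

assert "test1" not in container    # "swift delete test test1" -> 404
del container["test1/"]            # exact name, trailing slash included
assert not container               # now actually gone
```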

** Changed in: swift
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1340040

Title:
  Can't able to delete pseudo folder under container

Status in OpenStack Dashboard (Horizon):
  Incomplete
Status in OpenStack Object Storage (Swift):
  Invalid

Bug description:
  Hi all,

  I can't delete pseudo folders created under a container. I installed
  OpenStack using packstack all-in-one on a CentOS machine.

  I can create multiple containers and upload objects into them without a
  problem, but when I create a pseudo folder and click on it, the dashboard
  goes to "something went wrong". I fixed that temporarily using the fix
  here https://bugs.launchpad.net/horizon/+bug/131

  Now I can create subfolders and upload objects into them, but I can't
  delete any of these pseudo folders.
  When I try to delete a pseudo folder (e.g. test), I get the error
  "Error: You are not allowed to delete object:test".

  When using the CLI, I get this:
  [root@icestack ~(keystone_admin)]# swift list test
  test1/
  [root@icestack ~(keystone_admin)]#
  [root@icestack ~(keystone_admin)]#
  [root@icestack ~(keystone_admin)]# swift delete test test1
  Object 'test/test1' not found

  I think this is a known issue; is there any workaround for this problem?

  Thanks & Regards,
  Anand TS

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1340040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1212428] Re: compute_node_get_all slow as molasses

2014-07-10 Thread Joe Gordon
** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1212428

Title:
  compute_node_get_all slow as molasses

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  compute_node_get_all() joins compute_node_stats and with a large
  number of compute nodes and a moderate number of stats entries per
  compute node, this is extremely slow.

  http://paste.openstack.org/show/44162/
  http://paste.openstack.org/show/44143/

  I believe the problem stems from the fact that each compute node stat
  is contained in its own row.  With 16K compute nodes and 'x' stats
  being kept per node, that translates into 16K*x rows being returned
  from the SQL server.
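Back-of-the-envelope arithmetic for the blow-up described above (the per-node stat count is a made-up example value):

```python
compute_nodes = 16000     # "16K compute nodes" from the report
stats_per_node = 10       # hypothetical 'x'

# The join returns one row per (node, stat) pair, so the result set
# grows multiplicatively with the stat count...
rows_joined = compute_nodes * stats_per_node

# ...versus one row per node if the stats were aggregated server-side.
rows_if_aggregated = compute_nodes

print(rows_joined)        # 160000
```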

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1212428/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291791] Re: nova-manage agent create should do param check

2014-07-10 Thread Joe Gordon
** Changed in: nova
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291791

Title:
  nova-manage agent create should do param check

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  [root@xxx ~]# nova-manage agent create --os linux --architecture
  x86 --version 1.0 --url a...@sina.com --md5hash
  
abcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabc

  /usr/lib64/python2.6/site-packages/sqlalchemy/engine/default.py:331: Warning: 
Data truncated for column 'md5hash' at row 1
cursor.execute(statement, parameters)
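A hedged sketch of the missing parameter check (function name and error message are illustrative, not nova's actual code): an MD5 digest is exactly 32 hex characters, so the over-long value above could be rejected up front instead of being silently truncated by the database.

```python
import re

# An MD5 digest rendered as hex is always exactly 32 characters.
MD5_RE = re.compile(r"^[0-9a-fA-F]{32}$")

def validate_md5hash(md5hash):
    """Reject malformed digests before they reach the database."""
    if not MD5_RE.match(md5hash):
        raise ValueError("md5hash must be a 32-character hex digest")
    return md5hash
```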

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1204594] Re: conductor needs entire objects only to use the id value

2014-07-10 Thread Joe Gordon
** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1204594

Title:
  conductor needs entire objects only to use the id value

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  nova/conductor/manager.py  does things like this:

  def compute_node_update(self, context, node, values, prune_stats=False):
      result = self.db.compute_node_update(context, node['id'], values,
                                           prune_stats)
      return jsonutils.to_primitive(result)

  
  Here the conductor API asks for an entire node object but only uses its
  id value, meaning the rest of the data being sent is just wasted space.

  The following conductor methods do this:
  migration_update
  aggregate_host_add
  aggregate_host_delete
  aggregate_metadata_add
  aggregate_metadata_delete
  security_group_rule_get_by_security_group
  compute_node_update
  compute_node_delete
  service_update
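A hedged sketch of the suggested shape (stub database layer, illustrative names — not nova's actual signatures): callers pass just the id, so no unused object payload crosses RPC.

```python
class StubDB:
    """Stand-in for the conductor's database layer (illustrative only)."""
    def compute_node_update(self, context, node_id, values, prune_stats=False):
        return dict(values, id=node_id)

class ConductorManagerSketch:
    def __init__(self):
        self.db = StubDB()

    def compute_node_update(self, context, node_id, values, prune_stats=False):
        # Accepts only the id -- the single field the method actually
        # needs -- instead of a fully serialized compute node object.
        return self.db.compute_node_update(context, node_id, values,
                                           prune_stats)

mgr = ConductorManagerSketch()
result = mgr.compute_node_update(None, 42, {"vcpus": 8})
print(result)   # {'vcpus': 8, 'id': 42}
```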

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1204594/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258625] Re: Need some kind of 'auto' boolean column in the Service table

2014-07-10 Thread Joe Gordon
** Changed in: nova
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258625

Title:
  Need some kind of 'auto' boolean column in the Service table

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Bug 1250049 reported a problem with automatically disabling/enabling a
  host via the libvirt driver, but rather than fix it the right way,
  i.e. add a new column to the Service table which indicates if an admin
  intentionally disabled the host or if nova detected a fail and did it
  automatically, a hack was done instead to prefix the 'disabled_reason'
  with "AUTO:" and build some logic in the driver around that.

  The problem with that approach is the ComputeFilter in the scheduler
  can't perform any kind of retry logic around that if needed, i.e. bug
  1257644.

  Right now if the ComputeFilter encounters a disabled host, it just
  logs it at debug level and skips it.  If the host was automatically
  disabled because of a connection fail, we should at least log that as
  a warning in the scheduler (like we do now for hosts that haven't
  checked in for awhile) - or possibly build some retry logic around
  that to make it more robust in case the connection fail is just a
  hiccup that quickly resolves itself.

  One could maybe argue that some kind of connection retry logic could
  be built into the libvirt driver instead, I wouldn't be against that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1226036] Re: Instance usage audit should be based on deleted_at not terminated_at

2014-07-10 Thread Joe Gordon
The patch was abandoned.

** Changed in: nova
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1226036

Title:
  Instance usage audit should be based on deleted_at not terminated_at

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Currently the DB query used by the instance usage audit,
  instance_get_active_by_window_joined()  uses the terminated_at
  timestamp.

  terminated_at is normally set as part of the instance deletion
  processing, however there are cases where an exception at the wrong
  time could prevent terminated_at from being set.   Also the recent bug
  fixed by this change https://review.openstack.org/#/c/42534/  missed
  out this update altogether, so instances created by a system with that
  bug in-situ will now have large numbers of instances that will be
  continually reported as existing even though the entry in the DB is
  deleted.

  Given that instance_usage_audit is meant to report on instances that
  from the DB perspective existed on a host in the previous audit period
  it would be more consistent to change
  instance_get_active_by_window_joined() to use deleted_at - which is
  set directly by the DB layer when the entry is deleted.

  This would mean instances which have terminated_at set but are not
  deleted would be reported as existing - which is also more consistent
  with the intended behaviour of the audit.

  The only case where terminated_at is not set as part of deletion is
  already reported as a separate bug:
  https://bugs.launchpad.net/nova/+bug/1189554
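A hedged, pure-Python sketch of the proposed semantics (field names are illustrative): an instance counts as existing in the audit window if it was created before the window ends and was not deleted — per deleted_at, not terminated_at — before it begins.

```python
from datetime import datetime

def active_in_window(instances, begin, end):
    # deleted_at is set by the DB layer at deletion time, so unlike
    # terminated_at it cannot be skipped by a mid-delete exception.
    return [i for i in instances
            if i["created_at"] < end
            and (i["deleted_at"] is None or i["deleted_at"] >= begin)]

begin, end = datetime(2013, 9, 1), datetime(2013, 10, 1)
instances = [
    {"id": 1, "created_at": datetime(2013, 8, 1),  "deleted_at": None},
    # terminated_at may be set on this one, but the row is not deleted,
    # so it is still reported -- the behaviour the report argues for:
    {"id": 2, "created_at": datetime(2013, 8, 15), "deleted_at": None},
    {"id": 3, "created_at": datetime(2013, 8, 1),
     "deleted_at": datetime(2013, 8, 20)},   # gone before the window
]
print([i["id"] for i in active_in_window(instances, begin, end)])   # [1, 2]
```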

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1226036/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261827] Re: Bare metal virt driver depends libvirt volume driver code

2014-07-10 Thread Joe Gordon
Since we are in the process of deprecating nova baremetal, we should
focus work on ironic instead.

** Changed in: nova
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261827

Title:
   Bare metal virt driver depends libvirt volume driver code

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  The code within each virt driver's directory nova/virt// is
  considered to be private to that virt driver.

  The baremetal driver, however, imports and depends on libvirt volume
  driver code

  $ grep libvirt volume_driver.py
  from nova.virt.libvirt import utils as libvirt_utils
  CONF.import_opt('volume_drivers', 'nova.virt.libvirt.driver', group='libvirt')
  self._initiator = libvirt_utils.get_iscsi_initiator()
  """The VolumeDriver delegates to nova.virt.libvirt.volume."""
  for driver_str in CONF.libvirt.volume_drivers:

  If this code truly is useful to multiple drivers, then it should be in
  common shared code. Virt drivers should never directly use each
  other's private code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1261827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339879] Re: gate tests fail due to intermittent failures

2014-07-10 Thread nikhil komawar
Thanks, Clark, for the feedback. This seems like a nova/glance bug, so I am
removing it from the infra bug list.

** No longer affects: openstack-ci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1339879

Title:
  gate tests fail due to intermittent failures

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Tests in the Glance gate seem to be failing due to the following
  error:-

  Details: The server has either erred or is incapable of performing the
  requested operation.

  2014-07-09 16:38:38.863 | ==
  2014-07-09 16:38:38.863 | Failed 1 tests - output below:
  2014-07-09 16:38:38.863 | ==
  2014-07-09 16:38:38.863 | 
  2014-07-09 16:38:38.863 | 
tempest.api.compute.images.test_images.ImagesTestXML.test_delete_saving_image[gate]
  2014-07-09 16:38:38.864 | 
---
  2014-07-09 16:38:38.864 | 
  2014-07-09 16:38:38.864 | Captured traceback:
  2014-07-09 16:38:38.864 | ~~~
  2014-07-09 16:38:38.864 | Traceback (most recent call last):
  2014-07-09 16:38:38.864 |   File 
"tempest/api/compute/images/test_images.py", line 42, in 
test_delete_saving_image
  2014-07-09 16:38:38.864 | resp, body = 
self.client.delete_image(image['id'])
  2014-07-09 16:38:38.864 |   File 
"tempest/services/compute/xml/images_client.py", line 136, in delete_image
  2014-07-09 16:38:38.864 | return self.delete("images/%s" % 
str(image_id))
  2014-07-09 16:38:38.864 |   File "tempest/common/rest_client.py", line 
224, in delete
  2014-07-09 16:38:38.864 | return self.request('DELETE', url, 
extra_headers, headers, body)
  2014-07-09 16:38:38.865 |   File "tempest/common/rest_client.py", line 
430, in request
  2014-07-09 16:38:38.865 | resp, resp_body)
  2014-07-09 16:38:38.865 |   File "tempest/common/rest_client.py", line 
526, in _error_checker
  2014-07-09 16:38:38.865 | raise exceptions.ServerFault(message)
  2014-07-09 16:38:38.865 | ServerFault: Got server fault
  2014-07-09 16:38:38.865 | Details: The server has either erred or is 
incapable of performing the requested operation.
  2014-07-09 16:38:38.865 | 


  ref. http://logs.openstack.org/51/105751/1/check/check-tempest-dsvm-
  full/5f646ca/console.html#_2014-07-09_16_38_38_863

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1339879/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261826] Re: Bare metal virt driver depends libvirt image caching code

2014-07-10 Thread Joe Gordon
Since we are in the process of deprecating nova baremetal, we should
focus work on ironic instead.

** Changed in: nova
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261826

Title:
  Bare metal virt driver depends libvirt image caching code

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  The code within each virt driver's directory nova/virt// is
  considered to be private to that virt driver.

  The baremetal driver, however, imports and depends on libvirt image
  caching code

  $ grep imagecache driver.py
  from nova.virt.libvirt import imagecache
  "has_imagecache": True,
  self.image_cache_manager = imagecache.ImageCacheManager()

  $ grep libvirt utils.py
  from nova.virt.libvirt import utils as libvirt_utils
  libvirt_utils.fetch_image(context, target, image_id,

  
  If this code truly is useful to multiple drivers, then it should be in common 
shared code. Virt drivers should never directly use each other's private code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1261826/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1184473] Re: no way to resume a baremetal deployment after restarting n-cpu

2014-07-10 Thread Joe Gordon
Since we are in the process of deprecating nova baremetal, I don't think
we want to fix this.

** Also affects: ironic
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: Triaged => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1184473

Title:
  no way to resume a baremetal deployment after restarting n-cpu

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  New
Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  If the nova-compute process is terminated while the baremetal PXE
  driver is waiting inside activate_node() for baremetal-deploy-helper
  to finish copying the image, there is no way to resume the deployment.
  Currently, recovery from this situation requires that the instance and
  the baremetal node be deleted, possibly manual editing of the nova
  database, and waiting for the compute_manager to trigger
  update_available_resource and reap the dead compute_node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1184473/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1239481] Re: nova baremetal requires manual neutron setup for metadata access

2014-07-10 Thread Joe Gordon
Since we are in the process of deprecating nova baremetal, we should
focus work on ironic instead.


** Also affects: ironic
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: Triaged => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1239481

Title:
  nova baremetal requires manual neutron setup for metadata access

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  New
Status in OpenStack Compute (Nova):
  Opinion
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  A subnet set up with host routes can use a bare metal gateway as long as
  there is a metadata server on the same network:
  neutron subnet-create ... (network, dhcp settings etc) host_routes 
type=dict list=true destination=169.254.169.254/32,nexthop= --gateway_ip=

  But this requires manual configuration - it would be nice if nova
  could configure this as part of bringing up the network for a given
  node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1239481/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1177655] Re: kernel boot command line for baremetal assumes block device UUID is correct root

2014-07-10 Thread Joe Gordon
Since we are in the process of deprecating nova baremetal, we should
focus work on ironic instead.

** Also affects: ironic
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: Triaged => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1177655

Title:
  kernel boot command line for baremetal assumes block device UUID is
  correct root

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  New
Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  It does this by checking blkid on the device it wrote the image to,
  but if the image contains LVM or any other translation layer, this
  won't be the actual UUID to use.

  We may be better off with a LABEL that we can document every image to
  have (e.g. cloudimg-rootfs)

  this can be worked around for now by editing the template

  root=${ROOT} -> root=LABEL=cloudimg-rootfs
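That workaround can be sketched as a template rewrite — the ${ROOT} placeholder is taken from the bug, while the helper function itself is hypothetical:

```python
def use_root_label(cmdline_template, label='cloudimg-rootfs'):
    # Swap the blkid-derived UUID placeholder for a well-known label,
    # which survives LVM or other translation layers inside the image.
    return cmdline_template.replace('root=${ROOT}',
                                    'root=LABEL=%s' % label)

cmdline = use_root_label('console=ttyS0 root=${ROOT} ro')
```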

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1177655/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340149] Re: EC2 tests fail with BotoServerError: 500 Internal Server Error

2014-07-10 Thread Joe Gordon
*** This bug is a duplicate of bug 1338841 ***
https://bugs.launchpad.net/bugs/1338841

** This bug has been marked a duplicate of bug 1338841
   asynchronous connection failed in postgresql jobs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340149

Title:
  EC2 tests fail with BotoServerError: 500 Internal Server Error

Status in OpenStack Compute (Nova):
  New

Bug description:
  Tempest tests fails in check job. The full log could be found at
  http://logs.openstack.org/46/104646/2/check/check-tempest-dsvm-
  postgres-full/a58b623/logs/screen-n-api.txt.gz

  2014-07-10 01:10:53.190 | Captured traceback:
  2014-07-10 01:10:53.190 | ~~~
  2014-07-10 01:10:53.190 | Traceback (most recent call last):
  2014-07-10 01:10:53.190 |   File 
"tempest/thirdparty/boto/test_ec2_security_groups.py", line 32, in 
test_create_authorize_security_group
  2014-07-10 01:10:53.190 | group_description)
  2014-07-10 01:10:53.190 |   File "tempest/services/botoclients.py", line 
82, in func
  2014-07-10 01:10:53.191 | return getattr(conn, name)(*args, **kwargs)
  2014-07-10 01:10:53.191 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/boto/ec2/connection.py",
 line 2976, in create_security_group
  2014-07-10 01:10:53.191 | SecurityGroup, verb='POST')
  2014-07-10 01:10:53.191 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/boto/connection.py",
 line 1164, in get_object
  2014-07-10 01:10:53.191 | response = self.make_request(action, 
params, path, verb)
  2014-07-10 01:10:53.191 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/boto/connection.py",
 line 1090, in make_request
  2014-07-10 01:10:53.191 | return self._mexe(http_request)
  2014-07-10 01:10:53.191 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/boto/connection.py",
 line 1003, in _mexe
  2014-07-10 01:10:53.191 | raise BotoServerError(response.status, 
response.reason, body)
  2014-07-10 01:10:53.191 | BotoServerError: BotoServerError: 500 Internal 
Server Error
  2014-07-10 01:10:53.191 | 
  2014-07-10 01:10:53.192 | 
OperationalError: Unknown error occurred. (req-178ab1f9-e9f6-4b9e-8a3e-56d2bd78c5c2)

  The failure seems to be similar to one in 
https://bugs.launchpad.net/nova/+bug/1315580 , but the issue is not related 
strictly to floating IPs. Here the following tests failed:
  tempest.thirdparty.boto.test_ec2_volumes.EC2VolumesTest.test_create_get_delete
  
tempest.thirdparty.boto.test_ec2_volumes.EC2VolumesTest.test_create_volume_from_snapshot
  
tempest.thirdparty.boto.test_ec2_security_groups.EC2SecurityGroupTest.test_create_authorize_security_group

  The entries of the following kind in screen-n-api log seems to be related to 
the failure:
  2014-07-10 01:07:39.576 ERROR nova.api.ec2 
[req-9bc0089f-9257-401b-bf8e-27df3b45bb4b EC2VolumesTest-1485461725 
EC2VolumesTest-529577160] Unexpected OperationalError raised: 
(OperationalError) asynchronous connection failed None None

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340149/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1248022] Re: Nova scheduler not updated immediately when a baremetal node is added

2014-07-10 Thread Joe Gordon
** Also affects: ironic
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1248022

Title:
  Nova scheduler not updated immediately when a baremetal node is added

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  New
Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  In compute manager, the update_available_resource() periodic task is
  responsible for updating the scheduler's knowledge of baremetal nodes:

  @periodic_task.periodic_task
  def update_available_resource(self, context):
      ...
      nodenames = set(self.driver.get_available_nodes())
      for nodename in nodenames:
          rt = self._get_resource_tracker(nodename)
          rt.update_available_resource(context)

  update_available_resource() is also called at service startup

  This means that you have to wait up to 60 seconds for a node to become
  available
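The up-to-60-second window follows directly from the polling model; a dependency-free sketch, assuming the periodic task ticks at the default 60-second interval:

```python
PERIODIC_INTERVAL = 60.0  # assumed default tick of the periodic task

def seconds_until_visible(registered_at, last_tick):
    """Delay before the scheduler learns of a node added between ticks."""
    next_tick = last_tick + PERIODIC_INTERVAL
    return max(0.0, next_tick - registered_at)

# a node registered one second after a tick waits ~59 seconds
delay = seconds_until_visible(registered_at=1.0, last_tick=0.0)
```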

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1248022/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1248422] Re: nova/baremetal-compute-ipmi.filters issues

2014-07-10 Thread Joe Gordon
Since we are in the process of deprecating nova baremetal, we should
focus work on ironic instead.


** Also affects: ironic
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: Triaged => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1248422

Title:
  nova/baremetal-compute-ipmi.filters issues

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  New
Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  From ttx, issues in nova/baremetal-compute-ipmi.filters

   * allows ipmitool, but ipmitool isn't called as root
   * allows kill, but kill is used against a process which is not run as root

  These are the only two filters in the file, so we should be able to
  just remove the file.

  We also need to remove run_as_root from:

  utils.execute('kill', '-TERM', str(console_pid),
run_as_root=True,
check_exit_code=[0, 99])
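Since the console process is unprivileged, a plain signal from the service user suffices. A hedged sketch of the same call without rootwrap — the helper name is hypothetical, and the already-gone case stands in for the original filter's exit code 99:

```python
import os
import signal
import subprocess

def stop_console(console_pid):
    # No run_as_root / rootwrap needed: the target is not run as root.
    try:
        os.kill(console_pid, signal.SIGTERM)
    except ProcessLookupError:
        pass  # process already gone -- the check_exit_code=[0, 99] case

proc = subprocess.Popen(['sleep', '60'])
stop_console(proc.pid)
proc.wait(timeout=5)
```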

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1248422/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1212418] Re: SQLAlchemy performs poorly on large result sets

2014-07-10 Thread Joe Gordon
with the addition of Mike Bayer (sqlalchemy developer), it looks like
there is hope for us on this one.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1212418

Title:
  SQLAlchemy performs poorly on large result sets

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  I'm not exactly sure what the interaction is that causes this problem,
  but it is evidenced very nicely by calling compute_node_get_all. While
  the MySQL query returns this data in about 2 seconds, it takes another
  53 seconds for SQLAlchemy to shove all those results into a list.

  Here's the script being run:

  http://paste.openstack.org/show/44079/

  Here are results:

  http://paste.openstack.org/show/44143/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1212418/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316173] Re: nova.virt.imagehandler not used

2014-07-10 Thread Joe Gordon
this code has been removed

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1316173

Title:
  nova.virt.imagehandler not used

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  jaypipes@cranky:~/repos/openstack/nova$ ack-grep load_image_handlers 
--ignore-dir tests
  nova/virt/imagehandler/__init__.py
  70:def load_image_handlers(driver):

  jaypipes@cranky:~/repos/openstack/nova$ ack-grep handle_image --ignore-dir 
tests
  nova/virt/imagehandler/__init__.py
  116:def handle_image(context=None, image_id=None,

  AFAICT, all the code in nova.virt.imagehandlers is unused.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1316173/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254705] Re: _rule_dict_last_step in ec2/cloud.py doesn't respect Security Group API

2014-07-10 Thread Joe Gordon
this has been fixed.

** Changed in: nova
   Status: New => Fix Released

** Changed in: nova
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1254705

Title:
  _rule_dict_last_step in ec2/cloud.py doesn't respect Security Group
  API

Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  Maybe this is a bug.

  I observed that the _rule_dict_last_step function in ec2/cloud.py
  always uses nova's built-in database functions to resolve security
  group names, but never calls the configured security group API
  instance when security groups are resolved. The potential error is
  fixed by this patch -- which may be incomplete:

  --- api/ec2/cloud.py_old  2013-11-25 13:27:04.036359251 +0100
  +++ api/ec2/cloud.py  2013-11-25 13:27:51.308549582 +0100
  @@ -590,9 +590,8 @@
   source_project_id = self._get_source_project_id(context,
   source_security_group_owner_id)
   
  -source_security_group = db.security_group_get_by_name(
  +source_security_group = self.security_group_api.get(
   context.elevated(),
  -source_project_id,
   source_security_group_name)
   
   notfound = exception.SecurityGroupNotFound

  Is this a problem in my installation? Or is there a logical problem?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1254705/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1218745] Re: no such option 'osapi_compute_ext_list' while running single unit test through nosetests

2014-07-10 Thread Joe Gordon
tox -epy27
nova.tests.api.openstack.compute.contrib.test_disk_config.DiskConfigTestCase.test_create_server_detect_from_image

worked for me, and as we are moving towards not using run_tests, marking
this as opinion, feel free to propose a patch to fix this though.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1218745

Title:
  no such option 'osapi_compute_ext_list' while running single unit test
  through nosetests

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  This may be because we have not imported the source code in
  'nova.api.openstack.compute.contrib', so the config option is not registered.
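A dependency-free sketch of this failure mode — an option can only be overridden after the module that registers it has been imported. The tiny registry below stands in for oslo.config and is not its real API:

```python
REGISTRY = {}

def register_opt(name, default):
    # normally runs as a side effect of importing the extension module
    REGISTRY.setdefault(name, default)

def set_override(name, value):
    # mirrors CONF.set_override raising NoSuchOptError for unknown opts
    if name not in REGISTRY:
        raise KeyError('no such option: %s' % name)
    REGISTRY[name] = value

# the test's flags() call runs before the registering module was imported:
try:
    set_override('osapi_compute_ext_list', ['Disk_config'])
    raised = False
except KeyError:
    raised = True

# once the registering code has run, the same override succeeds
register_opt('osapi_compute_ext_list', [])
set_override('osapi_compute_ext_list', ['Disk_config'])
```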

  $ nosetests nova.tests.api.openstack.compute.contrib.test_disk_config
  
  ==
  ERROR: 
nova.tests.api.openstack.compute.contrib.test_disk_config.DiskConfigTestCase.test_create_server_detect_from_image

  ...

  Traceback (most recent call last):
    File 
"/home/hzwangpan/nova/nova/tests/api/openstack/compute/contrib/test_disk_config.py",
 line 50, in setUp
  osapi_compute_ext_list=['Disk_config'])
    File "/home/hzwangpan/nova/nova/test.py", line 273, in flags
  CONF.set_override(k, v, group)
    File "/usr/local/lib/python2.7/dist-packages/oslo/config/cfg.py", line 
1540, in __inner
  result = f(self, *args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/oslo/config/cfg.py", line 
1783, in set_override
  opt_info = self._get_opt_info(name, group)
    File "/usr/local/lib/python2.7/dist-packages/oslo/config/cfg.py", line 
2029, in _get_opt_info
  raise NoSuchOptError(opt_name, group)
  NoSuchOptError: no such option: osapi_compute_ext_list

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1218745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1176730] Re: Use oslo-incubator weights and filters in nova

2014-07-10 Thread Joe Gordon
with the move to gantt, I think this is the wrong direction.

** Changed in: nova
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1176730

Title:
  Use oslo-incubator weights and filters in nova

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Some weight and filters code was copied from nova to oslo-incubator as
  other projects use it such as cinder.

  Now we have code duplicated (and already veering off in different
  directions) between nova and oslo-incubator including:

   * JsonFilter
   * AvailabilityZoneFilter
   * BaseFilter

  This bug proposes to move all of these into oslo-incubator only.

  We should also consider which other host weighers and filters should
  be moved there.

  This is part of blueprint entrypoints-plugins - as the BaseFilter and
  BaseWeight already use stevedore for plugin loading.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1176730/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1180088] Re: ./run_tests.sh -pep8 no longer works

2014-07-10 Thread Joe Gordon
to run pep8 tests, please use 'tox -epep8'

** Changed in: nova
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1180088

Title:
  ./run_tests.sh -pep8 no longer works

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  This used to work but now it compalins about a --doctest option that
  is being passed to it:

  $ ./run_tests.sh -pep8
  Running PEP8 and HACKING compliance check...
  Usage: pep8 [options] input ...

  pep8: error: no such option: --doctest

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1180088/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340340] [NEW] 3 tempest.thirdparty.boto.* tests fail

2014-07-10 Thread Arnaud Legendre
Public bug reported:

tempest.thirdparty.boto.test_ec2_security_groups.EC2SecurityGroupTest.test_create_authorize_security_group
tempest.thirdparty.boto.test_ec2_volumes.EC2VolumesTest.test_create_get_delete
tempest.thirdparty.boto.test_ec2_volumes.EC2VolumesTest.test_create_volume_from_snapshot

with the following error:
BotoServerError: BotoServerError: 500 Internal Server Error

OperationalError: Unknown error occurred. (req-af61466b-dc4b-4d33-8ca7-8a25543dc246)

full log: http://logs.openstack.org/89/89989/9/gate/gate-tempest-dsvm-
postgres-full/bc0e2b8/console.html

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ec2

** Tags added: ec2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340340

Title:
  3 tempest.thirdparty.boto.* tests fail

Status in OpenStack Compute (Nova):
  New

Bug description:
  
tempest.thirdparty.boto.test_ec2_security_groups.EC2SecurityGroupTest.test_create_authorize_security_group
  tempest.thirdparty.boto.test_ec2_volumes.EC2VolumesTest.test_create_get_delete
  
tempest.thirdparty.boto.test_ec2_volumes.EC2VolumesTest.test_create_volume_from_snapshot

  with the following error:
  BotoServerError: BotoServerError: 500 Internal Server Error
  
  OperationalError: Unknown error occurred. (req-af61466b-dc4b-4d33-8ca7-8a25543dc246)

  full log: http://logs.openstack.org/89/89989/9/gate/gate-tempest-dsvm-
  postgres-full/bc0e2b8/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340340/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340366] [NEW] Add datastore and version to trove backup screens

2014-07-10 Thread Andrew Bramley
Public bug reported:

Additional data is available for trove backups: Datastore and Datastore
Version.

Note: There is already a blueprint to add this information to the trove
Database Instance screen.

This bug is to also add this information to the trove backup screens, by
adding it to the backups table and the backup details view.

Note: we should use the same code / formatting as the other blueprint, to
ensure consistency in how this looks in the dashboard.

** Affects: horizon
 Importance: Undecided
 Assignee: Andrew Bramley (andrlw)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Andrew Bramley (andrlw)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1340366

Title:
  Add datastore and version to trove backup screens

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Additional data is available for trove backups: Datastore and Datastore
  Version.

  Note: There is already a blueprint to add this information to the
  trove Database Instance screen.

  This bug is to also add this information to the trove backup screens, by
  adding it to the backups table and the backup details view.

  Note: we should use the same code / formatting as the other blueprint, to
  ensure consistency in how this looks in the dashboard.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1340366/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276530] Re: memorycache#get scans the whole cache for expired items on every fetch

2014-07-10 Thread Joe Gordon
** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276530

Title:
  memorycache#get scans the whole cache for expired items on every fetch

Status in Oslo - a Library of Common OpenStack Code:
  Won't Fix

Bug description:
  Every time an item is fetched from the memory cache, the whole cache
  is scanned for expired items:

  
https://github.com/openstack/nova/blob/master/nova/openstack/common/memorycache.py#L63-L67

  This is not the right place to expire items - a large cache can become
  slow.  There should be a more sensible approach to the (difficult)
  problem of cache expiry.
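  One sensible alternative is per-key lazy expiry: on get(), check only the
  requested entry's deadline instead of scanning the whole cache. A minimal
  sketch — this is not the oslo memorycache API, just an illustration of the
  approach:

```python
import time

class LazyCache(object):
    def __init__(self):
        self._data = {}  # key -> (expires_at, value)

    def set(self, key, value, ttl):
        self._data[key] = (time.time() + ttl, value)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.time() >= expires_at:
            del self._data[key]  # O(1): expire only the key being read
            return None
        return value

cache = LazyCache()
cache.set('fresh', 1, ttl=60)
cache.set('stale', 2, ttl=-1)  # deadline already in the past
```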

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo/+bug/1276530/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338841] Re: asynchronous connection failed in postgresql jobs

2014-07-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/105854
Committed: 
https://git.openstack.org/cgit/openstack-dev/devstack/commit/?id=94c654ef37f6a0247a307578f3240f97201a3cba
Submitter: Jenkins
Branch: master

commit 94c654ef37f6a0247a307578f3240f97201a3cba
Author: Matt Riedemann 
Date:   Wed Jul 9 12:38:36 2014 -0700

Set postgresql max_connections=200

Now that we have multiple workers running by default
in various projects (nova/cinder/glance/trove), the
postgresql job is failing intermittently with connection
failures to the database.

The default max_connections for postgresql is 100 so here
we double that.

Note that the default max_connections for mysql used to
be 100 but is now 151, so this change brings the postgresql
configuration more in line with mysql.

Change-Id: I2fcae8184a82e303103795a7bf57c723e27190c9
Closes-Bug: #1338841


** Changed in: devstack
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338841

Title:
  asynchronous connection failed in postgresql jobs

Status in Cinder:
  Invalid
Status in devstack - openstack dev environments:
  Fix Released
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The trace for the failure is here:

  http://logs.openstack.org/57/105257/4/check/check-tempest-dsvm-
  postgres-
  full/f72b818/logs/tempest.txt.gz?level=TRACE#_2014-07-07_23_43_37_250

  This is the console error:

  2014-07-07 23:44:59.590 | tearDownClass 
(tempest.thirdparty.boto.test_ec2_keys.EC2KeysTest)
  2014-07-07 23:44:59.590 | 
-
  2014-07-07 23:44:59.590 | 
  2014-07-07 23:44:59.590 | Captured traceback:
  2014-07-07 23:44:59.590 | ~~~
  2014-07-07 23:44:59.590 | Traceback (most recent call last):
  2014-07-07 23:44:59.590 |   File "tempest/thirdparty/boto/test.py", line 
272, in tearDownClass
  2014-07-07 23:44:59.590 | raise 
exceptions.TearDownException(num=fail_count)
  2014-07-07 23:44:59.590 | TearDownException: 1 cleanUp operation failed

  There isn't much in the n-api logs, just the 400 response.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1338841/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308727] Re: [OSSA 2014-023] XSS in Horizon Heat template - resource name (CVE-2014-3473)

2014-07-10 Thread Tristan Cacqueray
** Changed in: ossa
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1308727

Title:
  [OSSA 2014-023] XSS in Horizon Heat template - resource name
  (CVE-2014-3473)

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Committed
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  The attached yaml will result in a Cross Site Script when viewing the
  resources or events of an Orchestration stack in the following paths:

  /project/stacks/stack/{stack_id}/?tab=stack_details__resources
  /project/stacks/stack/{stack_id}/?tab=stack_details__events

  The A tag's href attribute does not properly URL encode the name of
  the resource string resulting in escaping out of the attribute and
  arbitrary HTML written to the page.
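  A sketch of the general class of fix (not Horizon's actual code): the
  resource name must be escaped once for the URL inside the href attribute
  and once for the HTML text context. The helper and route shape below are
  illustrative assumptions:

```python
from html import escape
from urllib.parse import quote

def resource_link(stack_id, resource_name):
    # percent-encode path segments so '"' and '>' cannot break the attribute
    href = '/project/stacks/stack/%s/%s/' % (quote(stack_id, safe=''),
                                             quote(resource_name, safe=''))
    # HTML-escape both the attribute value and the visible text
    return '<a href="%s">%s</a>' % (escape(href, quote=True),
                                    escape(resource_name))

link = resource_link('abc-123', '"><script>alert(1)</script>')
```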

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1308727/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340405] [NEW] Big Switch plugin missing in migration for agents table

2014-07-10 Thread Kevin Benton
Public bug reported:

There is an issue with the db migration script creating the agent bindings 
table for the Big Switch plugin. [1]
This seems to be caused by a recent addition of the Big Switch plugin to the 
agent bindings table but not to the agents table. [2]


1. 
https://groups.google.com/a/openflowhub.org/forum/#!topic/floodlight-dev/k7V-ssEtJKQ
2. 
https://github.com/openstack/neutron/commit/d3be7b040eaa61a4d0ac617026cf5c9132d3831e

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340405

Title:
  Big Switch plugin missing in migration for agents table

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  There is an issue with the db migration script creating the agent bindings 
table for the Big Switch plugin. [1]
  This seems to be caused by a recent addition of the Big Switch plugin to the 
agent bindings table but not to the agents table. [2]

  
  1. 
https://groups.google.com/a/openflowhub.org/forum/#!topic/floodlight-dev/k7V-ssEtJKQ
  2. 
https://github.com/openstack/neutron/commit/d3be7b040eaa61a4d0ac617026cf5c9132d3831e

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1340405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340411] [NEW] Evacuate Fails 'Invalid state of instance files' using Ceph Ephemeral RBD

2014-07-10 Thread hifieli
Public bug reported:

Greetings,


We are unable to evacuate instances from a failed compute node using shared
storage. We are using Ceph Ephemeral RBD as the storage medium.


Steps to reproduce:

nova evacuate --on-shared-storage 6e2081ec-2723-43c7-a730-488bb863674c node-24
or
POST  to http://ip-address:port/v2/tenant_id/servers/server_id/action with 
{"evacuate":{"host":"node-24","onSharedStorage":1}}


Here is what shows up in the logs:


<180>Jul 10 20:36:48 node-24 nova-nova.compute.manager AUDIT: Rebuilding instance
<179>Jul 10 20:36:48 node-24 nova-nova.compute.manager ERROR: Setting instance 
vm_state to ERROR
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5554, 
in _error_out_instance_on_exception
yield
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2434, 
in rebuild_instance
_("Invalid state of instance files on shared"
InvalidSharedStorage: Invalid state of instance files on shared storage
<179>Jul 10 20:36:49 node-24 nova-oslo.messaging.rpc.dispatcher ERROR: 
Exception during message handling: Invalid state of instance files on shared 
storage
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", 
line 133, in _dispatch_and_reply
incoming.message))
  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", 
line 176, in _dispatch
return self._do_dispatch(endpoint, method, ctxt, args)
  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", 
line 122, in _do_dispatch
result = getattr(endpoint, method)(ctxt, **new_args)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 393, in 
decorated_function
return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 
139, in inner
return func(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped
payload)
  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 68, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in wrapped
return f(self, context, *args, **kw)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 274, in 
decorated_function
pass
  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 68, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 260, in 
decorated_function
return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 327, in 
decorated_function
function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 303, in 
decorated_function
e, sys.exc_info())
  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 68, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 290, in 
decorated_function
return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2434, 
in rebuild_instance
_("Invalid state of instance files on shared"
InvalidSharedStorage: Invalid state of instance files on shared storage
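The exception comes from a consistency guard in rebuild_instance: the caller's on-shared-storage claim must match what the driver actually finds on disk. Paraphrased as a boolean sketch (a simplification of the real check, which inspects the instance directory — with RBD-backed ephemeral disks the driver finds no instance files where the shared-storage path expects them):

```python
class InvalidSharedStorage(Exception):
    pass

def check_instance_files(on_shared_storage, driver_sees_instance_files):
    # Paraphrase of the guard: claim and observed state must agree.
    if on_shared_storage != driver_sees_instance_files:
        raise InvalidSharedStorage(
            'Invalid state of instance files on shared storage')

check_instance_files(True, True)   # consistent claim: passes
try:
    # evacuate --on-shared-storage, but no instance files found (RBD case)
    check_instance_files(True, False)
    raised = False
except InvalidSharedStorage:
    raised = True
```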

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ceph evacuate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340411

Title:
  Evacuate Fails 'Invalid state of instance files' using Ceph Ephemeral
  RBD

Status in OpenStack Compute (Nova):
  New

Bug description:
  Greetings,

  
  We are unable to evacuate instances from a failed compute node using shared
storage. We are using Ceph Ephemeral RBD as the storage medium.

  
  Steps to reproduce:

  nova evacuate --on-shared-storage 6e2081ec-2723-43c7-a730-488bb863674c node-24
  or
  POST  to http://ip-address:port/v2/tenant_id/servers/server_id/action with 
  {"evacuate":{"host":"node-24","onSharedStorage":1}}

  
  Here is what shows up in the logs:

  
  <180>Jul 10 20:36:48 node-24 nova-nova.compute.manager AUDIT: Rebuilding instance
  <179>Jul 10 20:36:48 node-24 nova-nova.compute.manager ERROR: Setting 
instance vm_state to ERROR
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5554, 
in _error_out_instance_on_exception
  yield
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2434, 
in rebuild_instance
  _("Invalid state of instance files on shared"
  InvalidSharedStorage: Invalid state of instance files on shared storage
  <179>Jul 10 20:36:49 node-24 nova-os

[Yahoo-eng-team] [Bug 1308419] Re: requesting empty task list fails when using v2 api with registry

2014-07-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/106012
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=36b502ccdc34a30b4c553d2c398f690b903c469c
Submitter: Jenkins
Branch:master

commit 36b502ccdc34a30b4c553d2c398f690b903c469c
Author: Stuart McLaren 
Date:   Thu Jul 10 10:40:56 2014 +

Add task functions to v2 registry

If local changes are made to run the v2 functional
tests with the v2 registry enabled, and the tests
are then run with:

$ ./run_tests.sh glance.tests.functional.v2

all tests pass except for 'test_task_lifecycle'.

This test fails because the v2 registry does not define the
'task_get_all' or 'task_create' functions.

With these defined the tests pass when run with the v2 registry enabled.

This is a prerequisite to running the v2 functional tests with the
registry enabled.

Change-Id: I588af10105b19087d06f7f13a6f75523595d4a23
Closes-Bug: 1308419


** Changed in: glance
   Status: Invalid => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1308419

Title:
  requesting empty task list fails when using v2 api with registry

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed

Bug description:
  $ ./run_tests.sh --subunit 
glance.tests.functional.v2.test_tasks.TestTasks.test_task_lifecycle
  Running `tools/with_venv.sh python -m glance.openstack.common.lockutils 
python setup.py testr --testr-args='--subunit --concurrency 1  --subunit 
glance.tests.functional.v2.test_tasks.TestTasks.test_task_lifecycle'`
  glance.tests.functional.v2.test_tasks.TestTasks
  test_task_lifecycle   FAIL

  Slowest 1 tests took 12.51 secs:
  glance.tests.functional.v2.test_tasks.TestTasks
  test_task_lifecycle   
12.51

  ==
  FAIL: glance.tests.functional.v2.test_tasks.TestTasks.test_task_lifecycle
  --
  Traceback (most recent call last):
  _StringException: Traceback (most recent call last):
File "/home/ubuntu/glance/glance/tests/functional/v2/test_tasks.py", line 
70, in test_task_lifecycle
  self.assertEqual(200, response.status_code)
File 
"/home/ubuntu/glance/.venv/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 321, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/ubuntu/glance/.venv/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 406, in assertThat
  raise mismatch_error
  MismatchError: 200 != 500

  
  Ran 2 tests in 26.697s

  FAILED (failures=1)

  
  2014-04-16 08:56:22,297 INFO File 
"/home/ubuntu/glance/.venv/local/lib/python2.7/site-packages/eventlet/wsgi.py", 
line 389, in handle_one_response
  2014-04-16 08:56:22,297 INFO result = self.application(self.environ, 
start_response)
  2014-04-16 08:56:22,297 INFO File 
"/home/ubuntu/glance/.venv/local/lib/python2.7/site-packages/webob/dec.py", 
line 130, in __call__
  2014-04-16 08:56:22,297 INFO resp = self.call_func(req, *args, **self.kwargs)
  2014-04-16 08:56:22,297 INFO File 
"/home/ubuntu/glance/.venv/local/lib/python2.7/site-packages/webob/dec.py", 
line 195, in call_func
  2014-04-16 08:56:22,297 INFO return self.func(req, *args, **kwargs)
  2014-04-16 08:56:22,297 INFO File "glance/common/wsgi.py", line 378, in 
__call__
  2014-04-16 08:56:22,297 INFO response = req.get_response(self.application)
  2014-04-16 08:56:22,297 INFO File 
"/home/ubuntu/glance/.venv/local/lib/python2.7/site-packages/webob/request.py", 
line 1320, in send
  2014-04-16 08:56:22,297 INFO application, catch_exc_info=False)
  2014-04-16 08:56:22,298 INFO File 
"/home/ubuntu/glance/.venv/local/lib/python2.7/site-packages/webob/request.py", 
line 1284, in call_application
  2014-04-16 08:56:22,298 INFO app_iter = application(self.environ, 
start_response)
  2014-04-16 08:56:22,298 INFO File 
"/home/ubuntu/glance/.venv/local/lib/python2.7/site-packages/webob/dec.py", 
line 130, in __call__
  2014-04-16 08:56:22,298 INFO resp = self.call_func(req, *args, **self.kwargs)
  2014-04-16 08:56:22,298 INFO File 
"/home/ubuntu/glance/.venv/local/lib/python2.7/site-packages/webob/dec.py", 
line 195, in call_func
  2014-04-16 08:56:22,298 INFO return self.func(req, *args, **kwargs)
  2014-04-16 08:56:22,298 INFO File "glance/common/wsgi.py", line 378, in 
__call__
  2014-04-16 08:56:22,298 INFO response = req.get_response(self.application)
  2014-04-16 08:56:22,298 INFO File 
"/home/ubuntu/glance/.venv/local/lib/python2.7/site-packages/webob/request.py", 
line 1320, in send
  2014-04-16 08:56:22,298 INFO application, catch_exc_info=False)
  2014-04-16 08:56:22,298 INFO File 
"/home/ubuntu/glance/.venv/local/lib/python2.7/s

[Yahoo-eng-team] [Bug 1340429] [NEW] saharaclient causes cinder_tests to fail

2014-07-10 Thread Julie Gravel
Public bug reported:

I tried to run cinder_tests individually, using this command: ./run_tests.sh 
openstack_dashboard.test.api_tests.cinder_tests.
Looks like the __init__.py file under openstack_dashboard/api is missing the 
declarations for sahara. Note that I didn't see any error messages when I ran 
the full ./run_tests.sh; I only saw the errors when I ran cinder_tests 
individually.

The tests failed with the following messages:

==
ERROR: test_volume_list 
(openstack_dashboard.test.api_tests.cinder_tests.CinderApiTests)
--
Traceback (most recent call last):
  File "/home/stack/horizon/openstack_dashboard/test/helpers.py", line 266, in 
setUp
self._original_saharaclient = api.sahara.client
AttributeError: 'module' object has no attribute 'sahara'

==
ERROR: test_volume_snapshot_list 
(openstack_dashboard.test.api_tests.cinder_tests.CinderApiTests)
--
Traceback (most recent call last):
  File "/home/stack/horizon/openstack_dashboard/test/helpers.py", line 266, in 
setUp
self._original_saharaclient = api.sahara.client
AttributeError: 'module' object has no attribute 'sahara'

==
ERROR: test_volume_snapshot_list_no_volume_configured 
(openstack_dashboard.test.api_tests.cinder_tests.CinderApiTests)
--
Traceback (most recent call last):
  File "/home/stack/horizon/openstack_dashboard/test/helpers.py", line 266, in 
setUp
self._original_saharaclient = api.sahara.client
AttributeError: 'module' object has no attribute 'sahara'

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1340429

Title:
  saharaclient causes cinder_tests to fail

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I tried to run cinder_tests individually, using this command: ./run_tests.sh 
openstack_dashboard.test.api_tests.cinder_tests.
  Looks like the __init__.py file under openstack_dashboard/api is missing the 
declarations for sahara. Note that I didn't see any error messages when I ran 
the full ./run_tests.sh; I only saw the errors when I ran cinder_tests 
individually.

  The tests failed with the following messages:

  ==
  ERROR: test_volume_list 
(openstack_dashboard.test.api_tests.cinder_tests.CinderApiTests)
  --
  Traceback (most recent call last):
File "/home/stack/horizon/openstack_dashboard/test/helpers.py", line 266, 
in setUp
  self._original_saharaclient = api.sahara.client
  AttributeError: 'module' object has no attribute 'sahara'

  ==
  ERROR: test_volume_snapshot_list 
(openstack_dashboard.test.api_tests.cinder_tests.CinderApiTests)
  --
  Traceback (most recent call last):
File "/home/stack/horizon/openstack_dashboard/test/helpers.py", line 266, 
in setUp
  self._original_saharaclient = api.sahara.client
  AttributeError: 'module' object has no attribute 'sahara'

  ==
  ERROR: test_volume_snapshot_list_no_volume_configured 
(openstack_dashboard.test.api_tests.cinder_tests.CinderApiTests)
  --
  Traceback (most recent call last):
File "/home/stack/horizon/openstack_dashboard/test/helpers.py", line 266, 
in setUp
  self._original_saharaclient = api.sahara.client
  AttributeError: 'module' object has no attribute 'sahara'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1340429/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340431] [NEW] NSX: network gateway connection doesn't validate vlan id

2014-07-10 Thread Salvatore Orlando
Public bug reported:

When the transport type for a network gateway connection is vlan, the
neutron code does not validate that the segmentation id is between 0 and
4095.

The request is then sent to NSX, where it fails. However, a 500 error is
returned to the neutron API user because of the backend failure.

The operation should return a 400 and ideally not reach the backend at
all.
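The missing check amounts to a range validation done before any backend call. An illustrative sketch (names are mine, not Neutron's actual code; the exception stands in for whatever Neutron maps to HTTP 400):

```python
# Validate the VLAN segmentation id up front so the caller gets a 400,
# instead of forwarding a bad request to NSX and surfacing a 500.
MIN_VLAN_TAG = 0
MAX_VLAN_TAG = 4095


class InvalidInput(Exception):
    """Stand-in for the exception Neutron translates to HTTP 400."""


def validate_gw_connection(transport_type, segmentation_id):
    if transport_type == 'vlan' and not (
            MIN_VLAN_TAG <= segmentation_id <= MAX_VLAN_TAG):
        raise InvalidInput("segmentation id %s out of range [%d, %d]"
                           % (segmentation_id, MIN_VLAN_TAG, MAX_VLAN_TAG))
```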

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340431

Title:
  NSX: network gateway connection doesn't validate vlan id

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When the transport type for a network gateway connection is vlan, the
  neutron code does not validate that the segmentation id is between 0
  and 4095.

  The request is then sent to NSX, where it fails. However, a 500 error
  is returned to the neutron API user because of the backend failure.

  The operation should return a 400 and ideally not reach the backend
  at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1340431/+subscriptions



[Yahoo-eng-team] [Bug 1031807] Re: nova.volume.san.HpSanISCSIDriver can't ssh to HP/Lefthand Units

2014-07-10 Thread Dave Gershon
** Changed in: cinder
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1031807

Title:
  nova.volume.san.HpSanISCSIDriver can't ssh to HP/Lefthand Units

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  HP/Lefthand storage requires password-interactive authentication. (The
  Windows command-line tool can generate an encrypted hash of the user-
  password, but even with that there is no provision for saving a public
  key to the SAN.)

  There are several workarounds in various places on the net for faking 
keyboard-interactive with paramiko. Without a workaround, any attempt to create 
or remove a volume results in:
  2012-08-01 10:25:53 TRACE nova.rpc.amqp BadAuthenticationType: Bad 
authentication type (allowed_types=['publickey', 'keyboard-interactive'])
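One common shape of such a workaround, sketched under the assumption that paramiko's `Transport.auth_interactive` is available (it is in the versions in question); the helper names here are illustrative:

```python
def make_interactive_handler(password):
    """Answer every keyboard-interactive prompt (typically "Password:")
    with the same stored password."""
    def handler(title, instructions, prompt_list):
        return [password] * len(prompt_list)
    return handler


def connect_keyboard_interactive(host, username, password, port=22):
    """Authenticate against a server that only offers keyboard-interactive."""
    # Deferred import so the handler above stays importable without paramiko.
    import paramiko
    transport = paramiko.Transport((host, port))
    transport.start_client()
    transport.auth_interactive(username, make_interactive_handler(password))
    return transport
```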

  Full trace:
  2012-08-01 10:25:53 INFO nova.volume.manager 
[req-d03338a9-9115-48a3-8dfc-35cdfcdc15a7 3bf04474f3294b319ace534f930939b0 
1d0e530882654962aed9ae9fd122c178] volume volume-0002: creating
  2012-08-01 10:25:53 DEBUG nova.volume.manager 
[req-d03338a9-9115-48a3-8dfc-35cdfcdc15a7 3bf04474f3294b319ace534f930939b0 
1d0e530882654962aed9ae9fd122c178] volume volume-0002: creating lv of size 
10G from (pid=2799) create_volume 
/usr/lib/python2.7/dist-packages/nova/volume/manager.py:120
  2012-08-01 10:25:53 ERROR nova.rpc.amqp 
[req-d03338a9-9115-48a3-8dfc-35cdfcdc15a7 3bf04474f3294b319ace534f930939b0 
1d0e530882654962aed9ae9fd122c178] Exception during message handling
  2012-08-01 10:25:53 TRACE nova.rpc.amqp Traceback (most recent call last):
  2012-08-01 10:25:53 TRACE nova.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 253, in _process_data
  2012-08-01 10:25:53 TRACE nova.rpc.amqp rval = node_func(context=ctxt, 
**node_args)
  2012-08-01 10:25:53 TRACE nova.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/volume/manager.py", line 138, in 
create_volume
  2012-08-01 10:25:53 TRACE nova.rpc.amqp volume_ref['id'], {'status': 
'error'})
  2012-08-01 10:25:53 TRACE nova.rpc.amqp   File 
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2012-08-01 10:25:53 TRACE nova.rpc.amqp self.gen.next()
  2012-08-01 10:25:53 TRACE nova.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/volume/manager.py", line 122, in 
create_volume
  2012-08-01 10:25:53 TRACE nova.rpc.amqp model_update = 
self.driver.create_volume(volume_ref)
  2012-08-01 10:25:53 TRACE nova.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/volume/san.py", line 565, in 
create_volume
  2012-08-01 10:25:53 TRACE nova.rpc.amqp 
self._cliq_run_xml("createVolume", cliq_args)
  2012-08-01 10:25:53 TRACE nova.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/volume/san.py", line 450, in 
_cliq_run_xml
  2012-08-01 10:25:53 TRACE nova.rpc.amqp (out, _err) = 
self._cliq_run(verb, cliq_args)
  2012-08-01 10:25:53 TRACE nova.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/volume/san.py", line 445, in _cliq_run
  2012-08-01 10:25:53 TRACE nova.rpc.amqp return self._run_ssh(cmd)
  2012-08-01 10:25:53 TRACE nova.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/volume/san.py", line 126, in _run_ssh
  2012-08-01 10:25:53 TRACE nova.rpc.amqp ssh = self._connect_to_ssh()
  2012-08-01 10:25:53 TRACE nova.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/volume/san.py", line 103, in 
_connect_to_ssh
  2012-08-01 10:25:53 TRACE nova.rpc.amqp password=FLAGS.san_password)
  2012-08-01 10:25:53 TRACE nova.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/paramiko/client.py", line 332, in connect
  2012-08-01 10:25:53 TRACE nova.rpc.amqp self._auth(username, password, 
pkey, key_filenames, allow_agent, look_for_keys)
  2012-08-01 10:25:53 TRACE nova.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/paramiko/client.py", line 493, in _auth
  2012-08-01 10:25:53 TRACE nova.rpc.amqp raise saved_exception
  2012-08-01 10:25:53 TRACE nova.rpc.amqp BadAuthenticationType: Bad 
authentication type (allowed_types=['publickey', 'keyboard-interactive'])

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1031807/+subscriptions



[Yahoo-eng-team] [Bug 1309244] Re: if using nova boot --num-instances and neutron instances count are truncated to port quota

2014-07-10 Thread Michael Still
** Changed in: python-novaclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1309244

Title:
  if using nova boot --num-instances and neutron instances count are
  truncated to port quota

Status in OpenStack Compute (Nova):
  Invalid
Status in Python client library for Nova:
  Fix Released

Bug description:
  If using nova with neutron and you run the following command:

  
  nova boot --image  cirros-0.3.1-x86_64-uec --flavor 1 --nic 
net-id=41b95a61-c052-4cfa-8361-493e8d7298e3 --nic 
net-id=4658b15d-1a0c-4a13-bd28-b5dd41b5feee --nic 
net-id=70da3b11-531c-471c-89cb-8146183c5470 --flavor 100 vmx --num-instances 30

  and you do not have enough quota, it would be nice if nova-api raised
  an error or actually tried to launch all 30 instances. Instead you get
  some number of instances, up to your port quota.
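The truncation point described above is simple integer arithmetic (a sketch of the observed behavior, not nova's code):

```python
def max_bootable_instances(ports_remaining, nics_per_instance):
    """How many instances can be fully wired before the port quota runs out.

    Illustrative only: each instance needs one port per requested NIC.
    """
    if nics_per_instance <= 0:
        raise ValueError("need at least one NIC per instance")
    return ports_remaining // nics_per_instance
```

For example, with 50 ports left in the quota and 3 NICs per instance, only 16 of the requested 30 instances appear.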

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1309244/+subscriptions



[Yahoo-eng-team] [Bug 1340473] [NEW] dhcp agent create broken network namespace

2014-07-10 Thread Alex Xu
Public bug reported:

When running the dhcp agent, I get the error below:

2014-07-10 23:18:41.932 ERROR neutron.agent.dhcp_agent [-] Unable to enable 
dhcp for 72cad723-3ce1-402b-ac4b-746274cbad9d.
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent Traceback (most recent 
call last):
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/neutron/neutron/agent/dhcp_agent.py", line 129, in call_driver
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent getattr(driver, 
action)(**action_kwargs)
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 191, in enable
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent interface_name = 
self.device_manager.setup(self.network)
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 894, in setup
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent 
namespace=network.namespace)
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/neutron/neutron/agent/linux/interface.py", line 368, in plug
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent namespace2=namespace)
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 125, in add_veth
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent 
self.ensure_namespace(namespace2)
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 137, in 
ensure_namespace
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent lo.link.set_up()
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 248, in set_up
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent self._as_root('set', 
self.name, 'up')
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 229, in _as_root
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent 
kwargs.get('use_root_namespace', False))
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 69, in _as_root
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent namespace)
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 80, in _execute
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent 
root_helper=root_helper)
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 76, in execute
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent raise RuntimeError(m)
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent RuntimeError:
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent Command: ['sudo', 
'/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'qdhcp-72cad723-3ce1-402b-ac4b-746274cbad9d', 'ip', 'link', 'set', 
'lo', 'up']
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent Exit code: 1
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent Stdout: ''
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent Stderr: 'seting the 
network namespace "qdhcp-72cad723-3ce1-402b-ac4b-746274cbad9d" failed: Invalid 
argument\n'
2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  When running the dhcp agent, I get the error below:
  
  2014-07-10 23:18:41.932 ERROR neutron.agent.dhcp_agent [-] Unable to enable 
dhcp for 72cad723-3ce1-402b-ac4b-746274cbad9d.
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent Traceback (most recent 
call last):
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/neutron/neutron/agent/dhcp_agent.py", line 129, in call_driver
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent getattr(driver, 
action)(**action_kwargs)
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 191, in enable
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent interface_name = 
self.device_manager.setup(self.network)
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 894, in setup
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent 
namespace=network.namespace)
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/neutron/neutron/agent/linux/interface.py", line 368, in plug
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent 
namespace2=namespace)
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 125, in add_veth
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent 
self.ensure_namespace(namespace2)
  2014-07-10 23:18:41

[Yahoo-eng-team] [Bug 1267685] Re: boot vm don't support ipv6

2014-07-10 Thread Michael Still
** Changed in: python-novaclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1267685

Title:
  boot vm don't support ipv6

Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Nova:
  Fix Released

Bug description:
  When booting a VM, '--nic ' can be used to set the network info. To use 
IPv6, the user currently has to pass a port-id that has an IPv6 address. I 
think it should be possible to use '--nic net-id=net-uuid, fixed-ip=ip-addr' 
with either an IPv4 or an IPv6 address. Currently, nova prevents that:
  if address is not None and not utils.is_valid_ipv4(address):
      msg = _("Invalid fixed IP address (%s)") % address
      raise exc.HTTPBadRequest(explanation=msg)
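A more permissive check would accept both address families. A minimal sketch using the stdlib `ipaddress` module (an assumption on my part; nova uses its own `utils` helpers):

```python
import ipaddress


def is_valid_ip(address):
    """True for a well-formed IPv4 or IPv6 address, False otherwise."""
    try:
        ipaddress.ip_address(address)
        return True
    except ValueError:
        return False
```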

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1267685/+subscriptions



[Yahoo-eng-team] [Bug 1262124] Re: Ceilometer cannot poll and publish floatingip samples

2014-07-10 Thread Michael Still
** Changed in: python-novaclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1262124

Title:
  Ceilometer cannot poll and publish floatingip samples

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Manuals:
  Incomplete
Status in Python client library for Nova:
  Fix Released

Bug description:
  The ceilometer central agent polls and publishes floating IP samples, among 
other types of samples, but it cannot get valid floating IP samples.
  The reason is that the ceilometer floating IP pollster calls the "list" 
method of nova.api.openstack.compute.contrib.floating_ips.FloatingIPController, 
and this API gets floating IPs filtered by context.project_id.

  The current context.project_id is the id of the "service" tenant, so the
  result is {"floatingips": []}

  the logs of nova-api-os-compute is:

  http://paste.openstack.org/show/55285/

  Here, ceilometer invokes novaclient to list floating IPs, novaclient calls 
the nova API, and the nova API then calls the nova network API or the neutron 
API with:
  client.list_floatingips(tenant_id=project_id)['floatingips']

  Novaclient cannot list other tenants' floating IPs, only those of the
  current context's tenant.

  So I think we should modify the nova API by adding a parameter like
  "all_tenant", accessible only to the admin role.

  This should be confirmed?
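The proposed semantics can be sketched like this (illustrative data model and names, not nova's):

```python
def list_floatingips(records, context_project_id, all_tenants=False):
    """Scope results to the caller's project unless an admin asks for all.

    `all_tenants` is the kind of flag the report proposes; without it,
    a "service"-tenant caller sees an empty list, as observed above.
    """
    if all_tenants:
        return list(records)
    return [r for r in records if r['project_id'] == context_project_id]
```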

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1262124/+subscriptions



[Yahoo-eng-team] [Bug 1299517] Re: quota-class-update

2014-07-10 Thread Michael Still
** Changed in: python-novaclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1299517

Title:
   quota-class-update

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed
Status in Python client library for Nova:
  Fix Released

Bug description:
  Can't update the default quota:
  root@blade1-1-live:~# nova --debug quota-class-update --ram -1 default

  
  REQ: curl -i 
'http://XXX.XXX.XXX.XXX:8774/v2/1eaf475499f8479d94d5ed7a4af68703/os-quota-class-sets/default'
 -X PUT -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H 
"Content-Type: application/json" -H "Accept: application/json" -H 
"X-Auth-Token: 62837311542a42a495442d911cc8b12a" -d '{"quota_class_set": 
{"ram": -1}}'

  New session created for: (http://XXX.XXX.XXX.XXX:8774)
  INFO (connectionpool:258) Starting new HTTP connection (1): XXX.XXX.XXX.XXX
  DEBUG (connectionpool:375) Setting read timeout to 600.0
  DEBUG (connectionpool:415) "PUT 
/v2/1eaf475499f8479d94d5ed7a4af68703/os-quota-class-sets/default HTTP/1.1" 404 
52
  RESP: [404] CaseInsensitiveDict({'date': 'Sat, 29 Mar 2014 17:17:32 GMT', 
'content-length': '52', 'content-type': 'text/plain; charset=UTF-8'})
  RESP BODY: 404 Not Found

  The resource could not be found.


  DEBUG (shell:777) Not found (HTTP 404)
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 774, in 
main
  OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:]))
File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 710, in 
main
  args.func(self.cs, args)
File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/shell.py", line 
3378, in do_quota_class_update
  _quota_update(cs.quota_classes, args.class_name, args)
File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/shell.py", line 
3164, in _quota_update
  manager.update(identifier, **updates)
File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/quota_classes.py", 
line 44, in update
  'quota_class_set')
File "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 165, in 
_update
  _resp, body = self.api.client.put(url, body=body)
File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 289, in 
put
  return self._cs_request(url, 'PUT', **kwargs)
File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 260, in 
_cs_request
  **kwargs)
File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 242, in 
_time_request
  resp, body = self.request(url, method, **kwargs)
File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 236, in 
request
  raise exceptions.from_response(resp, body, url, method)
  NotFound: Not found (HTTP 404)
  ERROR: Not found (HTTP 404)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1299517/+subscriptions



[Yahoo-eng-team] [Bug 1307338] Re: Return incorrect message in keypair-show and keypair-delete

2014-07-10 Thread Michael Still
** Changed in: python-novaclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1307338

Title:
  Return incorrect message in keypair-show and keypair-delete

Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Nova:
  Fix Released

Bug description:
  Reproduce:
  1.nova keypair-list

  +----------+-------------------------------------------------+
  | Name     | Fingerprint                                     |
  +----------+-------------------------------------------------+
  | root_key | 41:f3:fc:23:07:1d:99:cc:fd:e4:7a:a3:20:ba:78:25 |
  +----------+-------------------------------------------------+

  2.nova keypair-show root
  ERROR: The resource could not be found. (HTTP 404) (Request-ID: 
req-542fa1da-0ab0-4624-b662-7d7c908508e2)

  3.nova keypair-delete root
  ERROR: The resource could not be found. (HTTP 404) (Request-ID: 
req-2f8587a3-ee5e-4134-ba5d-a2b3f0968cbc)

  expected:
  1.nova keypair-show root
  ERROR: No keypair with a name or ID of 'root' exists.
  2.nova keypair-delete root
  ERROR: No keypair with a name or ID of 'root' exists.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1307338/+subscriptions



[Yahoo-eng-team] [Bug 1295426] Re: "get console output" v3 API should allow -1 as the length

2014-07-10 Thread Michael Still
** Changed in: python-novaclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1295426

Title:
  "get console output" v3 API should allow -1 as the length

Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Nova:
  Fix Released

Bug description:
  If you run the "nova console-log" command against the v3 API, it
  fails like the following:

  $ nova --os-compute-api-version 3 console-log vm01
  ERROR: Invalid input for field/attribute length. Value: None. None is not of 
type 'integer', 'string' (HTTP 400) (Request-ID: 
req-b8588c9b-58a7-4e22-a2e9-30c5354ae4f7)
  $

  This is because the API schema does not allow null as the length of the 
log. The other APIs (quota, etc.) treat -1 as an unlimited value, so it would 
be nice if the "get console output" API also treated -1 as the unlimited 
length, for API consistency.
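The proposed -1-as-unlimited semantics amount to the following (a sketch, not nova's implementation):

```python
def tail_console(lines, length):
    """Return the last `length` console lines; -1 or None means all of them."""
    if length is None or int(length) == -1:
        return lines
    n = int(length)
    # Guard n == 0 explicitly: lines[-0:] would return the whole list.
    return [] if n == 0 else lines[-n:]
```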

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1295426/+subscriptions



[Yahoo-eng-team] [Bug 1280033] Re: Remove dependent module py3kcompat

2014-07-10 Thread Michael Still
** Changed in: python-novaclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1280033

Title:
  Remove dependent module py3kcompat

Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released
Status in Python client library for Ceilometer:
  Fix Committed
Status in Python client library for Cinder:
  In Progress
Status in Python client library for Glance:
  In Progress
Status in Python client library for Ironic:
  Fix Released
Status in Python client library for Keystone:
  Fix Released
Status in Python client library for Nova:
  Fix Released
Status in Python client library for Sahara (ex. Savanna):
  Fix Committed
Status in Trove client binding:
  Fix Released
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  In Progress
Status in OpenStack contribution dashboard:
  Fix Released

Bug description:
  Everything in the py3kcompat module is available in six > 1.4.0, so we no
  longer need this module. It was removed from oslo-incubator recently; see
  https://review.openstack.org/#/c/71591/. This means we no longer need to
  maintain this module and can use six directly.
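As an illustration of "use six directly", the kind of code the removed shims wrapped can call six itself (the py3kcompat module names alluded to here are from memory, not verified against oslo-incubator):

```python
# Direct six usage of the sort that replaces the removed py3kcompat shims.
import six
from six.moves.urllib import parse as urlparse  # py2/py3-agnostic urllib

# six.moves resolves to urllib on py2 and urllib.parse on py3
query = urlparse.urlencode({"marker": "abc"})

# six.text_type is unicode on py2 and str on py3
text = six.text_type("some text")
```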

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1280033/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323541] Re: The swap measurement unit is not specified in the CLI table

2014-07-10 Thread Michael Still
** Changed in: python-novaclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1323541

Title:
  The swap measurement unit is not specified in the CLI table

Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Nova:
  Fix Released

Bug description:
  Description of problem:
  The measurement unit of the swap memory in the flavor is MB, unlike all the
  other disk units, which are GB.
  This might cause confusion in the CLI when the unit is not specified:
  +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  | ID                                   | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  | 13eec680-fa84-4c8a-98ed-51ad564bb0c6 | m1.tiny   | 512       | 1    | 0         | 512  | 1     | 1.0         | True      |
  | 3                                    | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
  | 4                                    | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
  | 41f44ff1-b09c-4d14-948d-ead7cf2177a9 | m1.small  | 2048      | 20   | 40        |      | 1     | 1.0         | True      |
  | 5                                    | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
  +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
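One way to remove the ambiguity is to put the unit into each column header, as the Memory_MB column already does. A minimal sketch (the header names below are illustrative, not novaclient's actual ones):

```python
# Sketch: spell out the unit in every size column so MB vs GB is unambiguous.
headers = ["Name", "Memory_MB", "Disk_GB", "Ephemeral_GB", "Swap_MB"]
row = ["m1.tiny", 512, 1, 0, 512]

header_line = " | ".join(headers)
row_line = " | ".join(str(value) for value in row)
```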

  Version-Release number of selected component (if applicable):
  openstack-nova-compute-2014.1-2.el7ost.noarch
  openstack-nova-cert-2014.1-2.el7ost.noarch
  openstack-nova-novncproxy-2014.1-2.el7ost.noarch
  python-novaclient-2.17.0-1.el7ost.noarch
  python-nova-2014.1-2.el7ost.noarch
  openstack-nova-api-2014.1-2.el7ost.noarch
  openstack-nova-network-2014.1-2.el7ost.noarch
  openstack-nova-console-2014.1-2.el7ost.noarch
  openstack-nova-scheduler-2014.1-2.el7ost.noarch
  openstack-nova-conductor-2014.1-2.el7ost.noarch
  openstack-nova-common-2014.1-2.el7ost.noarch

  
  How reproducible:
  100%

  Steps to Reproduce:
  1. add swap to a flavor
  2. run the CLI command:
  # nova flavor-list

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1323541/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1292573] Re: nova cannot retrieve a fixed ip info

2014-07-10 Thread Thang Pham
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1292573

Title:
  nova cannot retrieve a fixed ip info

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When using the 'nova fixed-ip-get' command to retrieve any fixed IP, it
  will return a not found error (404).

  This is the result of an empty 'fixed_ips' DB table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1292573/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340514] [NEW] jenkins gate job failed when devstack installing qemu-kvm

2014-07-10 Thread stanzgy
*** This bug is a duplicate of bug 1286818 ***
https://bugs.launchpad.net/bugs/1286818

Public bug reported:


2014-07-11 03:35:39.934 | + apt_get install qemu-kvm
2014-07-11 03:35:39.939 | + sudo DEBIAN_FRONTEND=noninteractive http_proxy= 
https_proxy= no_proxy= apt-get --option Dpkg::Options::=--force-confold 
--assume-yes install qemu-kvm
2014-07-11 03:35:39.963 | Reading package lists...
2014-07-11 03:35:40.224 | Building dependency tree...
2014-07-11 03:35:40.225 | Reading state information...
2014-07-11 03:35:40.430 | The following packages were automatically installed 
and are no longer required:
2014-07-11 03:35:40.430 |   python-colorama python-distlib python-html5lib
2014-07-11 03:35:40.430 | Use 'apt-get autoremove' to remove them.
2014-07-11 03:35:40.448 | The following extra packages will be installed:
2014-07-11 03:35:40.448 |   cpu-checker ipxe-qemu libbluetooth3 libbrlapi0.6 
libcaca0 libfdt1
2014-07-11 03:35:40.448 |   libsdl1.2debian libseccomp2 libspice-server1 
libusbredirparser1 libxen-4.4
2014-07-11 03:35:40.448 |   libxenstore3.0 libyajl2 msr-tools qemu-keymaps 
qemu-system-common
2014-07-11 03:35:40.448 |   qemu-system-x86 seabios
2014-07-11 03:35:40.449 | Suggested packages:
2014-07-11 03:35:40.449 |   samba vde2 sgabios
2014-07-11 03:35:40.451 | The following NEW packages will be installed:
2014-07-11 03:35:40.451 |   cpu-checker ipxe-qemu libbluetooth3 libbrlapi0.6 
libcaca0 libfdt1
2014-07-11 03:35:40.451 |   libsdl1.2debian libseccomp2 libspice-server1 
libusbredirparser1 libxen-4.4
2014-07-11 03:35:40.451 |   libxenstore3.0 libyajl2 msr-tools qemu-keymaps 
qemu-kvm qemu-system-common
2014-07-11 03:35:40.451 |   qemu-system-x86 seabios
2014-07-11 03:35:40.514 | 0 upgraded, 19 newly installed, 0 to remove and 0 not 
upgraded.
2014-07-11 03:35:40.514 | Need to get 291 kB/3985 kB of archives.
2014-07-11 03:35:40.514 | After this operation, 20.4 MB of additional disk 
space will be used.
2014-07-11 03:35:40.514 | Err http://mirror.rackspace.com/ubuntu/ 
trusty-security/main libxenstore3.0 amd64 4.4.0-0ubuntu5.1
2014-07-11 03:35:40.514 |   404  Not Found
2014-07-11 03:35:40.518 | Err http://mirror.rackspace.com/ubuntu/ 
trusty-security/main libxen-4.4 amd64 4.4.0-0ubuntu5.1
2014-07-11 03:35:40.518 |   404  Not Found
2014-07-11 03:35:40.522 | E: Failed to fetch 
http://mirror.rackspace.com/ubuntu/pool/main/x/xen/libxenstore3.0_4.4.0-0ubuntu5.1_amd64.deb
  404  Not Found
2014-07-11 03:35:40.522 | 
2014-07-11 03:35:40.522 | E: Failed to fetch 
http://mirror.rackspace.com/ubuntu/pool/main/x/xen/libxen-4.4_4.4.0-0ubuntu5.1_amd64.deb
  404  Not Found
2014-07-11 03:35:40.522 | 
2014-07-11 03:35:40.522 | E: Unable to fetch some archives, maybe run apt-get 
update or try with --fix-missing?
2014-07-11 03:35:40.524 | + exit_trap
2014-07-11 03:35:40.524 | + local r=100
2014-07-11 03:35:40.524 | ++ jobs -p
2014-07-11 03:35:40.525 | + jobs=
2014-07-11 03:35:40.525 | + [[ -n '' ]]
2014-07-11 03:35:40.525 | + kill_spinner
2014-07-11 03:35:40.525 | + '[' '!' -z '' ']'
2014-07-11 03:35:40.525 | + [[ 100 -ne 0 ]]
2014-07-11 03:35:40.525 | + echo 'Error on exit'
2014-07-11 03:35:40.525 | Error on exit
2014-07-11 03:35:40.525 | + ./tools/worlddump.py -d /opt/stack/new
2014-07-11 03:35:40.551 | World dumping... see 
/opt/stack/new/worlddump-2014-07-11-033540.txt for details
2014-07-11 03:35:40.570 | + exit 100

 
Jenkins gate jobs failed because devstack could not install qemu-kvm: some
packages are missing from the Ubuntu deb source mirrors.
I found several commits stuck on this problem:
https://review.openstack.org/#/c/104952/
https://review.openstack.org/#/c/105626/
https://review.openstack.org/#/c/103023/
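When triaging logs like the one above, the failed package URLs can be pulled out mechanically. A small helper sketch (the function name is made up; this is not part of devstack or the CI tooling):

```python
# Hypothetical log-triage helper: extract the URLs from apt-get's
# "E: Failed to fetch <url>" lines, which here ended in 404 Not Found.
import re

def failed_fetches(log_text):
    """Return the URLs apt-get reported as 'Failed to fetch'."""
    # \s+ also spans line wraps, since build logs often wrap long URLs
    return [m.group(1)
            for m in re.finditer(r"E: Failed to fetch\s+(\S+)", log_text)]

sample = """E: Failed to fetch http://mirror.rackspace.com/ubuntu/pool/main/x/xen/libxenstore3.0_4.4.0-0ubuntu5.1_amd64.deb
  404  Not Found
"""
```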

** Affects: openstack-ci
 Importance: Undecided
 Status: New

** Project changed: nova => openstack-ci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340514

Title:
  jenkins gate job failed when devstack installing qemu-kvm

Status in OpenStack Core Infrastructure:
  New

Bug description:
  
  2014-07-11 03:35:39.934 | + apt_get install qemu-kvm
  2014-07-11 03:35:39.939 | + sudo DEBIAN_FRONTEND=noninteractive http_proxy= 
https_proxy= no_proxy= apt-get --option Dpkg::Options::=--force-confold 
--assume-yes install qemu-kvm
  2014-07-11 03:35:39.963 | Reading package lists...
  2014-07-11 03:35:40.224 | Building dependency tree...
  2014-07-11 03:35:40.225 | Reading state information...
  2014-07-11 03:35:40.430 | The following packages were automatically installed 
and are no longer required:
  2014-07-11 03:35:40.430 |   python-colorama python-distlib python-html5lib
  2014-07-11 03:35:40.430 | Use 'apt-get autoremove' to remove them.
  2014-07-11 03:35:40.448 | The following extra packages will be installed:
  2014-07-11 03:35:40.448 |   cpu-checker ipxe-qemu libbluetooth3 libbrlapi0.6 
libcaca0 libfdt1
  2014-07-11 03:35:40.448 |   libsdl1.2debian libseccomp2 li

[Yahoo-eng-team] [Bug 1340518] [NEW] can not use virsh console on serial terminal

2014-07-10 Thread Yukihiro KAWADA
Public bug reported:

In the KVM case:
We cannot use 'virsh console' on a serial terminal, so we cannot log in to
each VM with the 'virsh console' command, because the VM's config XML file
does not currently support it. This feature is very important for us.

Please apply this patch:
CONF.libvirt.virsh_console_serial=False (the default; same behaviour as now)

If you are using virsh console, then set
CONF.libvirt.virsh_console_serial=True


diff --git a/nova/virt/libvirt/config.py b/nova/virt/libvirt/config.py
index 8eaf658..090e17b 100644
--- a/nova/virt/libvirt/config.py
+++ b/nova/virt/libvirt/config.py
@@ -1053,6 +1053,9 @@ class LibvirtConfigGuestCharBase(LibvirtConfigGuestDevice):
         dev = super(LibvirtConfigGuestCharBase, self).format_dom()
 
         dev.set("type", self.type)
+        if self.root_name == "console":
+            dev.set("tty", self.source_path)
+
         if self.type == "file":
             dev.append(etree.Element("source", path=self.source_path))
         elif self.type == "unix":
diff --git a/nova/virt/libvirt/driver.py b/nova/virt/libvirt/driver.py
index 9bd75fa..de2735e 100644
--- a/nova/virt/libvirt/driver.py
+++ b/nova/virt/libvirt/driver.py
@@ -213,6 +213,9 @@ libvirt_opts = [
                 help='A path to a device that will be used as source of '
                      'entropy on the host. Permitted options are: '
                      '/dev/random or /dev/hwrng'),
+    cfg.BoolOpt('virsh_console_serial',
+                default=False,
+                help='Use virsh console on serial terminal'),
 ]
 
 CONF = cfg.CONF
@@ -3278,14 +3281,29 @@ class LibvirtDriver(driver.ComputeDriver):
             # client app is connected. Thus we can't get away
             # with a single type=pty console. Instead we have
             # to configure two separate consoles.
-            consolelog = vconfig.LibvirtConfigGuestSerial()
-            consolelog.type = "file"
-            consolelog.source_path = self._get_console_log_path(instance)
-            guest.add_device(consolelog)
 
-            consolepty = vconfig.LibvirtConfigGuestSerial()
-            consolepty.type = "pty"
-            guest.add_device(consolepty)
+            if CONF.libvirt.virsh_console_serial:  # Y.Kawada
+                consolepty = vconfig.LibvirtConfigGuestSerial()
+                consolepty.type = "pty"
+                consolepty.target_port = "0"
+                consolepty.source_path = "/dev/pts/11"
+                consolepty.alias_name = "serial0"
+                guest.add_device(consolepty)
+
+                consolepty = vconfig.LibvirtConfigGuestConsole()
+                consolepty.type = "pty"
+                consolepty.target_port = "0"
+                consolepty.source_path = "/dev/pts/11"
+                consolepty.alias_name = "serial0"
+            else:
+                consolelog = vconfig.LibvirtConfigGuestSerial()
+                consolelog.type = "file"
+                consolelog.source_path = self._get_console_log_path(instance)
+                guest.add_device(consolelog)
+
+                consolepty = vconfig.LibvirtConfigGuestSerial()
+                consolepty.type = "pty"
+                guest.add_device(consolepty)
         else:
             consolepty = vconfig.LibvirtConfigGuestConsole()
             consolepty.type = "pty"
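For context, this is roughly the device XML the patch aims to produce: a console element carrying a tty attribute so `virsh console` can attach. A standard-library sketch (the /dev/pts/11 path is the placeholder from the patch, not something Nova computes):

```python
# Sketch of the libvirt <console> device XML the patched config.py would
# emit when root_name == "console"; the tty path is a placeholder value.
from xml.etree import ElementTree as etree

console = etree.Element("console", type="pty", tty="/dev/pts/11")
etree.SubElement(console, "target", port="0")

xml = etree.tostring(console, encoding="unicode")
```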

** Affects: nova
 Importance: Undecided
 Status: New

** Patch added: "virsh_console.patch"
   
https://bugs.launchpad.net/bugs/1340518/+attachment/4150064/+files/virsh_console.patch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340518

Title:
  can not use virsh console on serial terminal

Status in OpenStack Compute (Nova):
  New

Bug description:
  In the KVM case:
  We cannot use 'virsh console' on a serial terminal, so we cannot log in to
  each VM with the 'virsh console' command, because the VM's config XML file
  does not currently support it. This feature is very important for us.

  Please apply this patch:
  CONF.libvirt.virsh_console_serial=False (the default; same behaviour as now)

  If you are using virsh console, then set
  CONF.libvirt.virsh_console_serial=True

  
  diff --git a/nova/virt/libvirt/config.py b/nova/virt/libvirt/config.py
  index 8eaf658..090e17b 100644
  --- a/nova/virt/libvirt/config.py
  +++ b/nova/virt/libvirt/config.py
  @@ -1053,6 +1053,9 @@ class LibvirtConfigGuestCharBase(LibvirtConfigGuestDevice):
           dev = super(LibvirtConfigGuestCharBase, self).format_dom()
  
           dev.set("type", self.type)
  +        if self.root_name == "console":
  +            dev.set("tty", self.source_path)
  +
           if self.type == "file":
               dev.append(etree.Element("source", path=self.source_path))
           elif self.type == "unix":
  diff --git a/nova/virt/libvirt/driver.py b/nova/virt/libvirt/driver.py
  index 9bd75fa..de2735e 100644
  --- a/nova/virt/libvirt/driver.py
  +++ b/nova/virt/libvirt/dri

[Yahoo-eng-team] [Bug 1340536] [NEW] NSX: Remove unneed call to _ensure_default_security_group

2014-07-10 Thread Aaron Rosen
Public bug reported:

This patch removes an unneeded call to _ensure_default_security_group,
which does not need to be called in create_security_group_rule_bulk():
one would already have called get_security_groups() to look up the UUID
of the default security group, and that lookup creates it for us.
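A toy model of that reasoning (not neutron's real plugin code; class and method bodies are invented for illustration):

```python
# Toy model: the lookup itself guarantees the tenant's default security
# group exists, so a separate _ensure_default_security_group() call ahead
# of it is redundant.
class FakePlugin(object):
    def __init__(self):
        self.groups = {}

    def _ensure_default_security_group(self, tenant):
        # Create the tenant's default group on first touch.
        self.groups.setdefault(tenant, {"default": "uuid-default"})

    def get_security_groups(self, tenant):
        # Looking up groups ensures the default group as a side effect.
        self._ensure_default_security_group(tenant)
        return self.groups[tenant]

plugin = FakePlugin()
groups = plugin.get_security_groups("tenant-a")  # default created here
```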

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340536

Title:
  NSX: Remove unneed call to _ensure_default_security_group

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This patch removes an unneeded call to _ensure_default_security_group,
  which does not need to be called in create_security_group_rule_bulk():
  one would already have called get_security_groups() to look up the UUID
  of the default security group, and that lookup creates it for us.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1340536/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp