[Yahoo-eng-team] [Bug 1403815] Re: User can access another user's password through login form password autocomplete

2014-12-18 Thread Timur Sufiev
@Lin, either fixing this new window or reverting the reveal-password
feature seems fine to me.

** Changed in: horizon
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1403815

Title:
  User can access another user's password through login form password
  autocomplete

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Browser - Firefox 34.0.

  1) Clear autocomplete and passwords history for the host.
  2) Login with login form. Choose not to save password.
  3) Logout. Type username on login form.
  4) Click the "show password" eye icon (the span with the "fa-eye-slash" class).
  5) Double-click in the password input.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1403815/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404093] [NEW] Use of *OpportunisticTestCase causes functional tests to skip on db error

2014-12-18 Thread Henry Gessau
Public bug reported:

Tests using oslo.db.sqlalchemy.test_base.DbFixture will skip if the
database cannot be provisioned. In the neutron functional job we do not
want to skip tests. The tests should fail if the environment is not set
up correctly for the tests.

After https://review.openstack.org/126175 is merged we should see to it
that the migrations tests do not skip.
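The desired behavior can be sketched like this (a sketch only; the class and
method names are illustrative, not the actual oslo.db or neutron API):

```python
import unittest


class RequireDb(unittest.TestCase):
    """Sketch: turn an environmental SkipTest into a hard failure."""

    def run_with_required_db(self, provision):
        # In the functional gate job the database must be provisioned, so
        # a provisioning problem is a test failure rather than a skip.
        try:
            provision()
        except unittest.SkipTest as exc:
            self.fail("database unavailable, refusing to skip: %s" % exc)
```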

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  Tests using oslo.db.sqlalchemy.test_base.DbFixture will skip if the
  database cannot be provisioned. In the neutron functional job we do not
  want to skip tests. The tests should fail if the environment is not set
  up correctly for the tests.
+ 
+ After https://review.openstack.org/126175 is merged we should see to it
+ that the migrations tests do not skip.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404093

Title:
  Use of *OpportunisticTestCase causes functional tests to skip on db
  error

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Tests using oslo.db.sqlalchemy.test_base.DbFixture will skip if the
  database cannot be provisioned. In the neutron functional job we do
  not want to skip tests. The tests should fail if the environment is
  not set up correctly for the tests.

  After https://review.openstack.org/126175 is merged we should see to
  it that the migrations tests do not skip.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404093/+subscriptions



[Yahoo-eng-team] [Bug 1404085] [NEW] l3 agent failed to spawn radvd due to no filter matched

2014-12-18 Thread Jerry Zhao
Public bug reported:

I have an OpenStack deployment built by TripleO from trunk code as of last
week or so. I created an IPv6 subnet with SLAAC mode for both RA and
address assignment. When I launched an Ubuntu Trusty VM, it couldn't get an
IPv6 address.

The L3 agent log said:
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: Stderr: 
'/usr/bin/neutron-rootwrap: Unauthorized command: ip netns exec 
qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7 radvd -C 
/var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf -p 
/var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd 
(no filter matched)\n'
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Traceback (most 
recent call last):
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/common/utils.py",
 line 341, in call
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent return 
func(*args, **kwargs)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/l3_agent.py",
 line 902, in process_router
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent self.root_helper)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ra.py",
 line 111, in enable_ipv6_ra
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent 
_spawn_radvd(router_id, radvd_conf, router_ns, root_helper)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ra.py",
 line 95, in _spawn_radvd
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent 
radvd.enable(callback, True)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/external_process.py",
 line 77, in enable
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent 
ip_wrapper.netns.execute(cmd, addl_env=self.cmd_addl_env)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py",
 line 554, in execute
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent 
check_exit_code=check_exit_code, extra_ok_codes=extra_ok_codes)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py",
 line 82, in execute
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent raise 
RuntimeError(m)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent RuntimeError:
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Command: ['sudo', 
'/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7', 'radvd', '-C', 
'/var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf', '-p', 
'/var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd']
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Exit code: 99
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Stdout: ''
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Stderr: 
'/usr/bin/neutron-rootwrap: Unauthorized command: ip netns exec 
qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7 radvd -C 
/var/run/neutron
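The "(no filter matched)" error means neutron-rootwrap has no filter
authorizing the radvd command. A filter entry along these lines (a sketch;
the exact filter file name and entry in your distribution's rootwrap.d
directory may differ) would authorize it:

```
[Filters]
# Allow the L3 agent to spawn radvd inside a router namespace (illustrative).
radvd: CommandFilter, radvd, root
```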

[Yahoo-eng-team] [Bug 1404082] [NEW] rbd_store_chunk_size Sets to Bytes instead of kB

2014-12-18 Thread Tyler Wilson
Public bug reported:

In the Juno release I am experiencing rbd_store_chunk_size setting the image
object size in bytes rather than kB, resulting in many more objects being
created.

Icehouse Install
root@node-1:~# rbd info images/62e06da9-2d39-4c7f-a2d8-d869953b9996@snap
rbd image '62e06da9-2d39-4c7f-a2d8-d869953b9996':
size 4096 MB in 512 objects
order 23 (8192 kB objects)
block_name_prefix: rbd_data.6a282e18d096
format: 2
features: layering
protected: True

Juno Install
[root@hvm003 ~]# rbd info images/136dd921-f6a2-432f-b4d6-e9902f71baa6@snap
rbd image '136dd921-f6a2-432f-b4d6-e9902f71baa6':
size 4096 MB in 524288 objects
order 13 (8192 bytes objects)
block_name_prefix: rbd_data.10d73ac85fb6
format: 2
features: layering
protected: True

Either the documentation needs updating or rbd_store_chunk_size needs to
be changed to set the object size in kB. Currently the workaround to get
back to an 8 MB object size is 'rbd_store_chunk_size = 8192'.
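For reference, rbd reports the object size as an "order", where object size
= 2**order bytes, so the two listings above differ by exactly the
kB-versus-bytes interpretation of the same value:

```python
import math

def rbd_order(object_size_bytes):
    """Return the rbd 'order' for an object size: size = 2**order bytes."""
    order = int(math.log2(object_size_bytes))
    assert 2 ** order == object_size_bytes, "object size must be a power of two"
    return order

# Icehouse: 8192 kB objects -> order 23
print(rbd_order(8192 * 1024))   # 23
# Juno (this bug): 8192-byte objects -> order 13
print(rbd_order(8192))          # 13
```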

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1404082

Title:
  rbd_store_chunk_size Sets to Bytes instead of kB

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  In the Juno release I am experiencing rbd_store_chunk_size setting the
  image object size in bytes rather than kB, resulting in many more
  objects being created.

  Icehouse Install
  root@node-1:~# rbd info images/62e06da9-2d39-4c7f-a2d8-d869953b9996@snap
  rbd image '62e06da9-2d39-4c7f-a2d8-d869953b9996':
  size 4096 MB in 512 objects
  order 23 (8192 kB objects)
  block_name_prefix: rbd_data.6a282e18d096
  format: 2
  features: layering
  protected: True

  Juno Install
  [root@hvm003 ~]# rbd info images/136dd921-f6a2-432f-b4d6-e9902f71baa6@snap
  rbd image '136dd921-f6a2-432f-b4d6-e9902f71baa6':
  size 4096 MB in 524288 objects
  order 13 (8192 bytes objects)
  block_name_prefix: rbd_data.10d73ac85fb6
  format: 2
  features: layering
  protected: True

  Either the documentation needs updating or rbd_store_chunk_size needs
  to be changed to set the object size in kB. Currently the workaround
  to get back to an 8 MB object size is 'rbd_store_chunk_size = 8192'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1404082/+subscriptions



[Yahoo-eng-team] [Bug 1404037] Re: SimpleReadOnlySaharaClientTest.test_sahara_help fails in gate

2014-12-18 Thread John Griffith
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404037

Title:
  SimpleReadOnlySaharaClientTest.test_sahara_help fails in gate

Status in OpenStack Compute (Nova):
  New
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  New

Bug description:
  Fails on various gate jobs, example patch here:
  https://review.openstack.org/#/c/141931/  at Dec 18, 22:34 UTC

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404037/+subscriptions



[Yahoo-eng-team] [Bug 1404063] [NEW] unshelving an instance in SHELVED_OFFLOADED to a new host fails - unable to reach metadata server

2014-12-18 Thread Joe Gordon
Public bug reported:

When testing a multi-node devstack running tempest-full,
tempest.scenario.test_shelve_instance.TestShelveInstance fails:

http://logs.openstack.org/04/136504/14/experimental/check-tempest-dsvm-
aiopcpu/68b73b8/logs/testr_results.html.gz

The instance successfully goes to SHELVED_OFFLOADED, and when it's
unshelved it goes back to active on the second node, but the instance is
unable to reach the metadata server (as confirmed by no metadata calls
in the right time window).

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404063

Title:
  unshelving an instance in SHELVED_OFFLOADED to a new host fails -
  unable to reach metadata server

Status in OpenStack Compute (Nova):
  New

Bug description:
  When testing a multi-node devstack running tempest-full,
  tempest.scenario.test_shelve_instance.TestShelveInstance fails:

  http://logs.openstack.org/04/136504/14/experimental/check-tempest-
  dsvm-aiopcpu/68b73b8/logs/testr_results.html.gz

  The instance successfully goes to SHELVED_OFFLOADED, and when it's
  unshelved it goes back to active on the second node, but the instance
  is unable to reach the metadata server (as confirmed by no metadata
  calls in the right time window).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404063/+subscriptions



[Yahoo-eng-team] [Bug 1404060] [NEW] SSH keys not updated correctly when sshd_config "AuthorizedKeysFile" contains multiple values

2014-12-18 Thread Alex Gottschalk
Public bug reported:

I have overridden the AuthorizedKeysFile stanza in my site's
sshd_config, as follows:

AuthorizedKeysFile  %h/.ssh/authorized_keys
/etc/ssh/authorized_keys/%u

This allows two locations for authorized keys, which is useful for us
because reasons.

It looks like cloud-init is incorrectly parsing this line to determine
where to drop user keys, as I'm ending up with the following file:

"/home/ubuntu/.ssh/authorized_keys /etc/ssh/authorized_keys/ubuntu"
(note that the space is part of the directory name under .ssh)

I think cloud-init should probably treat whitespace as a field separator
here, and append keys to all AuthorizedKeysFile entries listed.
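The suggested behavior can be sketched as follows (the helper name is
hypothetical, not cloud-init's actual API; token expansion follows
sshd_config(5): %% is a literal percent, %h the home directory, %u the user
name, and relative entries are taken relative to the user's home):

```python
def authorized_keys_paths(value, home, user):
    """Split an AuthorizedKeysFile value on whitespace and expand
    the %%, %h and %u tokens for each entry."""
    paths = []
    for token in value.split():
        # Protect literal %% before substituting the other tokens.
        path = token.replace("%%", "\x00")
        path = path.replace("%h", home).replace("%u", user)
        path = path.replace("\x00", "%")
        if not path.startswith("/"):
            path = home.rstrip("/") + "/" + path
        paths.append(path)
    return paths

print(authorized_keys_paths(
    "%h/.ssh/authorized_keys /etc/ssh/authorized_keys/%u",
    "/home/ubuntu", "ubuntu"))
# ['/home/ubuntu/.ssh/authorized_keys', '/etc/ssh/authorized_keys/ubuntu']
```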

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Summary changed:

- authorized_keys not updated when sshd_config "AuthorizedKeysFile" contains 
multiple values
+ SSH keys not updated correctly when sshd_config "AuthorizedKeysFile" contains 
multiple values

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1404060

Title:
  SSH keys not updated correctly when sshd_config "AuthorizedKeysFile"
  contains multiple values

Status in Init scripts for use on cloud images:
  New

Bug description:
  I have overridden the AuthorizedKeysFile stanza in my site's
  sshd_config, as follows:

  AuthorizedKeysFile  %h/.ssh/authorized_keys
  /etc/ssh/authorized_keys/%u

  This allows two locations for authorized keys, which is useful for us
  because reasons.

  It looks like cloud-init is incorrectly parsing this line to determine
  where to drop user keys, as I'm ending up with the following file:

  "/home/ubuntu/.ssh/authorized_keys /etc/ssh/authorized_keys/ubuntu"
  (note that the space is part of the directory name under .ssh)

  I think cloud-init should probably treat whitespace as a field
  separator here, and append keys to all AuthorizedKeysFile entries
  listed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1404060/+subscriptions



[Yahoo-eng-team] [Bug 1404046] [NEW] "Session is timed out" on login with timed out session

2014-12-18 Thread Andrew Lazarev
Public bug reported:

Steps to reproduce:
1. Log in to Horizon (I'm using Chrome on macOS).
2. Leave it for a day or so (not sure about the exact time), then close the
browser.
3. After a day, open the browser and go to Horizon (I usually start typing
the URL and the browser completes it). The login screen will appear.
4. Type the username/password into the form.

Expected result: the user is logged in to Horizon.
Observed result: a "Session is timed out" message appears and the login
screen is shown.

Second login attempt works fine.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1404046

Title:
  "Session is timed out" on login with timed out session

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:
  1. Log in to Horizon (I'm using Chrome on macOS).
  2. Leave it for a day or so (not sure about the exact time), then close
  the browser.
  3. After a day, open the browser and go to Horizon (I usually start
  typing the URL and the browser completes it). The login screen will
  appear.
  4. Type the username/password into the form.

  Expected result: the user is logged in to Horizon.
  Observed result: a "Session is timed out" message appears and the login
  screen is shown.

  Second login attempt works fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1404046/+subscriptions



[Yahoo-eng-team] [Bug 1402502] Re: Resource usage will not be updated when suspending instance

2014-12-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/142302
Committed: 
https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=ea465673461d99f44491afb7f29852acd2a637e1
Submitter: Jenkins
Branch:master

commit ea465673461d99f44491afb7f29852acd2a637e1
Author: liyingjun 
Date:   Wed Dec 17 09:52:22 2014 +0800

Suspending instance will not update resource usage

As reported in the bug, "memory and vCPUs NOT become available to
create other instances when suspending instance". So update the
description for ' Suspend and resume an instance'.

Change-Id: I4c11c73fe9ba42e77c86cefe4360ce2496e810e1
Closes-bug: #1402502


** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1402502

Title:
  Resource usage will not be updated when suspending instance

Status in OpenStack Compute (Nova):
  Opinion
Status in OpenStack Manuals:
  Fix Released

Bug description:
  Suspending an instance will not update resource usage.

  Suspending an instance should move the contents of its RAM to disk;
  vCPU and memory usage should then decrease and disk usage should
  increase. However, that doesn't happen.

  This leads to trouble in the following scenario: when the memory of
  all compute nodes is exhausted, suspending running instances does not
  free capacity to create a new instance; the instances have to be
  deleted instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1402502/+subscriptions



[Yahoo-eng-team] [Bug 1404032] [NEW] admin volumes page fails when attached instance is missing

2014-12-18 Thread Eric Peterson
Public bug reported:

We are having problems where the admin volumes page fails with an HTTP
500 error.  This seems to occur due to an issue in

openstack_dashboard/dashboards/project/volumes/volumes/tables.py

in the function get_attachment_name(request, attachment).

The problem seems to be that a volume still thinks it is attached to an
instance that is missing.
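A defensive version can be sketched like this (illustrative only, not
Horizon's actual code; the lookup callable stands in for the Nova API call,
and KeyError stands in for its "instance not found" error):

```python
def get_attachment_name(attachment, lookup_instance):
    """Return a display name for a volume attachment, falling back to the
    raw server id (or '-') when the instance no longer exists."""
    server_id = attachment.get("server_id")
    if not server_id:
        return "-"
    try:
        instance = lookup_instance(server_id)
    except KeyError:
        # The volume still references a deleted instance: show the id
        # instead of letting the whole page fail with a 500.
        return server_id
    return instance["name"]

instances = {"i-1": {"name": "web01"}}
print(get_attachment_name({"server_id": "gone"}, instances.__getitem__))
```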

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1404032

Title:
  admin volumes page fails when attached instance is missing

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We are having problems where the admin volumes page is bombing out /
  error / http 500.  This seems to occur due to an issue in

  openstack_dashboard/dashboards/project/volumes/volumes/tables.py

  in the function def get_attachment_name(request, attachment):

  The problem seems to be that a volume still thinks it is attached to
  an instance that is missing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1404032/+subscriptions



[Yahoo-eng-team] [Bug 1404016] [NEW] iptables_manager is spamming neutron-full gate job log

2014-12-18 Thread Brian Haley
Public bug reported:

In this change:

commit 6eee93a98c67a5faf1d1243e0f8592c48d13bd6a
Author: Elena Ezhova 
Date:   Fri Oct 31 19:37:46 2014 +0300

Remove duplicate ensure_remove_chain method in iptables_manager

Change-Id: I168eda2fa430446786d4106d6807207f4facbfc3
Closes-Bug: #1388162

ensure_remove_chain() was removed since it seemed identical to
remove_chain().  Unfortunately the neutron-full gate job is now spewing
lots of warnings:

2014-12-18 19:43:37.909 2304 WARNING neutron.agent.linux.iptables_manager 
[req-a41864b2-9560-416f-86cb-d92d6748961a None] Attempted to remove chain 
sg-chain which does not exist
2014-12-18 19:43:37.909 2304 WARNING neutron.agent.linux.iptables_manager 
[req-a41864b2-9560-416f-86cb-d92d6748961a None] Attempted to remove chain 
sg-chain which does not exist
2014-12-18 19:45:51.950 2304 WARNING neutron.agent.linux.iptables_manager 
[req-a41864b2-9560-416f-86cb-d92d6748961a None] Attempted to remove chain 
s71604f53-6 which does not exist
2014-12-18 19:45:57.792 2304 WARNING neutron.agent.linux.iptables_manager 
[req-a41864b2-9560-416f-86cb-d92d6748961a None] Attempted to remove chain 
s71604f53-6 which does not exist
2014-12-18 19:46:12.923 2304 WARNING neutron.agent.linux.iptables_manager 
[req-a41864b2-9560-416f-86cb-d92d6748961a None] Attempted to remove chain 
s153151ab-c which does not exist
2014-12-18 19:46:12.923 2304 WARNING neutron.agent.linux.iptables_manager 
[req-a41864b2-9560-416f-86cb-d92d6748961a None] Attempted to remove chain 
s71604f53-6 which does not exist
...

For now we should revert this until we can track down what is causing
the warning.
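The behavioral difference can be sketched like this (illustrative, not
neutron's actual iptables_manager):

```python
import logging

LOG = logging.getLogger(__name__)


class ChainTable(object):
    def __init__(self):
        self.chains = set()

    def add_chain(self, name):
        self.chains.add(name)

    def remove_chain(self, name):
        # The surviving method warns when the chain is already gone,
        # which is where the gate log noise comes from.
        if name not in self.chains:
            LOG.warning("Attempted to remove chain %s which does not exist",
                        name)
            return
        self.chains.remove(name)

    def ensure_remove_chain(self, name):
        # The removed method was silent for a missing chain.
        self.chains.discard(name)
```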

** Affects: neutron
 Importance: Undecided
 Assignee: Brian Haley (brian-haley)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Brian Haley (brian-haley)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404016

Title:
  iptables_manager is spamming neutron-full gate job log

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  In this change:

  commit 6eee93a98c67a5faf1d1243e0f8592c48d13bd6a
  Author: Elena Ezhova 
  Date:   Fri Oct 31 19:37:46 2014 +0300

  Remove duplicate ensure_remove_chain method in iptables_manager
  
  Change-Id: I168eda2fa430446786d4106d6807207f4facbfc3
  Closes-Bug: #1388162

  ensure_remove_chain() was removed since it seemed identical to
  remove_chain().  Unfortunately the neutron-full gate job is now
  spewing lots of warnings:

  2014-12-18 19:43:37.909 2304 WARNING neutron.agent.linux.iptables_manager 
[req-a41864b2-9560-416f-86cb-d92d6748961a None] Attempted to remove chain 
sg-chain which does not exist
  2014-12-18 19:43:37.909 2304 WARNING neutron.agent.linux.iptables_manager 
[req-a41864b2-9560-416f-86cb-d92d6748961a None] Attempted to remove chain 
sg-chain which does not exist
  2014-12-18 19:45:51.950 2304 WARNING neutron.agent.linux.iptables_manager 
[req-a41864b2-9560-416f-86cb-d92d6748961a None] Attempted to remove chain 
s71604f53-6 which does not exist
  2014-12-18 19:45:57.792 2304 WARNING neutron.agent.linux.iptables_manager 
[req-a41864b2-9560-416f-86cb-d92d6748961a None] Attempted to remove chain 
s71604f53-6 which does not exist
  2014-12-18 19:46:12.923 2304 WARNING neutron.agent.linux.iptables_manager 
[req-a41864b2-9560-416f-86cb-d92d6748961a None] Attempted to remove chain 
s153151ab-c which does not exist
  2014-12-18 19:46:12.923 2304 WARNING neutron.agent.linux.iptables_manager 
[req-a41864b2-9560-416f-86cb-d92d6748961a None] Attempted to remove chain 
s71604f53-6 which does not exist
  ...

  For now we should revert this until we can track down what is causing
  the warning.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404016/+subscriptions



[Yahoo-eng-team] [Bug 1349978] Re: hard reboot doesn't re-create instance folder

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1349978

Title:
  hard reboot doesn't re-create instance folder

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I'm running stateless Nova compute servers (booted with ram_fs mounted
  on /). If I power-cycle a server it forgets all its instances, but they
  can be recovered by hard-rebooting, except that hard-rebooting a libvirt
  instance assumes the instance's directory already exists (i.e.
  /var/lib/nova/instances/e4e4a4a3-1036-4734-9c0b-3bb34c88b8b6/). When
  nova tries to re-create the XML file it causes this stack trace:

  2014-07-29 16:02:41.250 2922 ERROR oslo.messaging.rpc.dispatcher [-] 
Exception during message handling: [Errno 2] No such file or directory: 
'/var/lib/nova/instances/e4e4a4a3-1036-4734-9c0b-3bb34c88b8b6/libvirt.xml'
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in wrapped
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 274, in 
decorated_function
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher pass
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 260, in 
decorated_function
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 327, in 
decorated_function
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher 
function(self, context, *args, **kwargs)
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 303, in 
decorated_function
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher e, 
sys.exc_info())
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 290, in 
decorated_function
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2649, in 
reboot_instance
  2014-07-29 16:02:41.250 2922 TRACE oslo.messaging.rpc.dispatcher 
self._set_instance_obj_error_state(co

[Yahoo-eng-team] [Bug 1367060] Re: nova network-create allows invalid fixed-ip creation

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367060

Title:
  nova network-create allows invalid fixed-ip creation

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Creating a network with 'nova network-create' allows the creation of
  fixed-ips that fall outside the fixed-range-v4, resulting in invalid
  fixed IPs.

  To recreate:
  Create a network with network-create that contains a fixed-cidr that falls 
outside the fixed-range-v4.

  Actual outcome:
  If the user runs the following command
  nova network-create vmnet --fixed-range-v4 10.1.0.0/24 --fixed-cidr 
10.20.0.0/16 --bridge br-100

  This command succeeds, and creates invalid fixed IPs which can be
  retrieved with 'nova fixed-ip-get', for example:

  nova fixed-ip-get 10.20.0.1

  +---+-+--+--+
  | address   | cidr| hostname | host |
  +---+-+--+--+
  | 10.20.0.1 | 10.1.0.0/24 | -| -|
  +---+-+--+--+

  This address falls outside the cidr, so is invalid.

  Desired outcome:
  Nova network-create should verify that the fixed-cidr is a subset of 
fixed-range-v4, if the fixed-cidr falls outside of the fixed-range-v4 the 
command should fail with an error, such as "ERROR: fixed-cidr must be a subset 
of fixed-range-v4".
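The subset check described above can be sketched with the stdlib ipaddress module (Python 3.7+ for subnet_of); the function name and error text here are illustrative, not Nova's actual validation code:

```python
# Minimal sketch: reject a fixed-cidr that is not contained in fixed-range-v4.
import ipaddress

def validate_fixed_cidr(fixed_range_v4, fixed_cidr):
    """Raise ValueError if fixed_cidr is not a subnet of fixed_range_v4."""
    outer = ipaddress.ip_network(fixed_range_v4)
    inner = ipaddress.ip_network(fixed_cidr)
    if not inner.subnet_of(outer):
        raise ValueError("fixed-cidr must be a subset of fixed-range-v4")
```

With the values from the bug report, validate_fixed_cidr("10.1.0.0/24", "10.20.0.0/16") raises, while a genuinely contained range such as 10.1.0.0/28 passes.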

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367060/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1305897] Re: Hyper-V driver failing with dynamic memory due to virtual NUMA

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1305897

Title:
  Hyper-V driver failing with dynamic memory due to virtual NUMA

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Starting with Windows Server 2012, Hyper-V provides the Virtual NUMA
  functionality. This option is enabled by default in the VMs depending
  on the underlying hardware.

  However, it's not compatible with dynamic memory. The Hyper-V driver
  is not aware of this constraint and it's not possible to boot new VMs
  if the nova.conf parameter 'dynamic_memory_ratio' > 1.

  The error in the logs looks like the following:
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops HyperVException: 
WMI job failed with status 10. Error details: Failed to modify device 'Memory'.
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops Dynamic memory and 
virtual NUMA cannot be enabled on the same virtual machine. - 
'instance-0001c90c' failed to modify device 'Memory'. (Virtual machine ID 
F4CB4E4D-CA06-4149-9FA3-CAD2E0C6CEDA)
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops Dynamic memory and 
virtual NUMA cannot be enabled on the virtual machine 'instance-0001c90c' 
because the features are mutually exclusive. (Virtual machine ID 
F4CB4E4D-CA06-4149-9FA3-CAD2E0C6CEDA) - Error code: 32773

  In order to solve this problem, it's required to change the field
  'VirtualNumaEnabled' in 'Msvm_VirtualSystemSettingData' (option
  available only in v2 namespace) while creating the VM when dynamic
  memory is used.
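The mutual-exclusion rule above can be summarized in a tiny decision function; the name is illustrative (the real fix sets VirtualNumaEnabled in Msvm_VirtualSystemSettingData through the v2 WMI namespace, which is not shown here):

```python
# Virtual NUMA and dynamic memory are mutually exclusive on Hyper-V, so
# virtual NUMA may only stay enabled when dynamic memory is off, i.e. when
# dynamic_memory_ratio <= 1.
def virtual_numa_allowed(dynamic_memory_ratio):
    """Return True only when dynamic memory is disabled (ratio <= 1)."""
    return dynamic_memory_ratio <= 1
```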

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1305897/+subscriptions



[Yahoo-eng-team] [Bug 1367349] Re: ironic: Not listing all nodes registered in Ironic due to pagination

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367349

Title:
  ironic: Not listing all nodes registered in Ironic due to pagination

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The Ironic API supports pagination and limits the number of items
  returned by the API based on a config option called "max_limit"; by
  default a max of 1000 items is returned per request [1].

  The Ironic client library by default respects that limit, so when the
  Nova Ironic driver lists the nodes (e.g. to verify how many resources
  are available), we can hit that limit and the wrong information will
  be passed to Nova.

  Luckily, the Ironic client supports passing a limit=0 flag when listing
  resources as an indicator to the library to continue pagination until
  there are no more resources to be returned [2]. We need to update the
  calls in the Nova Ironic driver to make sure we get all items from the
  API when needed.

   [1] 
https://github.com/openstack/ironic/blob/master/ironic/api/__init__.py#L26-L29
   [2] 
https://github.com/openstack/python-ironicclient/blob/master/ironicclient/v1/node.py#L52
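The pagination behaviour described above can be illustrated with a generic marker-following loop; the fetch_page callable and the "uuid" marker field are stand-ins, not the actual python-ironicclient API (which hides this loop behind limit=0):

```python
# Collect every item from a paginated API by following markers until a page
# comes back shorter than the page limit.
def list_all(fetch_page, page_size=1000):
    """Return all items, however many pages the server splits them into."""
    items, marker = [], None
    while True:
        page = fetch_page(marker=marker, limit=page_size)
        items.extend(page)
        if len(page) < page_size:
            return items
        marker = page[-1]["uuid"]  # resume after the last item seen
```

Without such a loop (or limit=0), a deployment with more than max_limit nodes silently reports only the first page to Nova.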

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367349/+subscriptions



[Yahoo-eng-team] [Bug 1343080] Re: alternate link type in GET /images incorrectly includes the project/tenant in the URI

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1343080

Title:
  alternate link type in GET /images incorrectly includes the
  project/tenant in the URI

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Clearly nobody really uses the "application/vnd.openstack.image" links
  in the returned results from GET /v2/{tenant}/images REST API call,
  since the link URI returned in it is wrong.

  Glance URIs do *not* contain a project or tenant in the URI structure
  like Nova's REST API URIs do, but _get_alternate_link() method of the
  image ViewBuilder tacks it on improperly:

  def _get_alternate_link(self, request, identifier):
  """Create an alternate link for a specific image id."""
  glance_url = glance.generate_glance_url()
  glance_url = self._update_glance_link_prefix(glance_url)
  return '/'.join([glance_url,
   request.environ["nova.context"].project_id,
   self._collection_name,
   str(identifier)])

  It's my suspicion that nobody actually uses these alternate links
  anyway, but the fix is simple: just remove the
  request.environ['nova.context'].project_id in the URL join above.

  Note that, yet again, the image service stubs and fakes in the API
  unit testing masked this problem. In cleaning up the unit tests to get
  rid of the stubbed out image service code, I uncovered this.
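The proposed fix can be sketched as a standalone helper (function and argument names are illustrative, not the actual ViewBuilder API): simply omit the project_id segment when joining the URL.

```python
# Build the "alternate" image link without a project/tenant segment, since
# Glance URIs do not include one.
def build_alternate_image_link(glance_url, collection_name, image_id):
    """Join endpoint, collection and image id into a Glance-style URI."""
    return '/'.join([glance_url.rstrip('/'), collection_name, str(image_id)])
```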

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1343080/+subscriptions



[Yahoo-eng-team] [Bug 1279172] Re: Unicode encoding error exists in extended Nova API, when the data contain unicode

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279172

Title:
  Unicode encoding error exists in extended Nova API, when the data
  contain unicode

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  We have developed an extended Nova API that first queries disks and
  then adds a disk to an instance.
  After querying, if a disk has a non-English name, the unicode value is
  converted to str in nova/api/openstack/wsgi.py line 451,
  "node = doc.createTextNode(str(data))", which raises a unicode encoding
  error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279172/+subscriptions



[Yahoo-eng-team] [Bug 1276203] Re: Period task interval config values need to be consistent

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276203

Title:
  Period task interval config values need to be consistent

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Currently we have a mix of “==0” and “<=0” being used inside periodic
  tasks to decide whether to skip the task altogether. We also have the
  “spacing=” option in the periodic_task decorator to determine how
  often to call the task, but in this case ==0 means “call at default
  interval” and <0 means “never call”. It would be nice to make these
  consistent so that all tasks can use the spacing option rather than
  keep their own check on when (and if) they need to run.

  However, in order to do this cleanly and not break anyone who is
  currently using “0” to mean “don’t run”, we need to:
  - Change the default values that are currently 0 to -1
  - Log a deprecation warning for the use of “*_interval=0”

  And then leave this in place until Juno before making the change
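The consistent semantics described above can be sketched as a small decision function; the names and the default value are illustrative, not Nova's actual periodic task machinery:

```python
# Proposed spacing semantics, per the description above:
#   spacing > 0  -> call every `spacing` seconds
#   spacing == 0 -> call at the default interval
#   spacing < 0  -> never call
DEFAULT_INTERVAL = 60  # illustrative default, in seconds

def effective_interval(spacing):
    """Map a task's spacing value to its effective call interval."""
    if spacing < 0:
        return None  # task disabled
    if spacing == 0:
        return DEFAULT_INTERVAL
    return spacing
```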

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1276203/+subscriptions



[Yahoo-eng-team] [Bug 1282842] Re: default nova+neutron setup cannot handle spawning 20 images concurrently

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1282842

Title:
  default nova+neutron setup cannot handle spawning 20 images
  concurrently

Status in OpenStack Neutron (virtual network service):
  Incomplete
Status in OpenStack Compute (Nova):
  Fix Released
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  This breaks any @scale use of a cloud.

  Symptoms include 500 errors from 'nova list' (which causes a heat
  stack failure) and errors like 'unknown auth strategy' from
  neutronclient when it is being called from the nova compute.manager.

  Sorry for the many-project-tasks here - it's not clear where the bug
  lies, nor whether it's bad defaults, error-handling code, performance
  tuning, etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1282842/+subscriptions



[Yahoo-eng-team] [Bug 1347025] Re: Iscsi connector always uses CONF.my_ip

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347025

Title:
  Iscsi connector always uses CONF.my_ip

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When attaching to a cinder volume, the virt drivers supply details
  about where the iscsi connection is going to come from. However, if
  your compute nodes have multiple network interfaces, there is no way
  to specify which one is going to be used for the iscsi traffic.

  It would be helpful if at least a config option allowed specifying the
  storage ip.
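The requested behaviour amounts to a simple fallback: prefer a dedicated storage IP when one is configured, otherwise use my_ip. The option name "my_block_storage_ip" here is illustrative, not a confirmed Nova config option:

```python
# Pick the IP the iSCSI connector should report to Cinder: a dedicated
# storage IP when configured, my_ip otherwise.
def get_connector_ip(conf):
    """Return the initiator IP for the volume connector."""
    return conf.get("my_block_storage_ip") or conf["my_ip"]
```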

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1347025/+subscriptions



[Yahoo-eng-team] [Bug 1343200] Re: Add notifications when operating server groups

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1343200

Title:
  Add notifications when operating server groups

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Currently, there are no notifications when operating on server groups
  (create/delete/update, etc.). As a result, third parties cannot learn
  the result of an operation in a timely manner.

  We should add notifications for server group operations.
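The requested notifications could be emitted around each operation, for example with a wrapper like the following sketch (the event names and notifier interface are hypothetical, not Nova's actual rpc notifier API):

```python
# Emit start/end notifications around a server group operation so external
# systems can observe the outcome.
def with_notifications(notify, event, operation, *args, **kwargs):
    """Run `operation`, emitting servergroup.<event>.start/.end around it."""
    notify("servergroup.%s.start" % event)
    result = operation(*args, **kwargs)
    notify("servergroup.%s.end" % event)
    return result
```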

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1343200/+subscriptions



[Yahoo-eng-team] [Bug 1292644] Re: nova.compute.api should return Objects

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1292644

Title:
  nova.compute.api should return Objects

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  nova.compute.api should return Aggregate Objects, and they should be
  converted into the REST API expected results in the aggregates API
  extensions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1292644/+subscriptions



[Yahoo-eng-team] [Bug 1238910] Re: nova boot operation with servers greater than 63 characters don't get DHCP address

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1238910

Title:
  nova boot operation with servers greater than 63 characters don't get
  DHCP address

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  If a server with a name longer than 63 characters is booted, the
  server will boot but fail to obtain a DHCP address with both nova
  networking and neutron.  dnsmasq is fully configured for the full
  server name; however, Linux has a 64 character hostname limit.

  [sdake@bigiron ~]$ getconf HOST_NAME_MAX
  64

  Recommend raising a nova exception if a server name longer than 63
  characters is created, so the user is aware of the problem at the API
  level rather than wondering why their VM doesn't appear to work.
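The API-level check recommended above could look like the following sketch (the exception type and function name are illustrative, not Nova's actual validation code):

```python
# Reject server names whose hostname form would exceed 63 characters, the
# DNS label / Linux hostname limit described above.
MAX_HOSTNAME_LEN = 63

def validate_server_name(name):
    """Raise ValueError for names that would produce an invalid hostname."""
    if len(name) > MAX_HOSTNAME_LEN:
        raise ValueError(
            "Server name %r exceeds %d characters and would produce an "
            "invalid hostname" % (name, MAX_HOSTNAME_LEN))
    return name
```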

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1238910/+subscriptions



[Yahoo-eng-team] [Bug 1367363] Re: Libvirt-lxc will leak nbd devices on instance shutdown

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367363

Title:
  Libvirt-lxc will leak nbd devices on instance shutdown

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Shutting down a libvirt-lxc based instance will leak the nbd device.
  This happens because _teardown_container is only called when the
  libvirt domain is running. During a shutdown, the domain is not
  running at the time of the destroy, so _teardown_container is never
  called and the nbd device is never disconnected.

  Steps to reproduce:
  1) Create devstack using local.conf: 
https://gist.github.com/ramielrowe/6ae233dc2c2cd479498a
  2) Create an instance
  3) Perform ps ax |grep nbd on devstack host. Observe connected nbd device
  4) Shutdown instance
  5) Perform ps ax |grep nbd on devstack host. Observe connected nbd device
  6) Delete instance
  7) Perform ps ax |grep nbd on devstack host. Observe connected nbd device

  Nova has now leaked the nbd device.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367363/+subscriptions



[Yahoo-eng-team] [Bug 1366982] Re: Exception NoMoreFixedIps doesn't show which network is out of IPs

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366982

Title:
  Exception NoMoreFixedIps doesn't show which network is out of IPs

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The exception NoMoreFixedIps in nova/exception.py has a very generic error 
message:
  "Zero fixed ips available."

  When performing a deploy with multiple networks, it can become
  difficult to determine which network has been exhausted.  Slight
  modification to this error message will help simplify the debug
  process for operators.
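The suggested modification is small: carry the network identifier into the exception message. A minimal sketch (the class shape is illustrative, not Nova's actual exception machinery):

```python
# Include the exhausted network in the error message so operators can tell
# which network ran out of fixed IPs.
class NoMoreFixedIps(Exception):
    msg_fmt = "No fixed IP addresses available for network: %(net)s"

    def __init__(self, net):
        super(NoMoreFixedIps, self).__init__(self.msg_fmt % {'net': net})
```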

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1366982/+subscriptions



[Yahoo-eng-team] [Bug 1367151] Re: VMware: VMs created by VC 5.5 are not compatible with older clusters

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367151

Title:
  VMware: VMs created by VC 5.5 are not compatible with older clusters

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  VMs created by VC 5.0 and 5.1 have hardware version 8, while VMs
  created by VC 5.5 have hardware version 10. This breaks compatibility
  for VMs on older clusters.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367151/+subscriptions



[Yahoo-eng-team] [Bug 1367918] Re: Xenapi attached volume with no VM leaves instance in undeletable state

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367918

Title:
  Xenapi attached volume with no VM leaves instance in undeletable state

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  As shown by the stack trace below, when a volume is attached but the
  VM is not present the volume can't be cleaned up by Cinder and will
  raise an Exception which puts the instance into an error state.  The
  volume attachment isn't removed because an if statement is hit in the
  xenapi destroy method which logs "VM is not present, skipping
  destroy..." and then moves on to trying to clean up the volume in
  Cinder.  This is because most operations in xen rely on finding the
  vm_ref and then cleaning up resources that are attached there.  But if
  the volume is attached to an SR but not associated with an instance it
  ends up being orphaned.

  
  014-08-29 15:54:02.836 8766 DEBUG nova.volume.cinder 
[req-341cd17d-0f2f-4d64-929f-a94f8c0fa295 None] Cinderclient connection created 
using URL: https://localhost/v1/
  cinderclient 
/opt/rackstack/879.28/nova/lib/python2.6/site-packages/nova/volume/cinder.py:108
  2014-08-29 15:54:03.251 8766 ERROR nova.compute.manager 
[req-341cd17d-0f2f-4d64-929f-a94f8c0fa295 None] [instance: ] Setting 
instance vm_state to ERROR
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
Traceback (most recent call last):
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
File 
"/opt/rackstack/879.28/nova/lib/python2.6/site-packages/nova/compute/manager.py",
 line 2443, in do_terminate_instance
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
self._delete_instance(context, instance, bdms, quotas)
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
File "/opt/rackstack/879.28/nova/lib/python2.6/site-packages/nova/hooks.py", 
line 131, in inner
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] rv 
= f(*args, **kwargs)
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
File 
"/opt/rackstack/879.28/nova/lib/python2.6/site-packages/nova/compute/manager.py",
 line 2412, in delete_instance
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
quotas.rollback()
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
File 
"/opt/rackstack/879.28/nova/lib/python2.6/site-packages/nova/openstack/common/excutils.py",
 line 82, in exit
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
six.reraise(self.type, self.value, self.tb)
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
File 
"/opt/rackstack/879.28/nova/lib/python2.6/site-packages/nova/compute/manager.py",
 line 2390, in _delete_instance
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
self._shutdown_instance(context, instance, bdms)
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
File 
"/opt/rackstack/879.28/nova/lib/python2.6/site-packages/nova/compute/manager.py",
 line 2335, in _shutdown_instance
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
connector)
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
File 
"/opt/rackstack/879.28/nova/lib/python2.6/site-packages/nova/volume/cinder.py", 
line 189, in wrapper
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
res = method(self, ctx, volume_id, *args, **kwargs)
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
File 
"/opt/rackstack/879.28/nova/lib/python2.6/site-packages/nova/volume/cinder.py", 
line 309, in terminate_connection
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
connector)
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
File 
"/opt/rackstack/879.28/nova/lib/python2.6/site-packages/cinderclient/v1/volumes.py",
 line 331, in terminate_connection
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
{'connector': connector})
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
File 
"/opt/rackstack/879.28/nova/lib/python2.6/site-packages/cinderclient/v1/volumes.py",
 line 250, in _action
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
return self.api.client.post(url, body=body)
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
File 
"/opt/rackstack/879.28/nova/lib/python2.6/site-packages/cinderclient/client.py",
 line 223, in post
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
return self._cs_request(url, 'POST', **kwargs)
  2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: ] 
File 
"/opt/rackstack/879.28/nova/lib/python2.6/s

[Yahoo-eng-team] [Bug 1367964] Re: Unable to recover from timeout of detaching cinder volume

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367964

Title:
  Unable to recover from timeout of detaching cinder volume

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When cinder-volume is under heavy load, the terminate_connection RPC
  call for cinder volumes may take longer than the RPC timeout.
  When the timeout occurs, nova gives up detaching the volume and resets
  the volume state to 'in-use', but doesn't reattach the volume.
  This leaves the DB in an inconsistent state:

(1) libvirt has already detached the volume from the instance
(2) the cinder volume is disconnected from the host by the
terminate_connection RPC (but nova doesn't know this because of the timeout)
(3) the nova.block_device_mapping entry still remains because of the timeout
in (2)

  and the volume becomes impossible to re-attach or to detach completely.
  If volume-detach is issued again, it will fail with the exception
exception.DiskNotFound:

  
  2014-07-17 10:58:17.333 2586 AUDIT nova.compute.manager 
[req-e251f834-9653-47aa-969c-b9524d4a683d f8c2ac613325450fa6403a89d48ac644 
4be531199d5240f79733fb071e090e46] [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb] Detach volume 
f7d90bc8-eb55-4d46-a2c4-294dc9c6a92a from mountpoint /dev/vdb
  2014-07-17 10:58:17.337 2586 ERROR nova.compute.manager 
[req-e251f834-9653-47aa-969c-b9524d4a683d f8c2ac613325450fa6403a89d48ac644 
4be531199d5240f79733fb071e090e46] [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb] Failed to detach volume 
f7d90bc8-eb55-4d46-a2c4-294dc9c6a92a from /dev/vdb
  2014-07-17 10:58:17.337 2586 TRACE nova.compute.manager [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb] Traceback (most recent call last):
  2014-07-17 10:58:17.337 2586 TRACE nova.compute.manager [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4169, in 
_detach_volume
  2014-07-17 10:58:17.337 2586 TRACE nova.compute.manager [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb] encryption=encryption)
  2014-07-17 10:58:17.337 2586 TRACE nova.compute.manager [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb]   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1365, in 
detach_volume
  2014-07-17 10:58:17.337 2586 TRACE nova.compute.manager [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb] raise 
exception.DiskNotFound(location=disk_dev)
  2014-07-17 10:58:17.337 2586 TRACE nova.compute.manager [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb] DiskNotFound: No disk at vdb
  2014-07-17 10:58:17.337 2586 TRACE nova.compute.manager [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb] 

  
  We should have a way to recover from this situation.

  For instance, we need something like "volume-detach --force" which
  ignores the DiskNotFound exception and continues to delete the
  nova.block_device_mapping entry.
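The proposed "--force" recovery path can be sketched as follows; the exception and helper names are illustrative stand-ins for the hypervisor detach and the BDM cleanup:

```python
# With force=True, treat a missing disk as already detached and continue
# tearing down the block device mapping instead of aborting.
class DiskNotFound(Exception):
    pass

def detach_volume(hypervisor_detach, delete_bdm_entry, force=False):
    """Detach a volume; with force, tolerate a disk that is already gone."""
    try:
        hypervisor_detach()
    except DiskNotFound:
        if not force:
            raise
        # Disk already gone from the domain; safe to clean up the mapping.
    delete_bdm_entry()
```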

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367964/+subscriptions



[Yahoo-eng-team] [Bug 1368260] Re: add pci_requests to the instance object

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368260

Title:
  add pci_requests to the instance object

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  With the refactoring work moving PCI requests from system_metadata to
  instance_extra, PCI requests are no longer accessible from the
  instance object. Refer to https://review.openstack.org/#/c/118391/.
  One of the issues is that the scheduler, while consuming PCI requests
  (see consume_from_instance in host_manager.py), needs DB access to
  get them. Another issue is that the compute node would need multiple
  DB accesses to get the PCI requests, whereas if they are part of the
  instance they become available together with it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1368260/+subscriptions



[Yahoo-eng-team] [Bug 1324005] Re: use real disk size to consider whether it's a resize down

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1324005

Title:
  use real disk size to consider whether it's a resize down

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I have the following flavors:

  jichen@controller:~$ nova flavor-list
  
  +----+---------+-----------+------+-----------+------+-------+-------------+-----------+
  | ID | Name    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  +----+---------+-----------+------+-----------+------+-------+-------------+-----------+
  | 1  | m1.tiny | 512       | 1    | 0         |      | 1     | 1.0         | True      |
  | 11 | t.test1 | 512       | 1    | 5         |      | 1     | 1.0         | True      |
  | 12 | t.test2 | 512       | 1    | 2         |      | 1     | 1.0         | True      |
  +----+---------+-----------+------+-----------+------+-------+-------------+-----------+

  I use

  nova boot --config-drive True --flavor 11 --key_name mykey \
    --image 99ebce05-c5d2-4829-bc25-004a8d3f3efb \
    --nic net-id=45f1ac55-e6bc-444e-be8b-3112c84646a8 --ephemeral size=1 t9

  to boot a new instance, so the ephemeral disk is 1 GB now.

  If we resize to flavor 12, it will fail now, but we could compare the
  real ephemeral disk size and decide whether the resize is possible.
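The comparison suggested above can be sketched in one line; the function and argument names are illustrative:

```python
# Treat the resize as a resize-down only if the new flavor's ephemeral size
# is smaller than the space actually used on the instance's ephemeral disk,
# not the old flavor's nominal size.
def is_eph_resize_down(used_eph_gb, new_flavor_eph_gb):
    """True when the new flavor cannot hold the disk's real contents."""
    return new_flavor_eph_gb < used_eph_gb
```

In the example above, the real ephemeral disk is 1 GB, so resizing from flavor 11 (5 GB ephemeral) to flavor 12 (2 GB ephemeral) need not be rejected.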

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1324005/+subscriptions



[Yahoo-eng-team] [Bug 1357368] Re: Source side post Live Migration Logic cannot disconnect multipath iSCSI devices cleanly

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357368

Title:
  Source side post Live Migration Logic cannot disconnect multipath
  iSCSI devices cleanly

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  When a volume is attached to a VM in the source compute node through
  multipath, the related files in /dev/disk/by-path/ are like this

  stack@ubuntu-server12:~/devstack$ ls /dev/disk/by-path/*24
  
/dev/disk/by-path/ip-192.168.3.50:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.a5-lun-24
  
/dev/disk/by-path/ip-192.168.4.51:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.b4-lun-24

  The information on its corresponding multipath device is like this
  stack@ubuntu-server12:~/devstack$ sudo multipath -l 
3600601602ba03400921130967724e411
  3600601602ba03400921130967724e411 dm-3 DGC,VRAID
  size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
  |-+- policy='round-robin 0' prio=-1 status=active
  | `- 19:0:0:24 sdl 8:176 active undef running
  `-+- policy='round-robin 0' prio=-1 status=enabled
    `- 18:0:0:24 sdj 8:144 active undef running

  But when the VM is migrated to the destination, the related
  information is like the following example since we CANNOT guarantee
  that all nodes are able to access the same iSCSI portals and the same
  target LUN number. And the information is used to overwrite
  connection_info in the DB before the post live migration logic is
  executed.

  stack@ubuntu-server13:~/devstack$ ls /dev/disk/by-path/*24
  
/dev/disk/by-path/ip-192.168.3.51:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.b5-lun-100
  
/dev/disk/by-path/ip-192.168.4.51:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.b4-lun-100

  stack@ubuntu-server13:~/devstack$ sudo multipath -l 
3600601602ba03400921130967724e411
  3600601602ba03400921130967724e411 dm-3 DGC,VRAID
  size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
  |-+- policy='round-robin 0' prio=-1 status=active
  | `- 19:0:0:100 sdf 8:176 active undef running
  `-+- policy='round-robin 0' prio=-1 status=enabled
    `- 18:0:0:100 sdg 8:144 active undef running

  As a result, if post live migration on the source side uses the portal
  IP, target IQN and LUN number from the overwritten connection_info to
  find the devices to clean up, it may use 192.168.3.51,
  iqn.1992-04.com.emc:cx.fnm00124500890.a5 and 100.
  However, the correct ones should be 192.168.3.50,
  iqn.1992-04.com.emc:cx.fnm00124500890.a5 and 24.

  Similar philosophy in (https://bugs.launchpad.net/nova/+bug/1327497)
  can be used to fix it: Leverage the unchanged multipath_id to find
  correct devices to delete.
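
  The proposed fix can be sketched as a lookup keyed on the multipath WWID, which stays the same across the migration, rather than on portal/IQN/LUN. The function name and the `by_path_devices` mapping (device path to WWID) are illustrative assumptions, not Nova's actual data structures.

  ```python
  def devices_for_cleanup(multipath_id, by_path_devices):
      """Select source-side devices to disconnect by matching the
      unchanged multipath WWID, instead of the portal/IQN/LUN values
      that were overwritten by the destination's connection_info.
      by_path_devices maps device path -> WWID (illustrative)."""
      return sorted(dev for dev, wwid in by_path_devices.items()
                    if wwid == multipath_id)
  ```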

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357368/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369516] Re: Convert libvirt driver test suites to use NoDBTestCase

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369516

Title:
  Convert libvirt driver test suites to use NoDBTestCase

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  A large number of libvirt test classes inherit from the TestCase
  class, which means they incur the overhead of database setup

  nova/tests/virt/libvirt/test_blockinfo.py:class 
LibvirtBlockInfoTest(test.TestCase):
  nova/tests/virt/libvirt/test_blockinfo.py:class 
DefaultDeviceNamesTestCase(test.TestCase):
  nova/tests/virt/libvirt/test_dmcrypt.py:class 
LibvirtDmcryptTestCase(test.TestCase):
  nova/tests/virt/libvirt/test_driver.py:class 
CacheConcurrencyTestCase(test.TestCase):
  nova/tests/virt/libvirt/test_driver.py:class 
LibvirtConnTestCase(test.TestCase,
  nova/tests/virt/libvirt/test_driver.py:class HostStateTestCase(test.TestCase):
  nova/tests/virt/libvirt/test_driver.py:class 
IptablesFirewallTestCase(test.TestCase):
  nova/tests/virt/libvirt/test_driver.py:class NWFilterTestCase(test.TestCase):
  nova/tests/virt/libvirt/test_driver.py:class 
LibvirtUtilsTestCase(test.TestCase):
  nova/tests/virt/libvirt/test_driver.py:class 
LibvirtDriverTestCase(test.TestCase):
  nova/tests/virt/libvirt/test_driver.py:class 
LibvirtVolumeUsageTestCase(test.TestCase):
  nova/tests/virt/libvirt/test_driver.py:class 
LibvirtNonblockingTestCase(test.TestCase):
  nova/tests/virt/libvirt/test_driver.py:class 
LibvirtVolumeSnapshotTestCase(test.TestCase):
  nova/tests/virt/libvirt/test_imagebackend.py:class 
EncryptedLvmTestCase(_ImageTestCase, test.TestCase):
  nova/tests/virt/libvirt/test_vif.py:class LibvirtVifTestCase(test.TestCase):

  Some of these do not even use the database so can be trivially
  changed. Others will need significant refactoring work to remove
  database access before they can be changed to NoDBTestCase

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369516/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368495] Re: 'type'/'mac_adrr' attribute of server's address field not converted to V2.1

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368495

Title:
  'type'/'mac_adrr' attribute of server's address field not converted to
  V2.1

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  For the 'extended_ips'/'extended_ips_mac' extensions, there are
  differences between the V2 and V3 server show/index & server address
  index API responses, listed below:

  'address' field of V2->V3 server API response-

  "OS-EXT-IPS:type" -> "type"
  "OS-EXT-IPS-MAC:mac_addr" -> "mac_addr"

  Above attribute needs to be fixed in V2.1 to make it backward
  compatible with V2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1368495/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368989] Re: service_update() should not set an RPC timeout longer than service.report_interval

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368989

Title:
  service_update() should not set an RPC timeout longer than
  service.report_interval

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  nova.servicegroup.drivers.db.DbDriver._report_state() is called every
  service.report_interval seconds from a timer in order to periodically
  report the service state.  It calls
  self.conductor_api.service_update().

  If this ends up calling
  nova.conductor.rpcapi.ConductorAPI.service_update(), it will do an RPC
  call() to nova-conductor.

  If anything happens to the RPC server (failover, switchover, etc.) by
  default the RPC code will wait 60 seconds for a response (blocking the
  timer-based calling of _report_state() in the meantime).  This is long
  enough to cause the status in the database to get old enough that
  other services consider this service to be "down".

  Arguably, since we're going to call service_update() again in
  service.report_interval seconds, there's no reason to wait the full 60
  seconds.  Instead, it would make sense to set the RPC timeout for the
  service_update() call to something slightly less than
  service.report_interval seconds.
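
  The timeout choice described above can be sketched as a small helper: cap the RPC timeout so a hung call never outlives the next scheduled report tick. The function name, the one-second margin, and the 60-second default are illustrative assumptions, not the actual Nova fix.

  ```python
  def rpc_timeout_for_report(report_interval, default_timeout=60):
      """Pick an RPC timeout for service_update() that is slightly less
      than the periodic report interval, so a blocked call() gives up
      before the next _report_state() tick is due (sketch)."""
      if report_interval and report_interval < default_timeout:
          # Leave a small margin; never go below one second.
          return max(1, report_interval - 1)
      return default_timeout
  ```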

  I've also submitted a related bug report
  (https://bugs.launchpad.net/bugs/1368917) to improve RPC loss of
  connection in general, but I expect that'll take a while to deal with
  while this particular case can be handled much more easily.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1368989/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367344] Re: Libvirt Watchdog support is broken when ComputeCapabilitiesFilter is used

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367344

Title:
  Libvirt Watchdog support is broken when ComputeCapabilitiesFilter is
  used

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  In Progress
Status in OpenStack Compute (nova) juno series:
  In Progress

Bug description:
  The doc (http://docs.openstack.org/admin-guide-cloud/content
  /customize-flavors.html , section "Watchdog behavior") suggests using
  the flavor extra specs property called "hw_watchdog_action" to
  configure a watchdog device for libvirt guests. Unfortunately, this is
  broken because ComputeCapabilitiesFilter tries to use this property to
  filter compute hosts, so that scheduling of a new instance always
  fails with a NoValidHostFound error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367344/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367845] Re: Traceback is logged when delete an instance.

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367845

Title:
  Traceback is logged when delete an instance.

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  OpenStack version: Icehouse

  Issue: a traceback is logged in nova-compute.log when deleting an
  instance.

  Setup: 3 nodes, namely Controller, Compute and Network; with nova-
  compute running solely on Compute node.

  Steps to reproduce:
  1. Create an instance using image cirrus 0.3.2.
  2. Verify instance is running: nova list
  3. Delete the instance: nova delete 
  4. Check nova-compute.log at Compute node.

  root@Controller:/home/guest# nova --version
  2.17.0
  root@Controller:/home/guest# nova service-list
  +------------------+------------+----------+---------+-------+----------------------------+-----------------+
  | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
  +------------------+------------+----------+---------+-------+----------------------------+-----------------+
  | nova-cert        | Controller | internal | enabled | up    | 2014-09-10T17:12:34.00     | -               |
  | nova-conductor   | Controller | internal | enabled | up    | 2014-09-10T17:12:26.00     | -               |
  | nova-consoleauth | Controller | internal | enabled | up    | 2014-09-10T17:12:28.00     | -               |
  | nova-scheduler   | Controller | internal | enabled | up    | 2014-09-10T17:12:31.00     | -               |
  | nova-compute     | Compute    | nova     | enabled | up    | 2014-09-10T17:12:34.00     | -               |
  +------------------+------------+----------+---------+-------+----------------------------+-----------------+
  root@Controller:/home/guest# nova boot --image cirros-0.3.2-x86_64 --flavor 1 --nic net-id=75375f9b-0f26-4e1a-aedc-24457192f265 cirros
  +--------------------------------------+------------------------------------------------------------+
  | Property                             | Value                                                      |
  +--------------------------------------+------------------------------------------------------------+
  | OS-DCF:diskConfig                    | MANUAL                                                     |
  | OS-EXT-AZ:availability_zone          | nova                                                       |
  | OS-EXT-SRV-ATTR:host                 | -                                                          |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                          |
  | OS-EXT-SRV-ATTR:instance_name        | instance-0046                                              |
  | OS-EXT-STS:power_state               | 0                                                          |
  | OS-EXT-STS:task_state                | scheduling                                                 |
  | OS-EXT-STS:vm_state                  | building                                                   |
  | OS-SRV-USG:launched_at               | -                                                          |
  | OS-SRV-USG:terminated_at             | -                                                          |
  | accessIPv4                           |                                                            |
  | accessIPv6                           |                                                            |
  | adminPass                            | jFmNDB5Jsd77                                               |
  | config_drive                         |                                                            |
  | created                              | 2014-09-10T17:13:06Z                                       |
  | flavor                               | m1.tiny (1)                                                |
  | hostId                               |                                                            |
  | id                                   | bc01c570-c40f-4088-a17c-0278fc6c3315                       |
  | image                                | cirros-0.3.2-x86_64 (38f00c62-f9df-4133-abf2-7c9ba948d414) |
  | key_name                             | -                                                          |
  | metadata                             | {}                                                         |
  | name                                 | cirros                                                     |
  | os-extended-volumes:volumes_attached | []                                                         |
  | progress                             | 0                                                          |
  | security_groups                      | default

[Yahoo-eng-team] [Bug 1375379] Re: console: wrong check when verify the server response

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375379

Title:
  console: wrong check when verify the server response

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When trying to connect to a console with internal_access_path, if the
  server does not respond with 200 we should raise an exception, but the
  current code does not ensure this.

  
https://github.com/openstack/nova/blob/master/nova/console/websocketproxy.py#L68

  
  The method 'find' returns -1 on failure, not False or 0.
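
  The pitfall is easy to reproduce: because -1 is truthy, a bare `if data.find(...)` check passes even when the substring is absent. A minimal sketch (the response string is illustrative, not the proxy's actual data):

  ```python
  # A server response that is NOT the expected success status line.
  response = "HTTP/1.1 500 Internal Server Error\r\n\r\n"

  # Buggy check: str.find returns -1 when the substring is absent,
  # and bool(-1) is True, so the 500 response wrongly passes.
  buggy_ok = bool(response.find("200 OK"))

  # Correct check: compare against -1 explicitly (or use "in").
  correct_ok = response.find("200 OK") != -1
  ```

  An `in` membership test (`"200 OK" in response`) expresses the same intent even more directly.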

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1158552] Re: network_rpcapi.allocate_for_instance timing out under load.

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1158552

Title:
  network_rpcapi.allocate_for_instance timing out under load.

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Environment:

  * devstack  with nova Grizzly-RC1
  *  'compute_driver = nova.virt.fake.FakeDriver' in nova.conf
  * following nova branch https://github.com/jogo/nova/commits/perf_logging 
(commit c55372908de3e66bc000ebb9dd17f688b4914101)
  * no quantum

  when trying to run 50 VMs  in devstack using 'euca-run-instances
  ami-0003 -t m1.micro -n 100'

  I see network_rpcapi.allocate_for_instance occasionally timing out
  over RPC (a call from nova-compute to nova-network).

  nova-compute log http://paste.openstack.org/show/34252/
  nova-network log http://paste.openstack.org/show/34255/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1158552/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326238] Re: libvirt: use qdisk instead of blktap for Xen

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326238

Title:
  libvirt: use qdisk instead of blktap for Xen

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Using libvirt and Xen, nova is using "tap" (Xen 4.0) and "tap2" (Xen >
  4.0) for the disk driver.

  According to the Xen documentation and ML, the usage of qdisk is preferred
  against blktap2 [1].

  [1] http://lists.xen.org/archives/html/xen-devel/2013-08/msg02633.html

  Moreover, libxenlight (xl, libxl) is the replacement of xend (xm) from Xen >=
  4.2. According to the disk configuration documentation for xl [2], the device
  driver "(...) should not be specified, in which case libxl will automatically
   determine the most suitable backend."

  [2] http://xenbits.xen.org/docs/unstable/misc/xl-disk-
  configuration.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1326238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361797] Re: unused code in pci_manager.get_instance_pci_devs()

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361797

Title:
  unused code in pci_manager.get_instance_pci_devs()

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  def get_instance_pci_devs(inst):
      """Get the devices assigned to the instances."""
      if isinstance(inst, objects.Instance):
          return inst.pci_devices
      else:
          ctxt = context.get_admin_context()
          return objects.PciDeviceList.get_by_instance_uuid(
              ctxt, inst['uuid'])

  In the above code, the else branch is not reached by the normal code
  flow. Removing it may break some of the unit tests, so a fix is also
  needed in the unit test code that uses it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361797/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361490] Re: param check for backup rotatetype is needed

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361490

Title:
  param check for backup rotatetype is needed

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  jichen@cloudcontroller:~$ nova backup jitest1 jiback1  2
  jichen@cloudcontroller:~$ nova list
  +--------------------------------------+---------+--------+------------+-------------+--------------------+
  | ID                                   | Name    | Status | Task State | Power State | Networks           |
  +--------------------------------------+---------+--------+------------+-------------+--------------------+
  | cb7c6742-7b7a-44de-ad5a-8570ee520f9e | jitest1 | ACTIVE | -          | Running     | private=10.0.0.2   |
  | 702d1d2b-f72d-4759-8f13-9ffbcc0ca934 | jitest3 | PAUSED | -          | Paused      | private=10.0.0.200 |
  +--------------------------------------+---------+--------+------------+-------------+--------------------+

  
  However, after I proposed a solution to change v2, we got the
  suggestion that this value is only used in glance, so we should
  delegate the check to glance to decide whether it's proper or not.
  So the final solution is to leave it as is and ignore the error listed
  above.

  Currently the v2.1 (v3) API has validation for it; the proposed patch
  will remove that validation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361490/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316079] Re: Migrate attached volume failed

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1316079

Title:
  Migrate attached volume failed

Status in Cinder:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In the nova-compute log, we hit an exception when migrating the attached volume:
  File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/compute/manager.py", line 301, in decorated_function
  return function(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/compute/manager.py", line 4337, in swap_volume
  new_volume_id)
  File "/opt/stack/nova/nova/compute/manager.py", line 4317, in _swap_volume
  mountpoint)
  File "/opt/stack/nova/nova/volume/cinder.py", line 173, in wrapper
  res = method(self, ctx, volume_id, *args, **kwargs)
  File "/opt/stack/nova/nova/volume/cinder.py", line 261, in attach
  mountpoint)
  File "/opt/stack/python-cinderclient/cinderclient/v1/volumes.py", line 266, in attach
  'mode': mode})
  File "/opt/stack/python-cinderclient/cinderclient/v1/volumes.py", line 250, in _action
  return self.api.client.post(url, body=body)
  File "/opt/stack/python-cinderclient/cinderclient/client.py", line 223, in post
  return self._cs_request(url, 'POST', **kwargs)
  File "/opt/stack/python-cinderclient/cinderclient/client.py", line 187, in _cs_request
  **kwargs)
  File "/opt/stack/python-cinderclient/cinderclient/client.py", line 170, in request
  raise exceptions.from_response(resp, body)
  VolumeNotFound: Volume fdb681d1-2de9-4193-8f4e-775d21301512 could not be found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1316079/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363901] Re: HTTP 500 is returned when using an in-used fixed ip to attach interface

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363901

Title:
  HTTP 500 is returned when using an in-used fixed ip to attach
  interface

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When I post an 'attach interface' request to Nova with an in-use
  fixed IP, Nova returns an HTTP 500 error and a confusing error
  message.

  REQ: curl -i 
'http://10.90.10.24:8774/v2/19abae5746b242d489d1c2862b228d8b/servers/b5cdb8f7-2350-4e28-bf75-7a696dfba73a/os-interface'
 -X POST -H "Accept: application/json" -H "Content-Type: application/json" -H 
"User-Agent: python-novaclient" -H "X-Auth-Project-Id: Public" -H 
"X-Auth-Token: {SHA1}f04a301215d1014df8a0c7a32818235c2c5fbd1a" -d 
'{"interfaceAttachment": {"fixed_ips": [{"ip_address": "10.100.99.4"}], 
"net_id": "173854d5-333f-4c78-b5a5-10d2e9c8d827"}}'
  INFO (connectionpool:187) Starting new HTTP connection (1): 10.90.10.24
  DEBUG (connectionpool:357) "POST 
/v2/19abae5746b242d489d1c2862b228d8b/servers/b5cdb8f7-2350-4e28-bf75-7a696dfba73a/os-interface
 HTTP/1.1" 500 128
  RESP: [500] {'date': 'Mon, 01 Sep 2014 09:02:24 GMT', 'content-length': 
'128', 'content-type': 'application/json; charset=UTF-8', 
'x-compute-request-id': 'req-abcdfaab-c208-4089-9e2e-d63bed1e8dfa'}
  RESP BODY: {"computeFault": {"message": "The server has either erred or is 
incapable of performing the requested operation.", "code": 500}}

  
  In fact, Nova works perfectly well; the error is caused by my
  incorrect input. The Neutron client can return an IpAddressInUseClient
  exception, so Nova should be able to handle the error and return an
  HTTP 400 error in order to inform the user to correct the request.
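
  The requested behavior can be sketched as a translation layer: catch the Neutron "IP address in use" failure and re-raise it as a client error the API layer can map to HTTP 400. The stand-in exception classes and wrapper below are illustrative assumptions, not Nova's or neutronclient's actual code.

  ```python
  class IpAddressInUseClient(Exception):
      """Stand-in for the neutronclient exception of the same name."""

  class FixedIpAlreadyInUse(Exception):
      """Client error the API layer can map to HTTP 400, not 500."""

  def attach_with_fixed_ip(create_port, ip):
      # Translate the Neutron failure into a client error with an
      # actionable message instead of letting it surface as a 500.
      try:
          return create_port(ip)
      except IpAddressInUseClient:
          raise FixedIpAlreadyInUse("Fixed IP %s is already in use." % ip)
  ```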

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363901/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294939] Re: Add a fixed IP to an instance failed

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294939

Title:
  Add a fixed IP to an instance failed

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  +--+---+-+
  | ID   | Label | CIDR|
  +--+---+-+
  | be95de64-a2aa-42de-a522-37802cdbe133 | vmnet | 10.0.0.0/24 |
  | 0fd904f5-1870-4066-8213-94038b49be2e | abc   | 10.1.0.0/24 |
  | 7cd88ead-fd42-4441-9182-72b3164c108d | abd   | 10.2.0.0/24 |
  +--+---+-+

  nova  add-fixed-ip test15 0fd904f5-1870-4066-8213-94038b49be2e

  failed with following logs

  
  2014-03-19 03:29:30.546 7822 ERROR nova.openstack.common.rpc.amqp [req-fd087223-3646-4fed-b0f6-5a5cf50828eb d6779a827003465db2d3c52fe135d926 45210fba73d24dd681dc5c292c6b1e7f] Exception during message handling
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp     **args)
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp     result = getattr(proxyobj, method)(ctxt, **kwargs)
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/network/manager.py", line 772, in add_fixed_ip_to_instance
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp     self._allocate_fixed_ips(context, instance_id, host, [network])
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/network/manager.py", line 214, in _allocate_fixed_ips
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp     vpn=vpn, address=address)
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/network/manager.py", line 881, in allocate_fixed_ip
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp     self.quotas.rollback(context, reservations)
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/network/manager.py", line 859, in allocate_fixed_ip
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp     'virtual_interface_id': vif['id']}
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp TypeError: 'NoneType' object is unsubscriptable
  2014-03-19 03:29:30.546 7822 TRACE nova.openstack.common.rpc.amqp

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294939/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1292733] Re: Ironic: unplugging of instance VIFs fails if no VIFs associated with port

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1292733

Title:
  Ironic: unplugging of instance VIFs fails if no VIFs associated with
  port

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  During instance spawn, Ironic attempts to unplug any plugged VIFs from
  ports associated with an instance.  If there are no associated VIFs to
  unplug, instance spawn fails with a nova-compute error:

  2014-03-14 21:15:35.907 16640 TRACE nova.openstack.common.loopingcall
  HTTPBadRequest: Couldn't apply patch '[{'path': '/extra/vif_port_id',
  'op': 'remove'}]'. Reason: u'vif_port_id'

  The driver should only attempt to unplug VIFs from ports that
  actually have them associated.
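
  The guard can be sketched as follows: build the `remove` patch only for ports whose `extra` dict actually carries a `vif_port_id`, so no patch is ever sent for a missing key. The function name and the simplified port-dict shape are illustrative assumptions, not the actual Ironic driver code.

  ```python
  def vif_unplug_patches(ports):
      """Build (port_uuid, json_patch) pairs only for ports that have a
      vif_port_id in their extra field, so an 'op: remove' is never sent
      for a key that does not exist (sketch; port dicts simplified)."""
      patches = []
      for port in ports:
          if 'vif_port_id' in port.get('extra', {}):
              patches.append((port['uuid'],
                              [{'op': 'remove',
                                'path': '/extra/vif_port_id'}]))
      return patches
  ```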

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1292733/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371406] Re: should error when unshelve an volume backed instance which the image was deleted

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1371406

Title:
  should error when unshelve an volume backed instance which the image
  was deleted

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Shelve an instance first. Nova will create a snapshot image named
  after the instance with a -shelved suffix, like: t2-shelved

  Manually delete the image with
  nova image-delete t2-shelved

  Then do an unshelve. It should report an image-not-found error and
  mark the instance vm_state ERROR.

  
  This is the output of glance in the conductor. The image is deleted,
  but nova doesn't pick up the deleted status of the image.

  2014-09-19 11:12:20.534 DEBUG glanceclient.common.http 
[req-d2aaa493-b507-4f75-a185-c83d347f82b0 admin admin] 
  HTTP/1.1 200 OK
  content-length: 0
  x-image-meta-id: 2a6ce744-b96b-4f71-882e-13c4fc1c9da1
  date: Fri, 19 Sep 2014 03:12:20 GMT
  x-image-meta-deleted: True  <<<===
  x-image-meta-container_format: ami
  x-image-meta-checksum: 1b31d2e911494696c6d190ccef2f4d64
  x-image-meta-deleted_at: 2014-09-19T02:41:27
  x-image-meta-min_disk: 0
  x-image-meta-protected: False
  x-image-meta-created_at: 2014-09-18T04:03:09
  x-image-meta-size: 10616832
  x-image-meta-status: deleted <<<===
  etag: 1b31d2e911494696c6d190ccef2f4d64
  x-image-meta-is_public: False
  x-image-meta-min_ram: 0
  x-image-meta-owner: d7beb7f28e0b4f41901215000339361d
  x-image-meta-updated_at: 2014-09-19T02:41:27
  content-type: text/html; charset=UTF-8
  x-openstack-request-id: req-e95ddab3-bee5-4847-ba55-13be02bc1a14
  x-image-meta-disk_format: ami
  x-image-meta-name: t2-shelved
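
  The requested behavior can be sketched as an early check on the image metadata Glance already returns (mirroring the `x-image-meta-deleted` / `x-image-meta-status` headers above): refuse to unshelve when the shelved snapshot was deleted. The exception class, function name, and dict keys are illustrative assumptions, not Nova's actual code.

  ```python
  class ImageNotFound(Exception):
      """Raised so the compute manager can set vm_state to ERROR."""

  def ensure_unshelve_image(image_meta):
      """Fail the unshelve early when the shelved snapshot has been
      deleted, instead of proceeding on stale metadata (sketch; keys
      mirror the x-image-meta-* headers)."""
      if image_meta.get('deleted') or image_meta.get('status') == 'deleted':
          raise ImageNotFound("shelved image %s was deleted"
                              % image_meta.get('id', '?'))
      return image_meta
  ```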

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1371406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357263] Re: Unhelpful error message when attempting to boot a guest with an invalid guestId

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357263

Title:
  Unhelpful error message when attempting to boot a guest with an
  invalid guestId

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When booting a VMware instance from an image, guestId is taken from
  the vmware_ostype property in glance. If this value is invalid,
  spawn() will fail with the error message:

  VMwareDriverException: A specified parameter was not correct.

  As there are many parameters to CreateVM_Task, this error message does
  not help us narrow down the offending one. Unfortunately this error
  message is all that vSphere provides us, so we can't do better by
  relying on vSphere alone.

  As this is a user-editable parameter, we should try harder to provide
  an indication of what the error might be. We can do this by validating
  the field ourselves. As there is no way I'm aware of to extract a
  canonical list of valid guestIds from a running vSphere host, I think
  we're left embedding our own list and validating against it. This is
  not ideal, because:

  1. We will need to update our list for every ESX release
  2. A simple list will not take account of the ESX version we're running 
against (i.e. we may have a list for 5.5, but be running against 5.1, which 
doesn't support everything on our list)

  Consequently, to maintain a loose coupling we should validate the
  field, but only warn for values we don't recognise. vSphere will
  continue to return its non-specific error message, but there will be
  an additional indication of what the root cause might be in the logs.
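  The loose-coupling validation proposed above could be sketched roughly as
  follows. This is not Nova's actual implementation; the function name and
  the (abbreviated) guest-ID list are illustrative:

```python
import logging

LOG = logging.getLogger(__name__)

# Illustrative subset; a real list would have to track each ESX release.
KNOWN_GUEST_IDS = {"otherGuest", "ubuntu64Guest", "windows8Server64Guest"}


def validate_guest_id(guest_id):
    """Warn, but do not fail, when vmware_ostype looks unrecognised."""
    if guest_id not in KNOWN_GUEST_IDS:
        # vSphere will still raise its non-specific error; this log line
        # gives the operator a hint about the likely root cause.
        LOG.warning("vmware_ostype '%s' is not in the known guest ID "
                    "list; CreateVM_Task may fail", guest_id)
        return False
    return True
```

  Warning instead of raising keeps the driver usable against ESX versions
  whose guest-ID list differs from the embedded one.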

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357263/+subscriptions



[Yahoo-eng-team] [Bug 1375467] Re: db deadlock on _instance_update()

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375467

Title:
  db deadlock on _instance_update()

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  continuing from the same pattern as that of
  https://bugs.launchpad.net/nova/+bug/1370191, we are also observing
  unhandled deadlocks on derivatives of _instance_update(), such as the
  stacktrace below.  As _instance_update() is a point of transaction
  demarcation based on its use of get_session(), the @_retry_on_deadlock
  should be added to this method.

  Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 133, in _dispatch_and_reply\
  incoming.message))\
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 176, in _dispatch\
  return self._do_dispatch(endpoint, method, ctxt, args)\
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 122, in _do_dispatch\
  result = getattr(endpoint, method)(ctxt, **new_args)\
  File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 887, 
in instance_update\
  service)\
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/server.py", line 
139, in inner\
  return func(*args, **kwargs)\
  File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 130, 
in instance_update\
  context, instance_uuid, updates)\
  File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 742, in 
instance_update_and_get_original\
   columns_to_join=columns_to_join)\
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 164, 
in wrapper\
  return f(*args, **kwargs)\
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2208, 
in instance_update_and_get_original\
   columns_to_join=columns_to_join)\
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2299, 
in _instance_update\
  session.add(instance_ref)\
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 
447, in __exit__\
  self.rollback()\
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", 
line 58, in __exit__\
  compat.reraise(exc_type, exc_value, exc_tb)\
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 
444, in __exit__\
  self.commit()\
  File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/db/sqlalchemy/sessi 
on.py", line 443, in _wrap\
  _raise_if_deadlock_error(e, self.bind.dialect.name)\
  File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/db/sqlalchemy/sessi 
on.py", line 427, in _raise_if_deadlock_error\
  raise exception.DBDeadlock(operational_error)\
  DBDeadlock: (OperationalError) (1213, \'Deadlock found when trying to get 
lock; try restarting transaction\') None None\
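  A minimal sketch of the kind of retry decorator the report proposes
  adding around _instance_update. Names are illustrative; Nova's real
  @_retry_on_deadlock lives in its DB API layer and catches the oslo
  DBDeadlock seen in the traceback above:

```python
import functools
import time


class DBDeadlock(Exception):
    """Stand-in for the DBDeadlock raised in the traceback above."""


def retry_on_deadlock(max_retries=3, delay=0.05):
    """Re-run the wrapped DB call when the backend reports a deadlock."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_retries - 1:
                        raise  # give up after the last attempt
                    time.sleep(delay)  # brief back-off, then retry
        return wrapper
    return decorator
```

  Because the whole transaction is demarcated by the decorated function,
  re-running it after a rollback is safe.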

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375467/+subscriptions



[Yahoo-eng-team] [Bug 1293480] Re: Reboot host didn't restart instances due to libvirt lifecycle event change instance's power_stat as shutdown

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1293480

Title:
  Reboot host  didn't restart instances due to  libvirt lifecycle event
  change instance's power_stat as shutdown

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  1. The libvirt driver receives libvirt lifecycle events (registered in
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1004)
  and handles them in
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L969.
  That means shutting down a domain sends out a shutdown lifecycle event
  and nova-compute tries to sync the instance's power_state.

  2. When the compute service is rebooted, it tries to restart the
  instances that were running before the reboot
  (https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L911).
  The compute service only checks the power_state in the database, but
  that value can be changed by the sequence described in 3. As a result,
  after a host reboot some instances that were running beforehand are not
  restarted.

  3. When the host is rebooted, the code path is: 1) libvirt-guests shuts
  down all the domains, 2) a shutdown lifecycle event is sent out,
  3) nova-compute receives it, 4) saves power_state 'shutoff' in the
  database, and 5) tries to stop the instance. The compute service may be
  killed at any step. In my test environment, of two running instances
  only one was restarted successfully; the other had its power_state set
  to 'shutoff' and task_state to 'power off' in step 4), so it cannot
  pass the check in
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L911
  and is not restarted.

  
  Not sure whether this is a bug; I wonder if there is a solution for
  this.
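  The boot-time check involved can be sketched as follows. This is an
  illustrative model, not nova's actual code; the constants mirror
  nova.compute.power_state:

```python
# power_state values as recorded in the nova database.
RUNNING = 0x01
SHUTDOWN = 0x04


def should_resume_on_host_boot(db_power_state, resume_guests=True):
    """An instance is resumed only if the DB still says it was running.

    A stale 'shutoff' written by a late libvirt lifecycle event
    therefore prevents the restart after a host reboot.
    """
    return resume_guests and db_power_state == RUNNING
```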

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1293480/+subscriptions



[Yahoo-eng-team] [Bug 1334857] Re: EC2 metadata returns ip of instance and ip of nova-api service node

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334857

Title:
  EC2 metadata returns ip of instance and ip of nova-api service node

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  curl -vL http://169.254.169.254/latest/meta-data/local-ipv4/
  * About to connect() to 169.254.169.254 port 80 (#0)
  *   Trying 169.254.169.254... connected
  * Connected to 169.254.169.254 (169.254.169.254) port 80 (#0)
  > GET /latest/meta-data/local-ipv4/ HTTP/1.1
  > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 
NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
  > Host: 169.254.169.254
  > Accept: */*
  > 
  < HTTP/1.1 200 OK
  < Content-Type: text/html; charset=UTF-8
  < Content-Length: 12
  < Date: Sun, 22 Jun 2014 15:15:52 GMT
  < 
  * Connection #0 to host 169.254.169.254 left intact
  * Closing connection #0
  192.168.0.22, 10.2.0.50

  192.168.0.22 - instance ip 10.2.0.50 - controller ip

  Happens only for /latest/meta-data/local-ipv4/

  Quick investigation shows that the issue is caused by

  https://github.com/openstack/nova/blob/master/nova/api/metadata/base.py#L243

  'local-ipv4': self.address or fixed_ip,

  self.address variable contains "192.168.0.22, 10.2.0.50" while
  fixed_ip contains correct "192.168.0.22" value.

  The workaround is to swap those two variables ('local-ipv4': fixed_ip
  or self.address) and restart all nova-compute services.
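  The root cause is ordinary Python "or" truthiness: both operands are
  non-empty strings, so the first one wins. A minimal reproduction using
  the values from the report:

```python
# self.address as observed in the bug: a comma-joined pair of IPs.
address = "192.168.0.22, 10.2.0.50"
fixed_ip = "192.168.0.22"

# Both strings are truthy, so "or" simply returns its first operand.
buggy = address or fixed_ip       # the whole comma-joined string
workaround = fixed_ip or address  # just the instance IP
```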

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1334857/+subscriptions



[Yahoo-eng-team] [Bug 1376586] Re: pre_live_migration is missing some disk information in case of block migration

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376586

Title:
  pre_live_migration is missing some disk information in case of block
  migration

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The pre_live_migration API is called with a disk retrieved by a call
  to driver.get_instance_disk_info when doing a block migration.
  Unfortunately block device information is not passed, so Nova is
  calling LibvirtDriver._create_images_and_backing with partial
  disk_info.

  As a result, for example when migrating an instance with an NFS volume
  attached, a useless file is created in the instance directory.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376586/+subscriptions



[Yahoo-eng-team] [Bug 1333145] Re: quota-usage error in soft-delete

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1333145

Title:
  quota-usage error in soft-delete

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  condition: reclaim_instance_interval > 0 in nova.conf

  While testing soft-delete I found that the quota_usages table can end
  up with incorrect values. When an instance is soft-deleted and, before
  it is deleted completely by the periodic task, is soft-deleted again,
  the quota_usages table deducts the instance's resources twice.

  The reason is that the quota reservations are committed on every
  execution of the soft-delete.

  How to fix it: we should pass reservations=None when
  instance.vm_state is already 'soft-deleted'.
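  The proposed guard can be sketched like this (a toy model with a single
  'cores' counter; the names and structure are illustrative, not Nova's
  actual quota code):

```python
SOFT_DELETED = "soft-deleted"


def soft_delete(instance, usage):
    """Soft-delete an instance, committing quota reservations only once."""
    if instance["vm_state"] == SOFT_DELETED:
        reservations = None  # already accounted for; do not commit again
    else:
        reservations = {"cores": instance["cores"]}
    if reservations:
        usage["cores"] -= reservations["cores"]
    instance["vm_state"] = SOFT_DELETED
```

  With the guard in place, a repeated soft-delete leaves the usage
  counters unchanged instead of decrementing them a second time.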

  
  How to reproduce it:

  i am project_id='30528b0d602c4a9c9d8b4cd3d416d710', and I have an
  instance:

  ubuntu@xfolsom:/opt/stack/nova$ nova list
  
+--+--+++-+--+
  | ID   | Name | Status | Task State | Power 
State | Networks |
  
+--+--+++-+--+
  | 6f6c1258-6eda-43f1-9531-7a4eb0b44724 | test | ACTIVE | -  | Running 
| private=10.0.0.2 |
  
+--+--+++-+--+

  1.first select from quota_usage, the result is :

  mysql> select * from quota_usages;
  
+-+-+++--+-++--+---+-+--+
  | created_at  | updated_at  | deleted_at | id | project_id
   | resource| in_use | reserved | until_refresh | 
deleted | user_id  |
  
+-+-+++--+-++--+---+-+--+
  | 2014-06-20 08:24:35 | 2014-06-23 08:35:03 | NULL   |  1 | 
30528b0d602c4a9c9d8b4cd3d416d710 | instances   |  1 |0 |
  NULL |   0 | e522bb6fecaa4a69b6d7df69211dab13 |
  | 2014-06-20 08:24:35 | 2014-06-23 08:35:03 | NULL   |  2 | 
30528b0d602c4a9c9d8b4cd3d416d710 | ram | 64 |0 |
  NULL |   0 | e522bb6fecaa4a69b6d7df69211dab13 |
  | 2014-06-20 08:24:35 | 2014-06-23 08:35:03 | NULL   |  3 | 
30528b0d602c4a9c9d8b4cd3d416d710 | cores   |  1 |0 |
  NULL |   0 | e522bb6fecaa4a69b6d7df69211dab13 |
  | 2014-06-20 08:24:35 | 2014-06-20 08:24:35 | NULL   |  4 | 
30528b0d602c4a9c9d8b4cd3d416d710 | security_groups |  1 |0 |
  NULL |   0 | e522bb6fecaa4a69b6d7df69211dab13 |
  | 2014-06-20 08:24:36 | 2014-06-23 03:56:03 | NULL   |  5 | 
30528b0d602c4a9c9d8b4cd3d416d710 | fixed_ips   |  1 |0 |
  NULL |   0 | NULL |
  
+-+-+++--+-++--+---+-+--+
  5 rows in set (0.00 sec)

  2.using nova-network, set reclaim_instance_interval=600 in nova.conf.
  3.nova delete 6f6c1258-6eda-43f1-9531-7a4eb0b44724
  4. select from quota_usages, result is :

  mysql> select * from quota_usages;
  
+-+-+++--+-++--+---+-+--+
  | created_at  | updated_at  | deleted_at | id | project_id
   | resource| in_use | reserved | until_refresh | 
deleted | user_id  |
  
+-+-+++--+-++--+---+-+--+
  | 2014-06-20 08:24:35 | 2014-06-23 08:42:30 | NULL   |  1 | 
30528b0d602c4a9c9d8b4cd3d416d710 | instances   |  0 |0 |
  NULL |   0 | e522bb6fecaa4a69b6d7df69211dab13 |
  | 2014-06-20 08:24:35 | 2014-06-23 08:42:30 | NULL   |  2 | 
30528b0d602c4a9c9d8b4cd3d416d710 | ram |  0 |0 |
  NULL |   0 | e522bb6fecaa4a69b6d7df69211dab13 |
  | 2014-06-20 08:24:35 | 2014-06-23 08:42:30 | NULL   |  3 | 
30528b0d602c4a9c9d8b4cd3d416d710 | cores   |  0 |0 |
  NULL |   0 | e522bb6fecaa4a69b6d7df69211dab13 |
  | 2014-06-20 08:24:35 | 2014-06-20 08:24:

[Yahoo-eng-team] [Bug 1278736] Re: Strings passed to InvalidAggregateAction should be translated

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1278736

Title:
  Strings passed to InvalidAggregateAction should be translated

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  InvalidAggregateAction looks like:

   class InvalidAggregateAction(Invalid):
   msg_fmt = _("Cannot perform action '%(action)s' on aggregate "
"%(aggregate_id)s. Reason: %(reason)s.")

  
  The values for action are:

delete
add_host_to_aggregate
update aggregate
update aggregate metadata

  Also, we use 'not empty' untranslated as a reason string in one place.

  We should standardize these strings a little more and always translate
  them.
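  The pattern being asked for, with every interpolated value wrapped in
  the translation marker, can be sketched as follows (here _ is a no-op
  stand-in for gettext):

```python
# No-op stand-in for the gettext marker used throughout nova.
_ = lambda s: s


class InvalidAggregateAction(Exception):
    msg_fmt = _("Cannot perform action '%(action)s' on aggregate "
                "%(aggregate_id)s. Reason: %(reason)s.")

    def __init__(self, **kwargs):
        # Interpolate the (translated) values into the format string.
        super().__init__(self.msg_fmt % kwargs)


# Translated values are passed in, rather than raw English literals.
err = InvalidAggregateAction(action=_("delete"), aggregate_id=1,
                             reason=_("not empty"))
```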

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1278736/+subscriptions



[Yahoo-eng-team] [Bug 1363326] Re: Error retries in _allocate_network

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363326

Title:
  Error retries in _allocate_network

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In nova/compute/manager.py, in _allocate_network_async (line 1559):

  attempts = retries > 1 and retries + 1 or 1
  retry_time = 1
  for attempt in range(1, attempts + 1):

  The attempts variable is meant to determine how many times network
  allocation is tried, but the expression has a small mistake.
  See the simulation results below:
  retries=0, attempts=1
  retries=1, attempts=1
  retries=2, attempts=3
  When retries=1, attempts is 1, so it actually does not retry.
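  The mistake is the old-style and/or conditional. Reproducing the
  expression directly shows the off-by-one at retries=1 (a retry count of
  1 should mean two attempts):

```python
def attempts_for(retries):
    # The expression from _allocate_network_async, verbatim: an and/or
    # conditional that mishandles the retries == 1 case.
    return retries > 1 and retries + 1 or 1


for retries in (0, 1, 2):
    print("retries=%d, attempts=%d" % (retries, attempts_for(retries)))
# retries=0, attempts=1
# retries=1, attempts=1   <- no retry actually happens
# retries=2, attempts=3
```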

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363326/+subscriptions



[Yahoo-eng-team] [Bug 1279857] Re: RFE: libguestfs logging should be connected up to openstack logging

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279857

Title:
  RFE: libguestfs logging should be connected up to openstack logging

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  https://bugzilla.redhat.com/show_bug.cgi?id=1064948

  We were trying to chase up a bug in libguestfs integration with
  OpenStack.  It was made much harder because the only way to diagnose
  the bug was to manually run the nova service after manually setting
  environment variables:
  http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs

  It would be much nicer if:

  (1) There was a Nova setting to enable debugging, like:
libguestfs_debug = 1
  or something along those lines.

  (2) Nova used the events API to collect libguestfs debug messages
  and push them into Openstack's own logging system.  See code
  example below.

  -

  Here is how you enable logging programmatically and capture
  the log messages.

  (a) As soon as possible after creating the guestfs handle, call
  either (or better, both) of these functions:

  g.set_trace (1) # just traces libguestfs API calls
  g.set_verbose (1)   # verbose debugging

  (b) Register an event handler like this:

  events = guestfs.EVENT_APPLIANCE | guestfs.EVENT_LIBRARY \
   | guestfs.EVENT_WARNING | guestfs.EVENT_TRACE
  g.set_event_callback (log_callback, events)

  (c) The log_callback function should look something like this:

  def log_callback (ev,eh,buf,array):
  if ev == guestfs.EVENT_APPLIANCE:
  buf = buf.rstrip()
  # What just happened?
  LOG.debug ("event=%s eh=%d buf='%s' array=%s" %
 (guestfs.event_to_string (ev), eh, buf, array))

  There is a fully working example here:

  https://github.com/libguestfs/libguestfs/blob/master/python/t/420-log-
  messages.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279857/+subscriptions



[Yahoo-eng-team] [Bug 1356552] Re: Live migration: "Disk of instance is too large" when using a volume stored on NFS

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356552

Title:
  Live migration: "Disk of instance is too large" when using a volume
  stored on NFS

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When live-migrating an instance that has a Cinder volume (stored on
  NFS) attached, the operation fails if the volume size is bigger than
  the space left on the destination node. This should not happen, since
  this volume does not have to be migrated. Here is how to reproduce the
  bug on a cluster with one control node and two compute nodes, using
  the NFS backend of Cinder.

  
  $ nova boot --flavor m1.tiny --image 173241e-babb-45c7-a35f-b9b62e8ced78 
test_vm
  ...

  $ nova volume-create --display-name test_volume 100
  ...
  | id  | 6b9e1d03-3f53-4454-add9-a8c32d82c7e6 |
  ...

  
  $ nova volume-attach test_vm  6b9e1d03-3f53-4454-add9-a8c32d82c7e6 auto
  ...

  $ nova show test_vm | grep OS-EXT-SRV-ATTR:host
  | OS-EXT-SRV-ATTR:host | t1-cpunode0  
  |

  $ nova service-list | grep nova-compute
  | nova-compute | t1-cpunode0 | nova | enabled | up| 
2014-08-13T19:14:40.00 | -   |
  | nova-compute | t1-cpunode1 | nova | enabled | up| 
2014-08-13T19:14:41.00 | -   |

  Now, let's say I want to live-migrate test_vm to t1-cpunode1:

  $ nova live-migration --block-migrate test_vm t1-cpunode1
  ERROR: Migration pre-check error: Unable to migrate 
a0d9c991-7931-4710-8684-282b1df4cca6: Disk of instance is too large(available 
on destination host:46170898432 < need:108447924224) (HTTP 400) (Request-ID: 
req-b4f00867-df51-44be-8f97-577be385d536)

  
  In nova/virt/libvirt/driver.py, _assert_dest_node_has_enough_disk() calls 
get_instance_disk_info(), which in turn, calls _get_instance_disk_info(). In 
this method, we see that volume devices are not taken into account when 
computing the amount of space needed to migrate an instance:

  ...
  if disk_type != 'file':
  LOG.debug('skipping %s since it looks like volume', path)
  continue

  if target in volume_devices:
  LOG.debug('skipping disk %(path)s (%(target)s) as it is a '
'volume', {'path': path, 'target': target})
  continue
  ...

  But for some reason, we never get into these conditions.

  If we ssh the compute where the instance currently lies, we can get
  more information about it:

  $ virsh dumpxml 11
  ...
  [disk element XML stripped by the mail archive; a <disk type='file'>
  element whose <serial> is 6b9e1d03-3f53-4454-add9-a8c32d82c7e6]
  ...

  The disk type is "file", which might explain why this volume is not
  skipped in the code snippet shown above. When we use the default
  Cinder backend, we get something such as:

  [disk element XML stripped by the mail archive; its <serial> is
  47ecc6a6-8af9-4011-a53f-14a71d14f50b]

  
  I think that the code in LibvirtNFSVolumeDriver.connect_volume() might
  be wrong: conf.source_type should be set to something other than "file"
  (and some other changes might be needed), but I must admit I'm not a
  libvirt expert.

  Any thoughts ?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356552/+subscriptions



[Yahoo-eng-team] [Bug 1332133] Re: Description is mandatory parameter when creating Security Group

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1332133

Title:
  Description is mandatory parameter when creating Security Group

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Steps to reproduce:
  1. Create security group.

  Actual result:
  Description is a mandatory parameter when creating a Security Group.

  Expected result:
  Description should not be a mandatory parameter when creating a
  Security Group.

  Explanation:
  1. The description is not mandatory information.
  2. It is inconsistent with other OpenStack items (no other item in
  OpenStack requires a mandatory description).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1332133/+subscriptions



[Yahoo-eng-team] [Bug 1360426] Re: bulk floating ip extension is missing instance_uuid and fixed_ip

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1360426

Title:
  bulk floating ip extension is missing instance_uuid and fixed_ip

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The bulk floating ip extension doesn't show instance_uuid and fixed_ip
  like the regular floating ip extension. The instance_uuid omission is
  due to a bug relating to object conversion, and fixed_ip was
  inadvertently left out.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1360426/+subscriptions



[Yahoo-eng-team] [Bug 1340411] Re: Evacuate Fails 'Invalid state of instance files' using Ceph Ephemeral RBD

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340411

Title:
  Evacuate Fails 'Invalid state of instance files' using Ceph Ephemeral
  RBD

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Greetings,

  
  We can't evacuate instances from a failed compute node using shared
  storage. We are using Ceph Ephemeral RBD as the storage medium.

  
  Steps to reproduce:

  nova evacuate --on-shared-storage 6e2081ec-2723-43c7-a730-488bb863674c node-24
  or
  POST  to http://ip-address:port/v2/tenant_id/servers/server_id/action with 
  {"evacuate":{"host":"node-24","onSharedStorage":1}}

  
  Here is what shows up in the logs:

  
  <180>Jul 10 20:36:48 node-24 nova-nova.compute.manager AUDIT: Rebuilding 
instance
  <179>Jul 10 20:36:48 node-24 nova-nova.compute.manager ERROR: Setting 
instance vm_state to ERROR
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5554, 
in _error_out_instance_on_exception
  yield
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2434, 
in rebuild_instance
  _("Invalid state of instance files on shared"
  InvalidSharedStorage: Invalid state of instance files on shared storage
  <179>Jul 10 20:36:49 node-24 nova-oslo.messaging.rpc.dispatcher ERROR: 
Exception during message handling: Invalid state of instance files on shared 
storage
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", 
line 133, in _dispatch_and_reply
  incoming.message))
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", 
line 176, in _dispatch
  return self._do_dispatch(endpoint, method, ctxt, args)
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", 
line 122, in _do_dispatch
  result = getattr(endpoint, method)(ctxt, **new_args)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 393, 
in decorated_function
  return function(self, context, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 
139, in inner
  return func(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in 
wrapped
  payload)
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 68, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in 
wrapped
  return f(self, context, *args, **kw)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 274, 
in decorated_function
  pass
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 68, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 260, 
in decorated_function
  return function(self, context, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 327, 
in decorated_function
  function(self, context, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 303, 
in decorated_function
  e, sys.exc_info())
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 68, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 290, 
in decorated_function
  return function(self, context, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2434, 
in rebuild_instance
  _("Invalid state of instance files on shared"
  InvalidSharedStorage: Invalid state of instance files on shared storage

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340411/+subscriptions



[Yahoo-eng-team] [Bug 1362733] Re: Rebuilding a node in ERROR state should set status to REBUILD

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362733

Title:
  Rebuilding a node in ERROR state should set status to REBUILD

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Won't Fix
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I recently had a few nova-driven ironic nodes fail to deploy, and
  resurrected them by issuing another nova rebuild.

  This worked quite nicely, but the Status stayed as ERROR, when I would
  have expected it to change back to REBUILD

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1362733/+subscriptions



[Yahoo-eng-team] [Bug 1307791] Re: Volumes still in use after deleting a shelved instance with user volumes

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1307791

Title:
  Volumes still in use after deleting a shelved instance with user
  volumes

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  After deleting a shelved instance with user volumes, the volumes should
  be detached, but they actually remain in the "in-use" state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1307791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356167] Re: add monitoring on resume_instance

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356167

Title:
  add monitoring on resume_instance

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The existing monitoring happens only at the end of resume_instance. We
  should add monitoring at both the beginning and the end, since that makes
  it much easier to diagnose problems.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356167/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350542] Re: resource tracker reports negative value for free hard disk space

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350542

Title:
  resource tracker reports negative value for free hard disk space

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When overcommitting on hard disk usage, the audit logs report negative
  amounts of free disk space. While technically correct, this may confuse
  users or make them think there is something wrong with the tracking.

  The patch to fix this will be in a similar vein to
  https://review.openstack.org/#/c/93261/
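The arithmetic is straightforward to sketch (illustrative numbers and a hypothetical helper only; the actual patch may report the value differently):

```python
def free_disk_gb(total_gb, allocated_gb, clamp=True):
    """With disk overcommit, allocated GB can exceed physical GB,
    so the raw difference goes negative unless clamped for display."""
    free = total_gb - allocated_gb
    return max(free, 0) if clamp else free

print(free_disk_gb(100, 130, clamp=False))  # -30: technically correct, confusing
print(free_disk_gb(100, 130))               # 0: friendlier for the audit log
```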

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1350542/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361487] Re: backup operation cannot be done in pause and suspend state

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361487

Title:
  backup operation cannot be done in pause and suspend state

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:

  jichen@cloudcontroller:~$ nova backup jitest3 jiback1 daily 2
  ERROR (Conflict): Cannot 'createBackup' while instance is in vm_state paused 
(HTTP 409) (Request-ID: req-7554dea8-92aa-480c-a1f4-e3d7e479c6b3)
  jichen@cloudcontroller:~$ nova list
  +--------------------------------------+---------+--------+------------+-------------+--------------------+
  | ID                                   | Name    | Status | Task State | Power State | Networks           |
  +--------------------------------------+---------+--------+------------+-------------+--------------------+
  | cb7c6742-7b7a-44de-ad5a-8570ee520f9e | jitest1 | ACTIVE | -          | Running     | private=10.0.0.2   |
  | 702d1d2b-f72d-4759-8f13-9ffbcc0ca934 | jitest3 | PAUSED | -          | Paused      | private=10.0.0.200 |
  +--------------------------------------+---------+--------+------------+-------------+--------------------+

  
  jichen@cloudcontroller:~$ nova image-create  --show jitest3 test3image1
  +-+--+
  | Property| Value|
  +-+--+
  | OS-EXT-IMG-SIZE:size| 0|
  | created | 2014-08-26T04:06:41Z |
  | id  | 96a5284c-5feb-4231-8b01-9a522a7c5aab |
  | metadata base_image_ref | 94e061fb-e628-4deb-901c-9d44c059ecd9 |
  | metadata clean_attempts | 2|
  | metadata image_type | snapshot |
  | metadata instance_type_ephemeral_gb | 0|
  | metadata instance_type_flavorid | 1|
  | metadata instance_type_id   | 2|
  | metadata instance_type_memory_mb| 512  |
  | metadata instance_type_name | m1.tiny  |
  | metadata instance_type_root_gb  | 1|
  | metadata instance_type_rxtx_factor  | 1.0  |
  | metadata instance_type_swap | 0|
  | metadata instance_type_vcpus| 1|
  | metadata instance_uuid  | 702d1d2b-f72d-4759-8f13-9ffbcc0ca934 |
  | metadata kernel_id  | 20be8b63-5a84-4440-a0bd-8f69898d5965 |
  | metadata ramdisk_id | 07f6f85f-c1dc-4790-98b5-14ab86f21b59 |
  | metadata user_id| 256dc6db4b5c45ae90fee8132cbaad7c |
  | minDisk | 1|
  | minRam  | 0|
  | name| test3image1  |
  | progress| 25   |
  | server  | 702d1d2b-f72d-4759-8f13-9ffbcc0ca934 |
  | status  | SAVING   |
  | updated | 2014-08-26T04:06:41Z |
  +-+--+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361487/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365579] Re: HTTP 500 is returned when using an invalid fixed ip to attach interface

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1365579

Title:
  HTTP 500 is returned when using an invalid fixed ip to attach
  interface

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When I post an 'attach interface' request to Nova with an invalid
  fixed ip, Nova returns an HTTP 500 error and a confusing error
  message.

  REQ: curl -i 
'http://10.90.10.24:8774/v2/19abae5746b242d489d1c2862b228d8b/servers/1b1618fa-ddbd-4fce-aa04-720a72ec7dfe/os-interface'
 -X POST -H "Accept: application/json" -H "Content-Type: application/json" -H 
"User-Agent: python-novaclient" -H "X-Auth-Project-Id: Public" -H 
"X-Auth-Token: {SHA1}7b9d24c40fa509ff9ae6950a201cb7f12b7da165" -d 
'{"interfaceAttachment": {"fixed_ips": [{"ip_address": "abcd"}], "net_id": 
"173854d5-333f-4c78-b5a5-10d2e9c8d827"}}'
  INFO (connectionpool:187) Starting new HTTP connection (1): 10.90.10.24
  DEBUG (connectionpool:357) "POST 
/v2/19abae5746b242d489d1c2862b228d8b/servers/1b1618fa-ddbd-4fce-aa04-720a72ec7dfe/os-interface
 HTTP/1.1" 500 128
  RESP: [500] {'date': 'Thu, 04 Sep 2014 16:06:49 GMT', 'content-length': 
'128', 'content-type': 'application/json; charset=UTF-8', 
'x-compute-request-id': 'req-7053d4e0-59df-46ca-9f55-63a2f1f2d412'}
  RESP BODY: {"computeFault": {"message": "The server has either erred or is 
incapable of performing the requested operation.", "code": 500}}

  In fact, Nova itself works perfectly well; the error is caused by my
  incorrect input. Nova should handle this incorrect input and return an
  HTTP 400 error in order to inform the user to correct the request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1365579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1343924] Re: Fail to create Vm by image when with a volume which name is 'vda'

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1343924

Title:
  Fail to create Vm by image  when   with a volume which name is 'vda'

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I use  image to create VM and with a volume which name is 'vda', I
  find that the VM was created use  the volume not use the image, so I
  think it  need to determine the volume name can't for the 'vda'.

  nova boot --flavor m1.tiny --image 5575e1ee-734c-
  4eb4-a2a8-bb3ac29f338b  --nic net-id=08df3f49-0c03-44b7-b20e-
  36391923b415 --block-device  'source=volume,id=5d6ce95f-da32-4c36
  -a1bb-dc074501ed96,dest=volume,device=/dev/vda'  instance_test

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1343924/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361419] Re: Hyper-V driver should provide a more detailed exception in case block storage volumes cannot be mounted due to a invalid SAN policy

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361419

Title:
  Hyper-V driver should provide a more detailed exception in case block
  storage volumes cannot be mounted due to a invalid SAN policy

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  On some editions of Windows Server / Hyper-V server the SAN policy is
  set by default to Online All, bringing online any disk, local or
  shared, attached to the host.

  Since only offline disks can be attached as passthrough disks to a
  Hyper-V VM, this prevents Cinder volumes from being attached to
  instances, resulting in an exception:

  NotFound: Unable to find a mounted disk for target_iqn:
  iqn.2010-10.org.openstack:volume-d8904a90-d189-4fc8-a7b4-4fcdc7309166

  Since this can be an issue not easy to troubleshoot without knowing
  the specific context, it'd be useful to include a reference to the SAN
  policy in the exception message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1343613] Re: Deadlock found when trying to get lock; try restarting transaction

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1343613

Title:
  Deadlock found when trying to get lock; try restarting transaction

Status in OpenStack Compute (Nova):
  Fix Released
Status in Tempest:
  Incomplete

Bug description:
  Example URL:
  
http://logs.openstack.org/31/107131/1/gate/gate-grenade-dsvm/d019d8e/logs/old/screen-n-api.txt.gz?level=ERROR#_2014-07-17_20_59_37_031

  Logstash query(?):
  message:"Deadlock found when trying to get lock; try restarting transaction" 
AND loglevel:"ERROR" AND build_status:"FAILURE"

  32 hits in 48 hours.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1343613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365228] Re: Rename cli variable in ironic driver

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1365228

Title:
  Rename cli variable in ironic driver

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In nova/virt/ironic/driver.py there is the IronicDriver class.  It
  abbreviates references to the ironicclient as 'icli'.  This should be
  unabbreviated to make the code clearer.

  This came up as part of
  https://review.openstack.org/#/c/111425/19/nova/virt/ironic/driver.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1365228/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397247] Re: test_notifications.py:test_send_on_vm_change

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1397247

Title:
  test_notifications.py:test_send_on_vm_change

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In nova.tests.unit.test_notifications.py,
  "test_send_on_vm_change" is just the same as "test_send_task_change";
  we should change "test_send_on_vm_change":
  self.instance.task_state=task_state.SPAWING
  ==>
  self.instance.vm_state=vm_state.SUSPENDING

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1397247/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1178541] Re: Inter cell communication doesn't support multiple rabbit servers / HA

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1178541

Title:
  Inter cell communication doesn't support multiple rabbit servers / HA

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When nova-cells talks to other cells' rabbit servers, there is no way to
  specify multiple servers or to use HA / mirrored queues with rabbit.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1178541/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393135] Re: Developer Docs: devref/rpc spelling mistakes

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1393135

Title:
  Developer Docs: devref/rpc spelling mistakes

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  While going through the developer reference of openstack-nova, I found 
spelling mistakes in the "AMQP and Nova" section.
  rpc is mis-spelled as rcp.call, rp.call, rp.cast in the section.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1393135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1324041] Re: nova-compute cannot restart if _init_instance failed

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1324041

Title:
  nova-compute cannot restart if _init_instance failed

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In my openstack, because of the interruption of power supply, my
  compute nodes crash . Then , i  start my compute nodes, and the start
  the nova-compute service. Unfortunately , i cannot start nova-compute
  service. I checked the compute.log , found something error like
  follows:

  2014-05-28 16:21:12.558 2724 DEBUG nova.compute.manager [-] [instance: 
ac57aab0-1864-4335-aa4a-bbfcc75a9624] Checking state _get_power_state 
/usr/lib/python2.6/site-packages/nova/compute/manager.py:1043
  2014-05-28 16:21:12.563 2724 DEBUG nova.compute.manager [-] [instance: 
ac57aab0-1864-4335-aa4a-bbfcc75a9624] Checking state _get_power_state 
/usr/lib/python2.6/site-packages/nova/compute/manager.py:1043
  2014-05-28 16:21:12.567 2724 DEBUG nova.virt.libvirt.vif [-] vif_type=bridge 
instance= 
vif=VIF({'ovs_interfaceid': None, 'network': Network({'bridge': 
u'brqf29d33d2-7c', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 
4, 'type': u'fixed', 'floating_ips': [IP({'meta': {}, 'version': 4, 'type': 
u'floating', 'address': u'10.0.0.101'})], 'address': u'192.168.0.2'})], 
'version': 4, 'meta': {u'dhcp_server': u'192.168.0.3'}, 'dns': [], 'routes': 
[], 'cidr': u'192.168.0.0/24', 'gateway': IP({'meta': {}, 'version': 4, 'type': 
u'gateway', 'address': u'192.168.0.1'})})], 'meta': {u'injected': False, 
u'tenant_id': u'5d56667c799c46ef81b87455445af457', u'should_create_bridge': 
True}, 'id': u'f29d33d2-7c70-456a-96b0-03a59fe0b40f', 'label': u'admin_net'}), 
'devname': u'tap0780a643-9a', 'qbh_params': None, 'meta': {}, 'details': 
{u'port_filter': True}, 'address': u'fa:16:3e:dc:23:66', 'active': True, 
'type': u'bridge', 'id': u'0780a643
 -9ad4-4388-a51d-3456a1e88ae6', 'qbg_params': None}) plug 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/vif.py:592
  2014-05-28 16:21:12.568 2724 DEBUG nova.virt.libvirt.vif [-] [instance: 
ac57aab0-1864-4335-aa4a-bbfcc75a9624] Ensuring bridge brqf29d33d2-7c 
plug_bridge /usr/lib/python2.6/site-packages/nova/virt/libvirt/vif.py:408
  2014-05-28 16:21:12.568 2724 DEBUG nova.openstack.common.lockutils [-] Got 
semaphore "lock_bridge" lock 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:168
  2014-05-28 16:21:12.569 2724 DEBUG nova.openstack.common.lockutils [-] 
Attempting to grab file lock "lock_bridge" lock 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:178
  2014-05-28 16:21:12.569 2724 DEBUG nova.openstack.common.lockutils [-] Got 
file lock "lock_bridge" at /var/lib/nova/tmp/nova-lock_bridge lock 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:206
  2014-05-28 16:21:12.569 2724 DEBUG nova.openstack.common.lockutils [-] Got 
semaphore / lock "ensure_bridge" inner 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:248
  2014-05-28 16:21:12.570 2724 DEBUG nova.openstack.common.lockutils [-] 
Released file lock "lock_bridge" at /var/lib/nova/tmp/nova-lock_bridge lock 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:210
  2014-05-28 16:21:12.570 2724 DEBUG nova.openstack.common.lockutils [-] 
Semaphore / lock released "ensure_bridge" inner 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:252
  2014-05-28 16:21:12.570 2724 DEBUG nova.compute.manager [-] [instance: 
ac57aab0-1864-4335-aa4a-bbfcc75a9624] Checking state _get_power_state 
/usr/lib/python2.6/site-packages/nova/compute/manager.py:1043
  2014-05-28 16:21:12.575 2724 DEBUG nova.compute.manager [-] [instance: 
ac57aab0-1864-4335-aa4a-bbfcc75a9624] Current state is 4, state in DB is 1. 
_init_instance /usr/lib/python2.6/site-packages/nova/compute/manager.py:920
  2014-05-28 16:21:12.575 2724 DEBUG nova.compute.manager [-] [instance: 
8047e688-d189-4d35-a9c8-634f34cdda86] Checking state _get_power_state 
/usr/lib/python2.6/site-packages/nova/compute/manager.py:1043
  2014-05-28 16:21:12.579 2724 DEBUG nova.compute.manager [-] [instance: 
8047e688-d189-4d35-a9c8-634f34cdda86] Checking state _get_power_state 
/usr/lib/python2.6/site-packages/nova/compute/manager.py:1043
  2014-05-28 16:21:12.584 2724 DEBUG nova.virt.libvirt.vif [-] 
vif_type=binding_failed instance= vif=VIF({'ovs_interfaceid': None, 'network': Network({'bridge': 
None, 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': 
u'fixed', 'floating_ips': [IP({'meta': {}, 'version': 4, 'type': u'floating', 
'address': u'10.0.0.112'})], 'address': u'172.16.0.180'})], 'version': 4, 
'meta': {u'dhcp_server': u'172.16.0.3'}, 'dns': [], 'routes': [], 'cidr': 
u'172.16.0.0/24', 'gateway': IP({'meta': {}, '

[Yahoo-eng-team] [Bug 1394052] Re: Fix exception handling in _get_host_metrics()

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1394052

Title:
  Fix exception handling in _get_host_metrics()

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  In resource_tracker.py, the exception path of _get_host_metrics()
  contains a wrong variable name.

  for monitor in self.monitors:
  try:
  metrics += monitor.get_metrics(nodename=nodename)
  except Exception:
  LOG.warn(_("Cannot get the metrics from %s."), monitors)   
<-- Need to change 'monitors' to 'monitor'
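A minimal sketch of the corrected loop (the monitor classes are hypothetical stand-ins; only the variable name passed to the log call changes):

```python
import logging

LOG = logging.getLogger(__name__)

def get_host_metrics(monitors, nodename):
    metrics = []
    for monitor in monitors:
        try:
            metrics += monitor.get_metrics(nodename=nodename)
        except Exception:
            # Log the single failing monitor, not the whole list
            LOG.warning("Cannot get the metrics from %s.", monitor)
    return metrics

class FakeMonitor(object):
    def get_metrics(self, nodename):
        return [('cpu.percent', 10)]

class BrokenMonitor(object):
    def get_metrics(self, nodename):
        raise RuntimeError('boom')

# The broken monitor is logged and skipped; the good one still contributes.
print(get_host_metrics([FakeMonitor(), BrokenMonitor()], 'node1'))
```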

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1394052/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377647] Re: Network: neutron allocate network creates neutron client a number of times

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1377647

Title:
  Network: neutron allocate network creates neutron client a number of
  times

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The method allocate_for_instance creates a neutron client a number of
  times. This only needs to be done twice: once for the tenant and once
  for the admin (the latter in the event that the port bindings need to
  be configured).
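One way to sketch the intended behavior (a hypothetical wrapper, not the actual fix): cache the tenant and admin clients so repeated lookups during one allocation reuse them:

```python
class NeutronAllocator(object):
    """Sketch: create at most two clients (tenant + admin) per allocation."""

    def __init__(self, client_factory):
        self._factory = client_factory
        self._cache = {}

    def get_client(self, admin=False):
        # Build each flavor of client once, then hand back the cached one
        if admin not in self._cache:
            self._cache[admin] = self._factory(admin=admin)
        return self._cache[admin]

calls = []
alloc = NeutronAllocator(lambda admin: calls.append(admin) or object())
for _ in range(5):
    alloc.get_client()
    alloc.get_client(admin=True)
print(len(calls))  # 2: one tenant client, one admin client
```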

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1377647/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1389933] Re: cell create api failed with string number

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1389933

Title:
  cell create api failed with string number

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When making a request like the following:
  curl -i 
'http://cloudcontroller:8774/v2/04e2ab93c10a4c2dbef1c648d04567cc/os-cells' -X 
POST -H "Accept: application/json" -H "Content-Type: application/json" -H 
"User-Agent: python-novaclient" -H "X-Auth-Project-Id: admin" -H "X-Auth-Token: 
016d26c590ab4a0b91de718d01d7a649" -d '{"cell": {"name": "abc", "rpc_port": 
"123"}}'

  The following error is returned:
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi Traceback (most recent 
call last):
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 950, in _process_stack
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi action_result = 
self.dispatch(meth, request, action_args)
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 1034, in dispatch
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi return 
method(req=request, **action_args)
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi   File 
"/opt/stack/nova/nova/api/openstack/compute/contrib/cells.py", line 360, in 
create
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi 
self._normalize_cell(cell)
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi   File 
"/opt/stack/nova/nova/api/openstack/compute/contrib/cells.py", line 340, in 
_normalize_cell
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi 
cell['transport_url'] = str(transport_url)
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 318, 
in __str__
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi netloc += ':%d' % 
port
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi TypeError: %d format: a 
number is required, not unicode
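The root cause is reproducible outside Nova: Python's %d conversion rejects string values, so the rpc_port taken from the JSON payload has to be coerced to an integer before formatting (build_netloc is a hypothetical stand-in for the formatting in oslo.messaging's transport.py, not the actual fix):

```python
def build_netloc(host, port):
    # '%d' requires a real integer; a unicode/str port raises TypeError
    return '%s:%d' % (host, int(port))

# What transport.py effectively did with the unicode "123" from the JSON body:
try:
    '%s:%d' % ('rabbit', u'123')
except TypeError as exc:
    print(exc)  # "%d format: a number is required, not str" (unicode on py2)

print(build_netloc('rabbit', u'123'))  # coercing first gives 'rabbit:123'
```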

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1389933/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1389850] Re: libvirt: Custom disk_bus setting is being lost when migration is reverted

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1389850

Title:
  libvirt: Custom disk_bus setting is being lost when migration is
  reverted

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When a migration is reverted on a host, the custom disk_bus setting is
  lost.

  finish_revert_migration() should use image_meta, if it exists, when
  constructing the disk_info.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1389850/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378088] Re: nova/tests/virt/vmwareapi/test_vmops:test_spawn_mask_block_device_info_password doesn't correctly assert password is scrubbed

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378088

Title:
  
nova/tests/virt/vmwareapi/test_vmops:test_spawn_mask_block_device_info_password
  doesn't correctly assert password is scrubbed

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  While looking at some new code, I noticed this test has a bug.

  It's easy to reproduce, just remove the call to logging.mask_password
  (but keep the LOG.debug) in nova/virt/vmwareapi/vmops.py:spawn. The
  test will still pass.

  The reason is because failed assertions raise exceptions that are a
  subclass of Exception.

  The test catches anything derived from Exception and silently ignores
  them, including any failed assertions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378088/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397381] Re: numa cell ids need to be normalized before creating xml

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1397381

Title:
  numa cell ids need to be normalized before creating  xml

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When creating an instance with a NUMA topology, if that instance is
  placed on a host NUMA node other than 0, invalid libvirt.xml will be
  generated. This is because the instance cell ids are used to store the
  host id assignment. Instance cell ids should be normalized before
  generating libvirt.xml.
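The normalization described above can be sketched as follows (hypothetical data shapes, not the actual Nova code): map whatever host NUMA node ids the instance landed on to a dense 0..N-1 guest numbering before emitting the XML:

```python
def normalize_cell_ids(host_cell_ids):
    """Map arbitrary host node ids (e.g. [2, 3]) to guest ids [0, 1]."""
    return {host_id: guest_id
            for guest_id, host_id in enumerate(sorted(host_cell_ids))}

# An instance pinned to host nodes 2 and 3 must still produce
# guest cell ids 0 and 1 in the generated libvirt.xml.
print(normalize_cell_ids([2, 3]))  # {2: 0, 3: 1}
```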

  
  2014-11-28 16:35:55.195 ERROR nova.virt.libvirt.driver [-] Error defining a 
domain with XML: [the quoted domain XML for instance 
b90dbaea-ed6f-49fa-b3bf-75137005c621 (instance-0005, display name "vm1", 
2097152 KiB memory, 4 vCPUs, Nova metadata block) lost its markup in the list 
archive and is not reproduced here]
  2014-11-28 16:35:55.195 ERROR nova.compute.manager [-] [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621] Instance failed to spawn
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621] Traceback (most recent call last):
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621]   File 
"/shared/stack/nova/nova/compute/manager.py", line 2246, in _build_resources
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621] yield resources
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621]   File 
"/shared/stack/nova/nova/compute/manager.py", line 2116, in 
_build_and_run_instance
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621] instance_type=instance_type)
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621]   File 
"/shared/stack/nova/nova/virt/libvirt/driver.py", line 2641, in spawn
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621] block_device_info, 
disk_info=disk_info)
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621]   File 
"/shared/stack/nova/nova/virt/libvirt/driver.py", line 4490, in 
_create_domain_and_network
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621] power_on=power_on)
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621]   File 
"/shared/stack/nova/nova/virt/libvirt/driver.py", line 4423, in _create_domain
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621] LOG.error(err)
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621]   File 
"/usr/lib/python2.7/site-packages/oslo/utils/excutils.py", line 82, in __exit__
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621] six.reraise(self.type_, self.value, 
self.tb)
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621]   File 
"/shared/stack/nova/nova/virt/libvirt/driver.py", line 4407, in _create_domain
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621] domain = self._conn.defineXML(xml)
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621]   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005c621]   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
  2014-11-28 16:35:55.195 TRACE nova.compute.manager [instance: 
b90dbaea-ed6f-49fa-b3bf-75137005

[Yahoo-eng-team] [Bug 1388386] Re: libvirt: boot instance with utf-8 name results in UnicodeDecodeError

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1388386

Title:
  libvirt: boot instance with utf-8 name results in UnicodeDecodeError

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  With the libvirt driver and Juno 2014.2 code, try to boot a server via
  Horizon with name "ABC一丁七ÇàâアイウДфэبتثअइउ€¥噂ソ十豹竹敷" results in:

  http://paste.openstack.org/show/128060/

  This is new in Juno but was a latent issue since Icehouse, the Juno
  change was:

  
https://github.com/openstack/nova/commit/60c90f73261efb8c73ecc02152307c81265cab13

  The err variable is an i18n Message object and when we try to put the
  domain.XMLDesc(0) into the unicode _LE message object string it blows
  up in oslo.i18n because the encoding doesn't match.

  The fix is to wrap domain.XMLDesc(0) in
  oslo.utils.encodeutils.safe_decode.
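
  The shape of the fix can be sketched in plain Python. The
  safe_decode below is a simplified stand-in for the oslo.utils helper,
  not its real implementation; the XML string is illustrative.

  ```python
  # Simplified stand-in for oslo.utils.encodeutils.safe_decode:
  # return unicode text, decoding bytes with the given encoding.
  def safe_decode(text, incoming='utf-8', errors='strict'):
      if isinstance(text, str):          # already unicode on Python 3
          return text
      return text.decode(incoming, errors)

  # Interpolating raw UTF-8 bytes into a unicode message string is what
  # triggered the UnicodeDecodeError; decoding first keeps it safe.
  raw_xml = u'<name>ABC\u4e00\u4e01</name>'.encode('utf-8')
  message = u'Error defining a domain with XML: %s' % safe_decode(raw_xml)
  ```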

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1388386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1390906] Re: VMware: reading a file fails when using IPv6

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1390906

Title:
  VMware: reading a file fails when using IPv6

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Need to protect the host name with '[' and ']' before  we create a
  http/https connection
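
  The bracketing described above can be sketched with a hypothetical
  helper (the function name is illustrative, not the driver's real
  code):

  ```python
  # Wrap a literal IPv6 address in brackets before building a URL,
  # so its colons are not parsed as a host:port separator.
  def format_host(host):
      if ':' in host and not host.startswith('['):
          return '[%s]' % host
      return host
  ```

  For example, format_host('fd00::1') yields '[fd00::1]' while an IPv4
  address passes through unchanged.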

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1390906/+subscriptions



[Yahoo-eng-team] [Bug 1392426] Re: objects.BandwidthUsage.create does not honor the update_cells argument

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1392426

Title:
  objects.BandwidthUsage.create does not honor the update_cells argument

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  objects.BandwidthUsage.create() has replaced
  conductor_api.bw_usage_update() which took an argument 'update_cells'.
  This changes the behavior of _poll_bandwidth_usage with cells so it
  should be added to create().

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1392426/+subscriptions



[Yahoo-eng-team] [Bug 1378389] Re: os-interface:show will not handle PortNotFoundClient exception from neutron

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378389

Title:
  os-interface:show will not handle PortNotFoundClient exception from
  neutron

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The os-interface:show method in the v2/v3 compute API is catching a
  NotFound(NovaException):

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/contrib/attach_interfaces.py?id=2014.2.rc1#n67

  But when using the neutronv2 API, a port not found raises a
  PortNotFoundClient(NeutronClientException), which won't be handled by
  the NotFound(NovaException) handler in the compute API since it's not
  the same type of exception.

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/network/neutronv2/api.py?id=2014.2.rc1#n584

  This bug has two parts:

  1. The neutronv2 API show_port method needs to return nova exceptions,
  not neutron client exceptions.

  2. The os-interfaces:show v2/v3 APIs need to handle the exceptions
  (404 is handled, but neutron can also raise Forbidden/Unauthorized
  which the compute API isn't handling).
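
  Part 1 amounts to translating client-library exceptions into the
  service's own exception types at the API boundary. A minimal sketch,
  with stand-in exception classes (not the real nova/neutron ones):

  ```python
  class PortNotFoundClient(Exception):      # neutronclient-style 404
      pass

  class PortNotFound(Exception):            # nova-style exception
      def __init__(self, port_id):
          self.port_id = port_id
          super().__init__('Port %s could not be found.' % port_id)

  def show_port(port_id):
      # Translate the client exception so callers only ever see
      # the service's own exception hierarchy.
      try:
          raise PortNotFoundClient(port_id)  # simulate a neutron 404
      except PortNotFoundClient:
          raise PortNotFound(port_id=port_id)
  ```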

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378389/+subscriptions



[Yahoo-eng-team] [Bug 1381414] Re: Unit test failure "AssertionError: Expected to be called once. Called 2 times." in test_get_port_vnic_info_3

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381414

Title:
  Unit test failure "AssertionError: Expected to be called once. Called
  2 times." in test_get_port_vnic_info_3

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  This looks to be due to tests test_get_port_vnic_info_2 and 3 sharing
  some code and is easily reproduced by running these two tests alone
  with no concurrency.

  ./run_tests.sh --concurrency 1 test_get_port_vnic_info_2
  test_get_port_vnic_info_3

  The above always results in:

  Traceback (most recent call last):
File "/home/hans/nova/nova/tests/network/test_neutronv2.py", line 2615, in 
test_get_port_vnic_info_3
  self._test_get_port_vnic_info()
File "/home/hans/nova/.venv/local/lib/python2.7/site-packages/mock.py", 
line 1201, in patched
  return func(*args, **keywargs)
File "/home/hans/nova/nova/tests/network/test_neutronv2.py", line 2607, in 
_test_get_port_vnic_info
  fields=['binding:vnic_type', 'network_id'])
File "/home/hans/nova/.venv/local/lib/python2.7/site-packages/mock.py", 
line 845, in assert_called_once_with
  raise AssertionError(msg)
  AssertionError: Expected to be called once. Called 2 times.
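
  The failure mode can be reproduced in miniature: two tests share a
  helper that calls the same mock, and without resetting the mock
  between them, assert_called_once_with sees both calls. The helper
  name below mirrors the test code but the setup is illustrative.

  ```python
  from unittest import mock

  client = mock.Mock()

  def _test_get_port_vnic_info():        # shared helper (hypothetical)
      client.show_port('port-id', fields=['binding:vnic_type'])

  _test_get_port_vnic_info()             # first test runs the helper
  client.reset_mock()                    # the step that avoids leakage
  _test_get_port_vnic_info()             # second test runs it again
  client.show_port.assert_called_once_with(
      'port-id', fields=['binding:vnic_type'])
  ```

  Without the reset_mock() call, the final assertion raises
  "Expected to be called once. Called 2 times."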

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1381414/+subscriptions



[Yahoo-eng-team] [Bug 1369605] Re: nova.db.sqlalchemy.api.quota_reserve is not very unit test-able

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369605

Title:
  nova.db.sqlalchemy.api.quota_reserve is not very unit test-able

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The method is huge and has lots of conditional blocks.

  We should break the big conditional blocks out into private methods so
  the top-level quota_reserve logic can be unit tested on its own.

  This became an issue in this review for a separate bug fix in the
  logic:

  https://review.openstack.org/#/c/121259/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369605/+subscriptions



[Yahoo-eng-team] [Bug 1369696] Re: quota_reserve headroom might be wrong if project_quotas != user_quotas

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369696

Title:
  quota_reserve headroom might be wrong if project_quotas != user_quotas

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Looking at this code:

  
https://github.com/openstack/nova/blob/2014.2.b3/nova/db/sqlalchemy/api.py#L3343

  Notice the headroom variable is created based on usages and
  user_quotas:

  headroom = dict((res, user_quotas[res] -
                   (usages[res]['in_use'] + usages[res]['reserved']))
                  for res in user_quotas.keys())

  but the usages variable is based on whether or not project_quotas ==
  user_quotas:

  if project_quotas == user_quotas:
      usages = project_usages
  else:
      usages = user_usages

  So it appears that headroom could be incorrect if project_quotas !=
  user_quotas.

  Looking at what uses headroom, the compute API uses this in the
  instance create and resize flows.  For resize it's just using headroom
  to plug into an error message for the TooManyInstances exception.

  In the create flow (_check_num_instances_quota) it's used for a bit
  more advanced logic with recursion.

  We should probably just remove the headroom calculation from
  quota_reserve and let the caller figure it out and what needs to be
  done with it.  It's also odd that this is happening in the DB API
  because it's dealing with instance quotas but maybe I'm not doing
  anything with instance quotas, maybe I'm doing things with security
  group or fixed IP quotas - so this code seems to be in the wrong
  place.  Maybe it's just conveniently placed here given the other data
  already in scope from the database.
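
  The headroom computation itself is straightforward; a minimal sketch
  with plain dicts standing in for the database rows (values are
  illustrative):

  ```python
  # Headroom = quota limit minus (in-use + reserved) per resource.
  # The bug is about *which* usages dict is paired with user_quotas,
  # not the arithmetic itself.
  user_quotas = {'instances': 10}
  usages = {'instances': {'in_use': 3, 'reserved': 2}}

  headroom = dict((res, user_quotas[res] -
                   (usages[res]['in_use'] + usages[res]['reserved']))
                  for res in user_quotas)
  ```

  Here headroom['instances'] is 5; pairing user_quotas with
  project-level usages instead would silently skew that number.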

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369696/+subscriptions



[Yahoo-eng-team] [Bug 1370019] Re: unshelve and resize instance unnecessarily logs ‘image not found’ error/warning messages

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370019

Title:
  unshelve and resize instance unnecessarily logs ‘image not found’
  error/warning messages

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Unshelve and resize of an instance created from a bootable volume
  unnecessarily log ‘image not found’ error/warning messages.

  In both cases, the following misleading error/warning messages are
  logged in compute.log when image_id_or_uri is passed as None to the
  nova/compute/utils get_image_metadata method.

  14-09-05 03:41:54.834 ERROR glanceclient.common.http 
[req-80c9db2e-cc3d-481c-a5a3-babd917a3698 admin admin] Request returned failure 
status 404.
  14-09-05 03:41:54.834 WARNING nova.compute.utils 
[req-80c9db2e-cc3d-481c-a5a3-babd917a3698 admin admin]  [instance: 
d5b137ab-19a1-484a-a828-6a229ec66950] Can't access image : Image  could not be 
found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370019/+subscriptions



[Yahoo-eng-team] [Bug 1370999] Re: xenapi: windows agent unreliable due to reboots

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370999

Title:
  xenapi: windows agent unreliable due to reboots

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The windows nova-agent can now trigger a guest reboot during
  resetnetwork, so the hostname is correctly updated.

  Also, there was always a reboot during the first stages of polling
  for the agent version, which can force us to wait for a call to time
  out rather than detecting the reboot.

  Either way, we need to take more care to detect reboots while talking
  to the agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370999/+subscriptions



[Yahoo-eng-team] [Bug 1370613] Re: InvalidHypervisorVirtType: Hypervisor virtualization type 'powervm' is not recognised

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370613

Title:
  InvalidHypervisorVirtType: Hypervisor virtualization type 'powervm' is
  not recognised

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in IBM PowerVC Driver for OpenStack:
  In Progress

Bug description:
  With these changes we have a list of known hypervisor types for
  scheduling:

  https://review.openstack.org/#/c/109591/
  https://review.openstack.org/#/c/109592/

  There is a powervc driver in stackforge (basically the replacement for
  the old powervm driver) which has a hypervisor type of 'powervm' and
  trying to boot anything against that fails in scheduling since the
  type is unknown.

  http://git.openstack.org/cgit/stackforge/powervc-driver/

  Seems like adding powervm to the list shouldn't be an issue given
  other things in that list like bhyve and phyp.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370613/+subscriptions



[Yahoo-eng-team] [Bug 1370348] Re: Using macvtap vnic_type is not working with vif_type=hw_veb

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370348

Title:
  Using macvtap vnic_type is not working with vif_type=hw_veb

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When trying to boot an instance with a port using vnic_type=macvtap
  and vif_type=hw_veb I get this error in Compute log:

  TRACE nova.compute.manager libvirtError: unsupported configuration:
  an interface of type 'direct' is requesting a vlan tag, but that is
  not supported for this type of connection

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370348/+subscriptions



[Yahoo-eng-team] [Bug 1370184] Re: Ironic driver states file out-of-date

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370184

Title:
  Ironic driver states file out-of-date

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The current ironic states file, nova/virt/ironic/ironic_states.py, is
  out-of-date, and was recently updated in ironic with this change:

  https://review.openstack.org/118467

  Ideally, we should keep these in sync to prevent confusion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370184/+subscriptions



[Yahoo-eng-team] [Bug 1370590] Re: Libvirt _create_domain_and_network calls missing disk_info

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370590

Title:
  Libvirt _create_domain_and_network calls missing disk_info

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When boot from block/volume was started for libvirt-lxc, a check was
  added to _create_domain_setup_lxc that uses disk_info to determine
  whether or not the instance was booted from block. While the spawn
  call provides disk_info via a kwarg to _create_domain_and_network,
  many other operations leave that information off. These calls now need
  to provide disk_info to _create_domain_and_network so that it is
  available to determine whether or not the instance was booted from
  block/volume. Without that data, the check will erroneously determine
  that the instance was booted from a volume.

  Steps to reproduce:
  1) Create devstack with local.conf: 
https://gist.github.com/ramielrowe/520b0b86a5adf385b45d
  2) Boot instance from image
  3) Stop the instance
  4) Start the instance
  5) Observe exception in nova-compute logs: 
https://gist.github.com/ramielrowe/5cc2cb372fd019ee8331

  In the stack trace you can see Nova has attempted to set up the
  instance as if it was booted from volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370590/+subscriptions



[Yahoo-eng-team] [Bug 1373239] Re: libvirt: custom video RAM setting in MB but libvirt xml is in blocks of 1024 bytes

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373239

Title:
  libvirt: custom video RAM setting in MB but libvirt xml is in blocks
  of 1024 bytes

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When using this feature
  https://review.openstack.org/#/c/63472/
  libvirt: Enable custom video RAM setting

  I am setting image meta of image

  metadata hw_video_model  vga
  metadata hw_video_ram   64 

  As stated "hw_video_ram can be provided via the image properties in Glance.
  The value should be provided in MB."
  Value should be in MB

  also setting flavor-key to
  extra_specs| {"hw_video:ram_max_mb": "65"} 
  Also in MB

  Booting an instance the xml of the instance ends up like:

  [the <video>/<model> element of the instance XML was stripped from
  this message during archiving; it carried the raw value vram='64'
  copied straight from hw_video_ram]

  But from
  http://libvirt.org/formatdomain.html#elementsVideo
  "You can also provide the amount of video memory in kibibytes (blocks of 1024 
bytes) using vram and the number of screen with heads."

  So this will not give me video RAM of 64 MB but of 64 KiB.

  So if hw_video_ram should be in MB, a conversion to blocks of 1024
  bytes should be done somewhere before the XML is output.

  Either in 
  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py
  Or in 
  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/config.py
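
  The missing conversion is a single multiplication; a sketch with an
  illustrative helper name (not nova's actual function):

  ```python
  # hw_video_ram arrives in MiB; libvirt's vram attribute expects KiB
  # (blocks of 1024 bytes), so multiply by 1024 before emitting XML.
  def video_ram_mb_to_kib(ram_mb):
      return ram_mb * 1024
  ```

  With this in place, hw_video_ram=64 would become vram='65536', i.e.
  64 MiB rather than 64 KiB.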

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373239/+subscriptions



[Yahoo-eng-team] [Bug 1372815] Re: Hyper-V tries to logout from iSCSI targets in use

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1372815

Title:
  Hyper-V tries to logout from iSCSI targets in use

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When a volume is detached from a VM on Hyper-V, the Nova driver tries
  to disconnect from the iSCSI target, even if it's in use.

  
https://github.com/openstack/nova/blob/master/nova/virt/hyperv/volumeops.py#L194

  This makes sense when a volume (LUN) is associated with only one
  iSCSI target, but this isn't always the case: a single iSCSI target
  can export more than one LUN to the hypervisor.

  We could provide a general solution that will only disconnect from the
  iSCSI target when there aren't more disks exposed to the hypervisor.
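
  The proposed guard reduces to a membership check; a minimal sketch
  with hypothetical data structures (the real driver tracks sessions
  differently):

  ```python
  # attached_disks: list of (target_iqn, lun) pairs still in use by
  # the hypervisor. Only log out when no remaining disk uses target.
  def should_logout(target_iqn, attached_disks):
      return not any(t == target_iqn for t, _lun in attached_disks)
  ```

  So detaching one LUN of a target that still exports another keeps the
  session alive, while detaching the last LUN permits the logout.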

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1372815/+subscriptions



[Yahoo-eng-team] [Bug 1373238] Re: load extension lead to error calling 'volumes': 'NoneType' object has no attribute 'controller' for v3 api

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373238

Title:
  load extension lead to error calling 'volumes': 'NoneType' object has
  no attribute 'controller' for v3 api

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Adding a new plugin leads to the following error. The root cause is
  that 'servers' was loaded after 'volumes', so inherits is None when
  inherits.controller is read:

  if resource.inherits:
      inherits = self.resources.get(resource.inherits)
      if not resource.controller:
          resource.controller = inherits.controller

  
  ERROR [stevedore.extension] error calling 'volumes': 'NoneType' object has no 
attribute 'controller'
  ERROR [stevedore.extension] 'NoneType' object has no attribute 'controller'
  Traceback (most recent call last):
File 
"/home/jichen/git/nova/.venv/local/lib/python2.7/site-packages/stevedore/extension.py",
 line 248, in _invoke_one_plugin
  response_callback(func(e, *args, **kwds))
File "/home/jichen/git/nova/nova/api/openstack/__init__.py", line 376, in 
_register_resources
  resource.controller = inherits.controller
  AttributeError: 'NoneType' object has no attribute 'controller'
  DEBUG [nova.api.openstack] Running _register_resources on 


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373238/+subscriptions



[Yahoo-eng-team] [Bug 1372672] Re: VMware: 'NoneType' object has no attribute 'keys' in the driver

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1372672

Title:
  VMware: 'NoneType' object has no attribute 'keys' in the driver

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  There are a couple of places in the driver where we use the keys()
  method without checking for None.

  I have seen several times the following exception:
  2014-09-22 11:45:07.312 ERROR nova.openstack.common.periodic_task [-] Error 
during ComputeManager.update_available_resource: 'NoneType' object has no 
attribute 'keys'
  2014-09-22 11:45:07.312 TRACE nova.openstack.common.periodic_task Traceback 
(most recent call last):
  2014-09-22 11:45:07.312 TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/nova/nova/openstack/common/periodic_task.py", line 198, in 
run_periodic_tasks
  2014-09-22 11:45:07.312 TRACE nova.openstack.common.periodic_task 
task(self, context)
  2014-09-22 11:45:07.312 TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/nova/nova/compute/manager.py", line 5909, in 
update_available_resource
  2014-09-22 11:45:07.312 TRACE nova.openstack.common.periodic_task 
nodenames = set(self.driver.get_available_nodes())
  2014-09-22 11:45:07.312 TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 426, in 
get_available_nodes
  2014-09-22 11:45:07.312 TRACE nova.openstack.common.periodic_task 
self._update_resources()
  2014-09-22 11:45:07.312 TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 306, in _update_resources
  2014-09-22 11:45:07.312 TRACE nova.openstack.common.periodic_task 
added_nodes = set(self.dict_mors.keys()) - set(self._resource_keys)
  2014-09-22 11:45:07.312 TRACE nova.openstack.common.periodic_task 
AttributeError: 'NoneType' object has no attribute 'keys'
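
  The fix pattern is simply to treat a None mapping as empty before
  calling .keys(). A sketch using the attribute names from the
  traceback as plain variables:

  ```python
  # dict_mors may legitimately be None before the first inventory
  # update; substitute an empty dict so set arithmetic still works.
  dict_mors = None
  resource_keys = {'node-1'}

  added_nodes = set((dict_mors or {}).keys()) - set(resource_keys)
  ```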

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1372672/+subscriptions



[Yahoo-eng-team] [Bug 1373761] Re: Better error message for attach/detach interface failed

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373761

Title:
  Better error message for attach/detach interface failed

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Sometimes we see attach/detach interface fail, but we don't log the
  detailed info, which makes it hard to debug.

  for example:
  
http://logs.openstack.org/02/111802/1/gate/gate-tempest-dsvm-neutron/eff16a6/logs/screen-n-cpu.txt.gz?#_2014-09-24_07_54_12_206

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373761/+subscriptions



[Yahoo-eng-team] [Bug 1374260] Re: HTTPBadRequest is raised when creating floating_ip_bulk which already exists

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374260

Title:
  HTTPBadRequest is raised when creating floating_ip_bulk which already
  exists

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When creating a floating_ip_bulk which already exists,
  HTTPBadRequest(400) is returned, which should be changed to
  HTTPConflict(409).

  $ nova  floating-ip-bulk-create 192.0.20.0/28 --pool private
  ERROR (BadRequest): Floating ip 192.0.20.1 already exists. (HTTP 400) 
(Request-ID: req-cf6ba91a-8a5f-4772-91b5-a159d5c06719)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1374260/+subscriptions



[Yahoo-eng-team] [Bug 1378903] Re: Xen snapshot uploads can fail without retry under retryable circumstances

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378903

Title:
  Xen snapshot uploads can fail without retry under retryable
  circumstances

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  If a glance server is completely down, the xen server taking a
  snapshot will fail and report back as a "non-retryable" exception.
  This is not correct and the compute node should really go to the next
  server in the list and retry.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378903/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376933] Re: _poll_unconfirmed_resize timing window causes instance to stay in verify_resize state forever

2014-12-18 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376933

Title:
  _poll_unconfirmed_resize timing window causes instance to stay in
  verify_resize state forever

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  If the _poll_unconfirmed_resizes periodic task runs while
  nova/compute/manager.py:ComputeManager._finish_resize() has updated
  the migration record in the database but has not yet updated the
  instance, the task logs the following:

  2014-09-30 16:15:00.897 112868 INFO nova.compute.manager [-] Automatically confirming migration 207 for instance 799f9246-bc05-4ae8-8737-4f358240f586
  2014-09-30 16:15:01.109 112868 WARNING nova.compute.manager [-] [instance: 799f9246-bc05-4ae8-8737-4f358240f586] Setting migration 207 to error: In states stopped/resize_finish, not RESIZED/None

  This causes _poll_unconfirmed_resizes to see that the VM task_state is
  still 'resize_finish' instead of None, and set the migration record to
  error state. Which in turn causes the VM to be stuck in resizing
  forever.

  Two fixes have been proposed for this issue so far but were reverted
  because they caused other race conditions. See the following two bugs
  for more details.

  https://bugs.launchpad.net/nova/+bug/1321298
  https://bugs.launchpad.net/nova/+bug/1326778

  This timing issue still exists in Juno today in an environment with
  periodic tasks set to run once every 60 seconds and with a
  resize_confirm_window of 1 second.

  Would a possible solution for this be to change the code in
  _poll_unconfirmed_resizes() to ignore any VMs with a task_state of
  'resize_finish' instead of setting the corresponding migration record
  to error? This is the task_state the VM should have right before it is
  set to None in finish_resize(). Then the next time
  _poll_unconfirmed_resizes() is called, the migration record will still
  be fetched and the VM will be checked again with the updated
  vm_state/task_state.

  Add the following in _poll_unconfirmed_resizes():

      # This removes a race condition
      if task_state == 'resize_finish':
          continue

  prior to:

      elif vm_state != vm_states.RESIZED or task_state is not None:
          reason = (_("In states %(vm_state)s/%(task_state)s, not "
                      "RESIZED/None") %
                    {'vm_state': vm_state,
                     'task_state': task_state})
          _set_migration_to_error(migration, reason,
                                  instance=instance)
          continue
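  A self-contained model of the proposed guard, to show how it changes
  the periodic task's behavior. The function name, the dict-based
  migration records, and the lowercase state strings are illustrative
  simplifications, not nova's actual code: a migration whose instance
  is still in task_state 'resize_finish' is skipped and re-examined on
  the next poll instead of being set to error.

```python
# Illustrative model of _poll_unconfirmed_resizes with the proposed
# guard. Migrations are plain dicts; states are plain strings.
def poll_unconfirmed_resizes(migrations):
    """Return (confirmed, errored, skipped) lists of migration ids."""
    confirmed, errored, skipped = [], [], []
    for m in migrations:
        vm_state, task_state = m["vm_state"], m["task_state"]
        if task_state == "resize_finish":
            # Proposed fix: finish_resize() has not yet cleared the
            # task_state, so leave this migration for the next poll
            # rather than erroring it.
            skipped.append(m["id"])
            continue
        if vm_state != "resized" or task_state is not None:
            # Stands in for _set_migration_to_error(migration, ...).
            errored.append(m["id"])
            continue
        confirmed.append(m["id"])
    return confirmed, errored, skipped
```

  With this guard, the migration from the log above (instance in
  stopped/resize_finish) is skipped instead of being set to error, so
  it is no longer stuck in resizing forever.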

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376933/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

