[Yahoo-eng-team] [Bug 1404617] [NEW] glance with rbd store fails to delete an image

2014-12-21 Thread Yogev Rabl
Public bug reported:

Description of problem:
Deleting an image fails when Glance is configured to use the RBD store with
the configuration settings described in this guide:
http://docs.ceph.com/docs/master/rbd/rbd-openstack/#juno

The glance client appears to hang.


The CLI debug output shows:

# glance --debug image-delete 1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8
curl -i -X HEAD -H 'User-Agent: python-glanceclient' -H 'Content-Type: 
application/octet-stream' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' 
-H 'X-Auth-Token: {SHA1}63a8f4dccb8c42cc741bb638c08fc19dd9ccd360' 
http://10.35.160.133:9292/v1/images/1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8

HTTP/1.1 200 OK
content-length: 0
x-image-meta-id: 1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8
x-image-meta-deleted: False
x-image-meta-container_format: bare
x-image-meta-checksum: 78e6077fcda0c474d42e2811c51e791f
x-image-meta-protected: False
x-image-meta-min_disk: 0
x-image-meta-min_ram: 0
x-image-meta-created_at: 2014-12-21T07:48:22
x-image-meta-size: 41126400
x-image-meta-status: active
etag: 78e6077fcda0c474d42e2811c51e791f
x-image-meta-is_public: True
date: Sun, 21 Dec 2014 07:48:50 GMT
x-image-meta-owner: fb7cd4084c6d4262a94d406f8418d155
x-image-meta-updated_at: 2014-12-21T07:48:29
content-type: text/html; charset=UTF-8
x-openstack-request-id: req-c6975244-6e0d-4b69-8a95-d3703c226a37
x-image-meta-disk_format: raw
x-image-meta-name: cirros-to-delete

curl -i -X HEAD -H 'User-Agent: python-glanceclient' -H 'Content-Type:
application/octet-stream' -H 'Accept-Encoding: gzip, deflate' -H
'Accept: */*' -H 'X-Auth-Token:
{SHA1}63a8f4dccb8c42cc741bb638c08fc19dd9ccd360'
http://10.35.160.133:9292/v1/images/1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8

HTTP/1.1 200 OK
content-length: 0
x-image-meta-id: 1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8
x-image-meta-deleted: False
x-image-meta-container_format: bare
x-image-meta-checksum: 78e6077fcda0c474d42e2811c51e791f
x-image-meta-protected: False
x-image-meta-min_disk: 0
x-image-meta-min_ram: 0
x-image-meta-created_at: 2014-12-21T07:48:22
x-image-meta-size: 41126400
x-image-meta-status: active
etag: 78e6077fcda0c474d42e2811c51e791f
x-image-meta-is_public: True
date: Sun, 21 Dec 2014 07:48:50 GMT
x-image-meta-owner: fb7cd4084c6d4262a94d406f8418d155
x-image-meta-updated_at: 2014-12-21T07:48:29
content-type: text/html; charset=UTF-8
x-openstack-request-id: req-b4808dc5-1aa1-4df0-b70f-4604355b5fba
x-image-meta-disk_format: raw
x-image-meta-name: cirros-to-delete

curl -i -X DELETE -H 'User-Agent: python-glanceclient' -H 'Content-Type: 
application/octet-stream' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' 
-H 'X-Auth-Token: {SHA1}63a8f4dccb8c42cc741bb638c08fc19dd9ccd360' 
http://10.35.160.133:9292/v1/images/1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8
Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Create a new image
2. Delete the image


Actual results:
The image deletion fails, and the data is not deleted from the Ceph storage.

Expected results:
The image should be deleted

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: rbd

** Attachment added: "glance's log"
   
https://bugs.launchpad.net/bugs/1404617/+attachment/4285072/+files/glance-image-delete-fail.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1404617

Title:
  glance with rbd store fails to delete an image

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Description of problem:
  Deleting an image fails when Glance is configured to use the RBD store
  with the configuration settings described in this guide:
  http://docs.ceph.com/docs/master/rbd/rbd-openstack/#juno

  The glance client appears to hang.


  The CLI debug output shows:

  # glance --debug image-delete 1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8
  curl -i -X HEAD -H 'User-Agent: python-glanceclient' -H 'Content-Type: 
application/octet-stream' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' 
-H 'X-Auth-Token: {SHA1}63a8f4dccb8c42cc741bb638c08fc19dd9ccd360' 
http://10.35.160.133:9292/v1/images/1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8

  HTTP/1.1 200 OK
  content-length: 0
  x-image-meta-id: 1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8
  x-image-meta-deleted: False
  x-image-meta-container_format: bare
  x-image-meta-checksum: 78e6077fcda0c474d42e2811c51e791f
  x-image-meta-protected: False
  x-image-meta-min_disk: 0
  x-image-meta-min_ram: 0
  x-image-meta-created_at: 2014-12-21T07:48:22
  x-image-meta-size: 41126400
  x-image-meta-status: active
  etag: 78e6077fcda0c474d42e2811c51e791f
  x-image-meta-is_public: True
  date: Sun, 21 Dec 2014 07:48:50 GMT
  x-image-meta-owner: fb7cd4084c6d4262a94d406f8418d155
  x-image-meta-updated_at: 2014-12-21T07:48:29
  content-type: text/html; charset=UTF-8
  x-openstack-request-id: req-c6975244-6e0d-4b69-8a9

[Yahoo-eng-team] [Bug 1404662] [NEW] L3 agent driver singletons pointing at old agent during testing

2014-12-21 Thread Assaf Muller
Public bug reported:

L3 agent drivers are singletons: they're created once and hold a reference
in self.l3_agent. During testing, the agent is thrown away and rebuilt, but
the driver singletons still point at the old agent and its old
configuration. Since each agent has its own state_path (and dependent
settings such as metadata_proxy_socket), the drivers end up looking at
stale, irrelevant configuration.
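
For illustration only (the class and helper names below are made up, not the
actual neutron code), a minimal sketch of the failure pattern: a module-level
singleton captures the agent it was first created with, so rebuilding the
agent leaves the singleton pointing at the stale object and its stale conf.

class FakeAgent(object):
    def __init__(self, state_path):
        self.conf = {'state_path': state_path}

class FakeDriver(object):
    def __init__(self, l3_agent):
        # The driver captures the agent (and its conf) at creation time.
        self.l3_agent = l3_agent

_driver = None

def get_driver(l3_agent):
    # Created once; later calls ignore the (rebuilt) agent passed in.
    global _driver
    if _driver is None:
        _driver = FakeDriver(l3_agent)
    return _driver

driver = get_driver(FakeAgent('/tmp/state1'))

# A test tears the agent down and rebuilds it with a new state_path...
driver = get_driver(FakeAgent('/tmp/state2'))

# ...but the singleton still sees the first agent's configuration.
print(driver.l3_agent.conf['state_path'])  # prints /tmp/state1, not /tmp/state2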

** Affects: neutron
 Importance: Undecided
 Assignee: Assaf Muller (amuller)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Assaf Muller (amuller)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404662

Title:
  L3 agent driver singletons pointing at old agent during testing

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  L3 agent drivers are singletons: they're created once and hold a reference
  in self.l3_agent. During testing, the agent is thrown away and rebuilt, but
  the driver singletons still point at the old agent and its old
  configuration. Since each agent has its own state_path (and dependent
  settings such as metadata_proxy_socket), the drivers end up looking at
  stale, irrelevant configuration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404662/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404688] [NEW] delete subnet from admin tenant fails for PLUMgrid Plugin

2014-12-21 Thread Fawad Khaliq
Public bug reported:

If a non-admin tenant creates a network and a subnet and the delete
operation is performed by the admin tenant, the PLUMgrid plugin ends up
passing the wrong tenant_id to the backend, which makes the calls fail.

The fix would be to look up the correct tenant UUID in the delete subnet
operation.
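
A rough, hypothetical sketch of the direction such a fix could take (the
class names and backend call below are made up, not the actual PLUMgrid
plugin code): resolve the owner from the stored subnet record rather than
from the caller's context.

# Hypothetical sketch: use the owner recorded on the subnet, not the caller.
class FakeBackend(object):
    def delete_subnet(self, tenant_id, subnet_id):
        print('backend delete: tenant=%s subnet=%s' % (tenant_id, subnet_id))

class FakePlugin(object):
    def __init__(self):
        self.backend = FakeBackend()
        # subnet_id -> stored subnet dict, as the neutron DB would hold it
        self.subnets = {'sub-1': {'id': 'sub-1', 'tenant_id': 'demo-tenant'}}

    def delete_subnet(self, context_tenant_id, subnet_id):
        subnet = self.subnets[subnet_id]
        # Pass the subnet owner's tenant to the backend, even if the caller
        # is the admin tenant.
        self.backend.delete_subnet(subnet['tenant_id'], subnet_id)
        del self.subnets[subnet_id]

FakePlugin().delete_subnet('admin-tenant', 'sub-1')  # backend sees demo-tenant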

** Affects: neutron
 Importance: Undecided
 Assignee: Fawad Khaliq (fawadkhaliq)
 Status: New


** Tags: icehouse-backport-potential juno-backport-potential

** Changed in: neutron
 Assignee: (unassigned) => Fawad Khaliq (fawadkhaliq)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404688

Title:
  delete subnet from admin tenant fails for PLUMgrid Plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If a non-admin tenant creates a network and a subnet and the delete
  operation is performed by the admin tenant, the PLUMgrid plugin ends up
  passing the wrong tenant_id to the backend, which makes the calls fail.

  The fix would be to look up the correct tenant UUID in the delete subnet
  operation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404688/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403823] Re: tests in OpenDaylight CI failing for past 6 days

2014-12-21 Thread Eugene Nikanorov
I don't think this bug belongs to the neutron project itself.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1403823

Title:
  tests in OpenDaylight CI failing for past 6 days

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  The last successful build on the OpenDaylight CI
  (https://jenkins.opendaylight.org/ovsdb/job/openstack-gerrit/) was 6
  days ago. Since then, this OpenDaylight CI Jenkins job has been failing
  for all patches.

  Please make this job non-voting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1403823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403625] Re: devstack defaults to VXLAN even though ENABLE_TENANT_TUNNELS is False in local.conf

2014-12-21 Thread Eugene Nikanorov
** Project changed: neutron => devstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1403625

Title:
  devstack defaults to VXLAN even though ENABLE_TENANT_TUNNELS is False
  in local.conf

Status in devstack - openstack dev environments:
  New

Bug description:
  Branch info:

  slogan@slogan-virtual-machine:~/devstack$ git branch
  * master
  slogan@slogan-virtual-machine:~/devstack$ git log | head
  commit 062e8f14874ab254aa756aabb4f50db77431
  Merge: 7f80280 7bb9a73
  Author: Jenkins 
  Date:   Tue Dec 16 22:02:41 2014 +

  Merge "Adds missing rabbit_userid to trove configs"

  commit 7f8028069883b8214bd2aae56f78514a4fbe
  Merge: affcf87 dc31f76
  Author: Jenkins 

  local.conf:

  [[local|localrc]]
  disable_service n-net
  enable_service q-l3
  enable_service q-svc
  enable_service q-agt
  enable_service q-dhcp
  enable_service q-meta
  enable_service neutron
  ADMIN_PASSWORD=password
  DATABASE_PASSWORD=$ADMIN_PASSWORD
  RABBIT_PASSWORD=$ADMIN_PASSWORD
  SERVICE_PASSWORD=$ADMIN_PASSWORD
  SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d50
  FIXED_RANGE=10.4.128.0/20
  #FLOATING_RANGE=192.168.1.20/30
  #HOST_IP=localhost
  HOST_IP=192.168.1.220
  PUBLIC_INTERFACE=eth0
  FLAT_INTERFACE=br-int
  FLAT_NETWORK_BRIDGE=br-eth0
  NETWORK_GATEWAY=10.4.128.1
  FIXED_NETWORK_SIZE=4096
  SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler
  Q_PLUGIN=ml2
  OFFLINE=True
  ACTIVE_TIMEOUT=120
  ASSOCIATE_TIMEOUT=60
  BOOT_TIMEOUT=120
  SERVICE_TIMEOUT=120
  EXTRA_OPTS=(metadata_host=$HOST_IP)

  # Allow tenants to create vlans

  ENABLE_TENANT_VLANS=True
  ENABLE_TENANT_TUNNELS=False
  ML2_VLAN_RANGES=physnet1:1100:2999

  # these are needed for VLANs for tenants to connect to the physical switch

  PHYSICAL_NETWORK=default
  OVS_PHYSICAL_BRIDGE=br-int

  Q_DHCP_EXTRA_DEFAULT_OPTS=(enable_metadata_network=True
  enable_isolated_metadata=True)

  Notice I don't have Q_ML2_TENANT_NETWORK_TYPE set, but I am saying I
  want VLANs and I don't want tunnels (and I haven't defined anything
  else related to tunnels, e.g., VNI ranges).

  When I run ./stack.sh with the above, I noticed that br-tun is
  created:

  slogan@slogan-virtual-machine:~/devstack$ sudo ovs-vsctl show
  2d7ac7cc-4358-41e7-afd4-3c5a0081d79f
  Bridge br-ex
  Port br-ex
  Interface br-ex
  type: internal
  Port "qg-db338515-8c"
  Interface "qg-db338515-8c"
  type: internal
  Bridge br-tun
  Port br-tun
  Interface br-tun
  type: internal
  Port patch-int
  Interface patch-int
  type: patch
  options: {peer=patch-tun}
  Bridge br-int
  ...

  Also, in ml2_conf.ini:

  [ml2]
  tenant_network_types = vxlan
  type_drivers = local,flat,vlan,gre,vxlan
  mechanism_drivers = openvswitch,linuxbridge
  ...

  The code in question is in devstack/lib/neutron_plugins/ml2:

  Q_ML2_TENANT_NETWORK_TYPE=${Q_ML2_TENANT_NETWORK_TYPE:-"vxlan"}
  # This has to be set here since the agent will set this in the config file
  if [[ "$Q_ML2_TENANT_NETWORK_TYPE" == "gre" || "$Q_ML2_TENANT_NETWORK_TYPE" 
== "vxlan" ]]; then
  Q_TUNNEL_TYPES=$Q_ML2_TENANT_NETWORK_TYPE
  elif [[ "$ENABLE_TENANT_TUNNELS" == "True" ]]; then
  Q_TUNNEL_TYPES=gre
  fi

  The above code defaults the tenant network type to vxlan when it is not
  specified. I think the code should take ENABLE_TENANT_TUNNELS and
  ENABLE_TENANT_VLANS into account when choosing that default.

  Notice that the wiki has a devstack sample that led me down this path;
  I'd like to see the code match the wiki by fixing this bug (I think
  the wiki is fine; it's the script that needs fixing).

  Configure devstack for ML2 with VLANs
  An example control and compute node localrc file is shown here for 
configuring ML2 to run with VLANs with devstack. This is equivalent to running 
the OVS or LinuxBridge plugins in VLAN mode.

  Add the following to your control node localrc:
  Q_PLUGIN=ml2
  ENABLE_TENANT_VLANS=True
  ML2_VLAN_RANGES=mynetwork:100:200
  To set special VLAN parameters for the VLAN TypeDriver, the following
  variable in localrc can be used. This is a space-separated list of
  assignment values:
  Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=600:700)

  (the above is from https://wiki.openstack.org/wiki/Neutron/ML2)

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1403625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403291] Re: test_server_connectivity_pause_unpause fails with "AssertionError: False is not true : Timed out waiting for 172.24.4.64 to become reachable"

2014-12-21 Thread Eugene Nikanorov
** Also affects: tempest
   Importance: Undecided
   Status: New

** Tags added: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1403291

Title:
  test_server_connectivity_pause_unpause fails with "AssertionError:
  False is not true : Timed out waiting for 172.24.4.64 to become
  reachable"

Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  New

Bug description:
  http://logs.openstack.org/46/142246/1/check//check-tempest-dsvm-
  neutron-full/ff04c3e/console.html#_2014-12-16_23_45_56_966

  message:"check_public_network_connectivity" AND
  message:"AssertionError: False is not true : Timed out waiting for"
  AND message:"to become reachable" AND tags:"tempest.txt"

  420 hits in 7 days, check and gate, all failures. This is probably a known
  issue already, so it could be a duplicate of another bug, but given that
  elastic-recheck didn't comment on my patch when this failed, I'm reporting
  a new bug and a new e-r query:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiY2hlY2tfcHVibGljX25ldHdvcmtfY29ubmVjdGl2aXR5XCIgQU5EIG1lc3NhZ2U6XCJBc3NlcnRpb25FcnJvcjogRmFsc2UgaXMgbm90IHRydWUgOiBUaW1lZCBvdXQgd2FpdGluZyBmb3JcIiBBTkQgbWVzc2FnZTpcInRvIGJlY29tZSByZWFjaGFibGVcIiBBTkQgdGFnczpcInRlbXBlc3QudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTg3ODcwOTM1OTIsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1403291/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403424] Re: Fixed ip info shown for port even when dhcp is disabled

2014-12-21 Thread Eugene Nikanorov
What you get through the API is just a reflection of the DB state, in which a
port has an IP address regardless of subnet settings.
I don't think the output should be affected by the state of the backend.

So IMO, we don't need to fix this.

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1403424

Title:
  Fixed ip info shown for port even when dhcp is disabled

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  As a user, it is very confusing (especially in the Horizon dashboard) to
  have the IP address displayed even when DHCP is disabled for the subnet
  the port belongs to: the user would expect the IP address shown to
  actually be assigned to the instance, but this is not the case, since
  DHCP is disabled.

  I asked on the ML about this issue
  (http://lists.openstack.org/pipermail/openstack-
  dev/2014-December/053069.html) and I understood that neutron needs to
  reserve an IP address for a port even when it is not assigned; still, I
  think this info should either not be displayed or be labelled differently.

  At first I thought about raising the bug against Horizon, but I feel the
  correct place to fix this is in neutron. Before assigning this bug to
  myself I would like to get some feedback from other developers. My idea
  for a possible solution is to add a boolean element "assigned" to the
  "fixed_ip" dict, with value False when DHCP is disabled in the subnet
  identified by "subnet_id".
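
  Purely as an illustration of that proposal (this is not the current
  neutron API; the IDs below are made up), the port output could look like:

# Illustrative only: a port's fixed_ips with the proposed "assigned" flag.
port = {
    'id': 'port-uuid',
    'fixed_ips': [
        {'subnet_id': 'subnet-with-dhcp', 'ip_address': '10.0.0.3',
         'assigned': True},
        {'subnet_id': 'subnet-without-dhcp', 'ip_address': '10.0.1.7',
         'assigned': False},  # reserved in the DB, not handed out via DHCP
    ],
}
print(port['fixed_ips'][1]['assigned'])  # False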

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1403424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404743] [NEW] test_load_balancer_basic fails with ssh timeout

2014-12-21 Thread Angus Salkeld
Public bug reported:

http://logs.openstack.org/01/141001/4/gate/gate-tempest-dsvm-neutron-
full/0fcd5ec/console.html.gz

2014-12-19 14:01:31.371 | SSHTimeout: Connection to the 172.24.4.69
via SSH timed out.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404743

Title:
  test_load_balancer_basic fails with ssh timeout

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  http://logs.openstack.org/01/141001/4/gate/gate-tempest-dsvm-neutron-
  full/0fcd5ec/console.html.gz

  2014-12-19 14:01:31.371 | SSHTimeout: Connection to the
  172.24.4.69 via SSH timed out.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404743/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404745] [NEW] cloud-init's growfs/resize fails with gpart dependency on FreeBSD

2014-12-21 Thread Rick
Public bug reported:

I've generated a FreeBSD qcow2 including base and kernel with cloud-init
and dependencies. cloud-init runs when the instance boots, but the
growpart module fails due to what appear to be two separate problems.

One of the dependencies is the gpart port/pkg, which, incidentally, is one
reason it is failing[1]. The growpart module, for a yet-unknown reason,
calls /usr/local/sbin/gpart with arguments that are valid only for
FreeBSD's /sbin/gpart. When the /usr/local/sbin/gpart dependency is
removed, cloud-init executes /sbin/gpart, which then leads to the next
failure, one I think is caused by cloud-init itself...

cloud-init executes growfs, via the resizefs module, after gpart
recover/resize succeeds. The logs[2] show the growfs command as
growfs /dev/vtbd0p2. By default, FreeBSD's growfs runs interactively and
asks a question, which can be avoided with the '-y' command line option.
The logs indicate a successful growfs operation, but df doesn't reflect
it. I suspect this is due to the default interactive nature of growfs.

[1] http://hostileadmin.com/images/cloud_init_gpart_fail.jpg
[2] http://hostileadmin.com/images/cloud_init_growfs_fail.jpg
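
As a hedged illustration of the suspected fix (this is not cloud-init's
actual code; only the '-y' flag and the device path come from the report
above), running growfs non-interactively might look like:

# Illustrative only: invoke FreeBSD's growfs so it does not stop at the
# interactive confirmation prompt; '-y' answers yes automatically.
import subprocess

def grow_filesystem(devpath):
    # devpath would be something like '/dev/vtbd0p2' (as seen in the logs)
    subprocess.check_call(['growfs', '-y', devpath])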

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: cloud-init freebsd gpart

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1404745

Title:
  cloud-init's growfs/resize fails with gpart dependency on FreeBSD

Status in Init scripts for use on cloud images:
  New

Bug description:
  I've generated a FreeBSD qcow2 including base and kernel with cloud-
  init and dependencies. cloud-init runs when the instance boots, but
  the growpart module fails due to what appear to be two separate
  problems.

  One of the dependencies is the gpart port/pkg, which, incidentally, is
  one reason it is failing[1]. The growpart module, for a yet-unknown
  reason, calls /usr/local/sbin/gpart with arguments that are valid only
  for FreeBSD's /sbin/gpart. When the /usr/local/sbin/gpart dependency is
  removed, cloud-init executes /sbin/gpart, which then leads to the next
  failure, one I think is caused by cloud-init itself...

  cloud-init executes growfs, via the resizefs module, after gpart
  recover/resize succeeds. The logs[2] show the growfs command as
  growfs /dev/vtbd0p2. By default, FreeBSD's growfs runs interactively and
  asks a question, which can be avoided with the '-y' command line
  option. The logs indicate a successful growfs operation, but df
  doesn't reflect it. I suspect this is due to the default
  interactive nature of growfs.

  [1] http://hostileadmin.com/images/cloud_init_gpart_fail.jpg
  [2] http://hostileadmin.com/images/cloud_init_growfs_fail.jpg

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1404745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404755] [NEW] midonet_lib.py contains a typo'd format parameter

2014-12-21 Thread Angus Lees
Public bug reported:

In add_static_nat(...):

LOG.debug("MidoClient.add_static_nat called: "
  "tenant_id=%(tenant_id)s, chain_name=%(chain_name)s, "
  "from_ip=%(from_ip)s, to_ip=%(to_ip)s, "
  "port_id=%(port_id)s, nat_type=%(nat_type)s",
  {'tenant_id': tenant_id, 'chain_name': chain_name,
   'from_ip': from_ip, 'to_ip': to_ip,
   'portid': port_id, 'nat_type': nat_type})

Note port_id vs portid.  This line of code will raise a KeyError if
debug logging is enabled.
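
Presumably the fix is the one-character key rename; for clarity, the
corrected call would look like:

LOG.debug("MidoClient.add_static_nat called: "
          "tenant_id=%(tenant_id)s, chain_name=%(chain_name)s, "
          "from_ip=%(from_ip)s, to_ip=%(to_ip)s, "
          "port_id=%(port_id)s, nat_type=%(nat_type)s",
          {'tenant_id': tenant_id, 'chain_name': chain_name,
           'from_ip': from_ip, 'to_ip': to_ip,
           'port_id': port_id, 'nat_type': nat_type})  # key now matches %(port_id)s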

** Affects: neutron
 Importance: Undecided
 Assignee: Angus Lees (gus)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404755

Title:
  midonet_lib.py contains a typo'd format parameter

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  In add_static_nat(...):

  LOG.debug("MidoClient.add_static_nat called: "
"tenant_id=%(tenant_id)s, chain_name=%(chain_name)s, "
"from_ip=%(from_ip)s, to_ip=%(to_ip)s, "
"port_id=%(port_id)s, nat_type=%(nat_type)s",
{'tenant_id': tenant_id, 'chain_name': chain_name,
 'from_ip': from_ip, 'to_ip': to_ip,
 'portid': port_id, 'nat_type': nat_type})

  Note port_id vs portid.  This line of code will raise a KeyError if
  debug logging is enabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404755/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404764] [NEW] share storage live migration fails at _post_live_migration() function, but the status of this instance is still "migrating".

2014-12-21 Thread Rong Han ZTE
Public bug reported:

A shared-storage live migration failed in the _post_live_migration()
function because the umount command failed, but the status of the instance
is still "migrating".

Log is as follows:

2014-12-19 16:45:32.741 6127 INFO nova.compute.manager [-] [instance: 
e9fab51d-8e13-416b-b2c9-211e04ba35b2] _post_live_migration() is started..
2014-12-19 16:45:32.779 6127 INFO urllib3.connectionpool [-] Starting new HTTP 
connection (1): 10.47.158.165
2014-12-19 16:45:32.845 6127 INFO nova.compute.manager [-] [instance: 
e9fab51d-8e13-416b-b2c9-211e04ba35b2] During sync_power_state the instance has 
a pending task. Skip.
2014-12-19 16:45:33.041 6127 ERROR nova.openstack.common.loopingcall [-] in 
fixed duration looping call
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall Traceback 
(most recent call last):
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/loopingcall.py", line 
78, in _inner
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
self.f(*self.args, **self.kw)
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4987, in 
wait_for_live_migration
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
migrate_data)
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
payload)
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
six.reraise(self.type_, self.value, self.tb)
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall return 
f(self, context, *args, **kw)
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 369, in 
decorated_function
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall e, 
sys.exc_info())
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
six.reraise(self.type_, self.value, self.tb)
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 356, in 
decorated_function
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall return 
function(self, context, *args, **kwargs)
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4826, in 
_post_live_migration
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
migrate_data)
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5185, in 
post_live_migration
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
self._umount_instance_sysdisk(instance)
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2214, in 
_umount_instance_sysdisk
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
utils.execute('umount', mount_path, run_as_root=True)
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/utils.py", line 165, in execute
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall return 
processutils.execute(*cmd, **kwargs)
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py", line 
195, in execute
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
cmd=sanitized_cmd)
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall 
ProcessExecutionError: Unexpected error while running command.
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall Command: 
sudo nova-rootwrap /etc/nova/rootwrap.conf umount 
/var/lib/nova/instances/e9fab51d-8e13-416b-b2c9-211e04ba35b2/sysdisk
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall Exit code: 
32
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall Stdout: u''
2014-12-19 16:45:33.041 6127 TRACE nova.openstack.common.loopingcall Stderr: 
u'umount: /var/lib
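
The stderr above is truncated, but the failure mode is clear: the umount in
_umount_instance_sysdisk raises and the post-migration handler never
finishes, leaving the task state stuck. A rough, hypothetical sketch (not
the actual nova code) of a more tolerant cleanup step:

# Hypothetical sketch: log and continue if the sysdisk umount fails, so the
# rest of post_live_migration (and the status update) can still complete.
import logging

from nova.openstack.common import processutils
from nova import utils

LOG = logging.getLogger(__name__)

def umount_instance_sysdisk_safe(mount_path):
    try:
        utils.execute('umount', mount_path, run_as_root=True)
    except processutils.ProcessExecutionError as exc:
        LOG.warning("umount of %s failed during post-live-migration "
                    "cleanup: %s", mount_path, exc)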

[Yahoo-eng-team] [Bug 1403136] Re: Create tenants, users, and roles in OpenStack Installation Guide for Ubuntu 14.04  - juno

2014-12-21 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/143215
Committed: 
https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=14e6c86d5a457dbbb90690d55655a4532919255a
Submitter: Jenkins
Branch:master

commit 14e6c86d5a457dbbb90690d55655a4532919255a
Author: Matthew Kassawara 
Date:   Fri Dec 19 16:30:53 2014 -0600

Fix conflicts with _member_ role creation

Historically, the installation guide manually created the
internal _member_ role to resolve issues with horizon.
However, keystone will preferably create the _member_ role
automatically if the 'user-create' command includes the
'--tenant' option.

Change-Id: I1a67db2b6aa6a8e2bfd76cc80db1fb09fa353986
Closes-Bug: #1403136
backport: juno


** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1403136

Title:
  Create tenants, users, and roles in OpenStack Installation Guide for
  Ubuntu 14.04  - juno

Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Manuals:
  Fix Released

Bug description:
  "e. By default, the dashboard limits access to users with the _member_
  role. Create the _member_ role:"

  The first sentence is true, but keystone will automatically create the
  _member_ role if it does not exist.

  I discovered this while tracking down an error:  "keystone user-
  create" resulted in a "duplicate entry" error. The sequence is like
  this:

  1) As described in the doc, I run "keystone role-create --name _member_".
  The role is created and assigned a random ID.
  2) On "user-create", keystone wants to assign the _member_ role to the new
  user. It looks up member_role_id in keystone.conf and finds none (the
  member_role_id does not match the ID from step 1).
  3) keystone now tries to create the _member_ role, but this fails since the
  name already exists.

  So by not creating the "_member_" role myself, the problem is averted.
  That's why I'm opening a bug against the docs; another fix would be for
  keystone to do the lookup by name instead, but I assume the keystone
  team has a good reason for not doing so.

  I'm using the v2 API with SQL backend.

  ---
  Built: 2014-12-09T01:28:32 00:00
  git SHA: 6d3c276487be990722bc423642ffb05217d77289
  URL: 
http://docs.openstack.org/juno/install-guide/install/apt/content/keystone-users.html
  source File: 
file:/home/jenkins/workspace/openstack-manuals-tox-doc-publishdocs/doc/install-guide/section_keystone-users.xml
  xml:id: keystone-users

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1403136/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404782] [NEW] ml2: superfluous %s in LOG.debug() format

2014-12-21 Thread Angus Lees
Public bug reported:

ml2.db.get_dynamic_segment() includes this line:

   LOG.debug("No dynamic segment %s found for "
 "Network:%(network_id)s, "
 "Physical network:%(physnet)s, "
 "segmentation_id:%(segmentation_id)s",
 {'network_id': network_id,
  'physnet': physical_network,
  'segmentation_id': segmentation_id})

Note the superfluous %s in the format string. At run time the %s prints
the args dict again and doesn't cause an error, but it is clearly
unintended.
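
Presumably the intended call simply drops the stray %s, e.g.:

   LOG.debug("No dynamic segment found for "
             "Network:%(network_id)s, "
             "Physical network:%(physnet)s, "
             "segmentation_id:%(segmentation_id)s",
             {'network_id': network_id,
              'physnet': physical_network,
              'segmentation_id': segmentation_id})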

** Affects: neutron
 Importance: Undecided
 Assignee: Angus Lees (gus)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404782

Title:
  ml2: superfluous %s in LOG.debug() format

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  ml2.db.get_dynamic_segment() includes this line:

 LOG.debug("No dynamic segment %s found for "
 "Network:%(network_id)s, "
 "Physical network:%(physnet)s, "
 "segmentation_id:%(segmentation_id)s",
 {'network_id': network_id,
  'physnet': physical_network,
  'segmentation_id': segmentation_id})

  Note the superfluous %s in the format string. At run time the %s prints
  the args dict again and doesn't cause an error, but it is clearly
  unintended.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404782/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404785] [NEW] Cisco: logging incorrectly called with (fmt, arg) tuple

2014-12-21 Thread Angus Lees
Public bug reported:

cisco.db.n1kv_db_v2._validate_segment_range_uniqueness() includes these
lines:

   msg = (_("NetworkProfile name %s already exists"),
  net_p["name"])
   LOG.error(msg)
   raise n_exc.InvalidInput(error_message=msg)

As written, msg is a tuple, and the various logging lines below print
the tuple members without properly expanding the format string as
intended.

The format in msg should have been expanded with %, as was presumably
the intention. There are a few other examples of this elsewhere in this
file.
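
Presumably the intended form interpolates immediately, so that msg is a
plain string:

   msg = _("NetworkProfile name %s already exists") % net_p["name"]
   LOG.error(msg)
   raise n_exc.InvalidInput(error_message=msg)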

** Affects: neutron
 Importance: Undecided
 Assignee: Angus Lees (gus)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404785

Title:
  Cisco: logging incorrectly called with (fmt, arg) tuple

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  cisco.db.n1kv_db_v2._validate_segment_range_uniqueness() includes these
  lines:

 msg = (_("NetworkProfile name %s already exists"),
net_p["name"])
 LOG.error(msg)
 raise n_exc.InvalidInput(error_message=msg)

  As written, msg is a tuple, and the various logging lines below print
  the tuple members without properly expanding the format string as
  intended.

  The format in msg should have been expanded with %, as was presumably
  the intention. There are a few other examples of this elsewhere in this
  file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404785/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404788] [NEW] Should use lazy logging interpolation

2014-12-21 Thread Angus Lees
Public bug reported:

There are a small number of examples of "eager" interpolation in
neutron:
  logging.debug("foo %s" % arg)

These should be converted to perform the interpolation lazily within
the logging function, since if the severity is below the logging level
then the interpolation can be skipped entirely.

This bug is a grab bag of all such current examples found in neutron
via a pylint test.
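
For readers unfamiliar with the distinction, a minimal self-contained
example of the eager vs. lazy forms:

import logging

LOG = logging.getLogger(__name__)
arg = "bar"

# Eager: the string is always built, even when DEBUG is disabled.
LOG.debug("foo %s" % arg)

# Lazy: interpolation is deferred to the logging call and skipped entirely
# when the DEBUG level is not enabled.
LOG.debug("foo %s", arg)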

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404788

Title:
  Should use lazy logging interpolation

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  There are a small number of examples of "eager" interpolation in
  neutron:
logging.debug("foo %s" % arg)

  These should be converted to perform the interpolation lazily within
  the logging function, since if the severity is below the logging level
  then the interpolation can be skipped entirely.

  This bug is a grab bag of all such current examples found in neutron
  via a pylint test.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404791] [NEW] can not delete an instance if the instance's rescue volume is not found

2014-12-21 Thread Eli Qiao
Public bug reported:

An instance cannot be deleted if the instance's rescue LVM volume cannot be
found.

How to reproduce:

1. Configure images_type lvm for the libvirt driver:
[libvirt]
images_type = lvm < 
images_volume_group = stack-volumes-lvmdriver-1 <-- LVM volume group used

2. Rescue the instance; this generates a uuid.rescue LVM volume.
3. Unrescue the instance; the rescue volume cannot be deleted due to bug
https://bugs.launchpad.net/nova/+bug/1385480
4. Delete the uuid.rescue volume manually.
5. Delete the instance (this fails and the instance is set to the error state).

Below is the call trace from the nova-compute driver.

2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher 
self._cleanup_lvm(instance)
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 944, in _cleanup_lvm
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher 
lvm.remove_volumes(disks)
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/libvirt/lvm.py", line 272, in remove_volumes
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher 
clear_volume(path)
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/libvirt/lvm.py", line 250, in clear_volume
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher volume_size = 
get_volume_size(path)
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/libvirt/lvm.py", line 66, in decorated_function
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher return 
function(path)
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/libvirt/lvm.py", line 197, in get_volume_size
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher 
run_as_root=True)
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/libvirt/utils.py", line 53, in execute
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher return 
utils.execute(*args, **kwargs)
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/utils.py", line 164, in execute
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher return 
processutils.execute(*cmd, **kwargs)
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 224, 
in execute
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher 
cmd=sanitized_cmd)
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher 
ProcessExecutionError: Unexpected error while running command.
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher Command: sudo 
nova-rootwrap /etc/nova/rootwrap.conf blockdev --getsize64 
/dev/stack-volumes-lvmdriver-1/b09687ee-f525-4edc-aaf4-1272562d46fd_disk.rescue
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher Exit code: 1
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher Stdout: u''
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher Stderr: u'blockdev: 
cannot open 
/dev/stack-volumes-lvmdriver-1/b09687ee-f525-4edc-aaf4-1272562d46fd_disk.rescue:
 No such file or directory\n'
2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher
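
A hedged sketch of one possible guard (hypothetical; not the actual nova LVM
code): tolerate volumes that have already disappeared so a manually removed
*.rescue volume cannot block instance deletion.

# Hypothetical sketch: skip LVM volumes that no longer exist during cleanup.
import os

def remove_volumes_tolerant(paths, clear_volume, delete_volume):
    for path in paths:
        if not os.path.exists(path):
            # Already gone (e.g. removed by hand); nothing to clear or delete.
            continue
        clear_volume(path)
        delete_volume(path)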

** Affects: nova
 Importance: Undecided
 Assignee: Eli Qiao (taget-9)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Eli Qiao (taget-9)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404791

Title:
  can not delete an instance if the instance's rescue volume is not
  found

Status in OpenStack Compute (Nova):
  New

Bug description:
  An instance cannot be deleted if the instance's rescue LVM volume cannot
  be found.

  How to reproduce:

  1. Configure images_type lvm for the libvirt driver:
  [libvirt]
  images_type = lvm < 
  images_volume_group = stack-volumes-lvmdriver-1 <-- LVM volume group used

  2. Rescue the instance; this generates a uuid.rescue LVM volume.
  3. Unrescue the instance; the rescue volume cannot be deleted due to bug
https://bugs.launchpad.net/nova/+bug/1385480
  4. Delete the uuid.rescue volume manually.
  5. Delete the instance (this fails and the instance is set to the error state).

  Below is the call trace from the nova-compute driver.

  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher 
self._cleanup_lvm(instance)
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 944, in _cleanup_lvm
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher 
lvm.remove_volumes(disks)
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/libvirt/lvm.py", line 272, in remove_volumes
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher 
clear_volume(path)
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/st

[Yahoo-eng-team] [Bug 1332536] Re: ImageBusy: error removing image. when evacuate on ceph backed volume

2014-12-21 Thread ChangBo Guo(gcb)
I met a similar issue recently with a shared-storage Ceph backend.
'ImageBusy: error removing image' was just another exception raised while
cleaning up the instance on the target host after the evacuate failed. From
Ceph's side the image is still in use by the original instance, so Ceph
considers it busy. In the normal evacuate workflow we don't need to clean up
the image, so the root cause is really why the evacuate failed. In my test
env, I hit the bug https://bugs.launchpad.net/neutron/+bug/1357476.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1332536

Title:
  ImageBusy: error removing image. when evacuate on ceph backed volume

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
   Icehouse, with Ceph as a backend for glance and cinder.

   When evacuating an instance from a failed host to another, the command
   fails.

  2014-06-20 20:21:36.430 12362 ERROR oslo.messaging.rpc.dispatcher [-] 
Exception during message handling: error removing image
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/exception.py", line 88, in wrapped
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/exception.py", line 71, in wrapped
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 327, in 
decorated_function
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher 
function(self, context, *args, **kwargs)
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 303, in 
decorated_function
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher e, 
sys.exc_info())
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 290, in 
decorated_function
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2251, in 
terminate_instance
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher 
do_terminate_instance(instance, bdms)
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py", line 
249, in inner
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher return 
f(*args, **kwargs)
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2249, in 
do_terminate_instance
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher 
self._set_instance_error_state(context, instance['uuid'])
  2014-06-20 20:21:36.430 12362 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68,

[Yahoo-eng-team] [Bug 1404795] [NEW] instance's host and node are target host's while evacuate failed

2014-12-21 Thread ChangBo Guo(gcb)
Public bug reported:

Evacuate provides a way to recover an instance from a failed compute node.
The compute manager changes the instance's host and node name to the target
host before performing the real action, '_rebuild_default_impl'. We don't
catch exceptions from _rebuild_default_impl, so any evacuate failure leaves
the instance's host and node name set to the target host's. Worse, when the
original failed node is restarted, it checks the instance's host and
destroys the instance on the original node. We need to restore the
instance's host and node attributes.
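
A hedged sketch of the kind of fix being suggested (hypothetical names; not
the actual compute manager code): remember the original host/node, and
restore them if the rebuild step raises.

# Hypothetical sketch: restore the instance's host/node if the rebuild fails.
def rebuild_on_target(instance, target_host, target_node, rebuild_impl):
    orig_host, orig_node = instance.host, instance.node
    instance.host, instance.node = target_host, target_node
    instance.save()
    try:
        rebuild_impl(instance)
    except Exception:
        # Evacuate failed: point the instance back at its original location so
        # the source node, once recovered, does not destroy it on restart.
        instance.host, instance.node = orig_host, orig_node
        instance.save()
        raise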

** Affects: nova
 Importance: Undecided
 Assignee: ChangBo Guo(gcb) (glongwave)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404795

Title:
  instance's host and node are target host's while evacuate failed

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Evacuate provides a way to recover an instance from a failed compute node.
  The compute manager changes the instance's host and node name to the
  target host before performing the real action, '_rebuild_default_impl'.
  We don't catch exceptions from _rebuild_default_impl, so any evacuate
  failure leaves the instance's host and node name set to the target
  host's. Worse, when the original failed node is restarted, it checks the
  instance's host and destroys the instance on the original node. We need
  to restore the instance's host and node attributes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404795/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404801] [NEW] Unshelve instance not working if instance is boot from volume

2014-12-21 Thread Abhishek Kekane
Public bug reported:

If the instance is booted from a volume, shelving the instance sets the
status to SHELVED_OFFLOADED and the instance files are deleted properly
from the base path. When you then unshelve the instance, it fails in the
conductor with the error "Unshelve attempted but the image_id is not
provided", and the instance goes into the error state.

Steps to reproduce:
---

1. Log in to Horizon, create a new volume.
2. Create an Instance using newly created volume.
3. Verify instance is in active state.
$ source devstack/openrc demo demo
$ nova list
+--------------------------------------+------+--------+------------+-------------+------------------+
| ID                                   | Name | Status | Task State | Power State | Networks         |
+--------------------------------------+------+--------+------------+-------------+------------------+
| dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | ACTIVE | -          | Running     | private=10.0.0.3 |
+--------------------------------------+------+--------+------------+-------------+------------------+

4. Shelve the instance
$ nova shelve 

5. Verify the status is SHELVED_OFFLOADED.
$ nova list
+--------------------------------------+------+-------------------+------------+-------------+------------------+
| ID                                   | Name | Status            | Task State | Power State | Networks         |
+--------------------------------------+------+-------------------+------------+-------------+------------------+
| dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | SHELVED_OFFLOADED | -          | Shutdown    | private=10.0.0.3 |
+--------------------------------------+------+-------------------+------------+-------------+------------------+

6. Unshelve the instance.
$ nova unshelve 

The following stack trace is logged in nova-conductor:

2014-12-19 02:55:59.634 ERROR nova.conductor.manager 
[req-a071fbc9-1c23-4e7a-8adf-7b3d0951aadf demo demo] [instance: 
dae3a13b-6aa8-4794-93cd-5ab7bf90f604] Unshelve attempted but the image_id is 
not provided
2014-12-19 02:55:59.647 ERROR oslo.messaging.rpc.dispatcher 
[req-a071fbc9-1c23-4e7a-8adf-7b3d0951aadf demo demo] Exception during message 
handling: Error during unshelve instance dae3a13b-6aa8-4794-93cd-5ab7bf90f604: 
Unshelve attempted but the image_id is not provided
2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
137, in _dispatch_and_reply
2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
180, in _dispatch
2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
126, in _do_dispatch
2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/conductor/manager.py", line 727, in unshelve_instance
2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher 
instance_id=instance.uuid, reason=reason)
2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher UnshelveException: 
Error during unshelve instance dae3a13b-6aa8-4794-93cd-5ab7bf90f604: Unshelve 
attempted but the image_id is not provided
2014-12-19 02:55:59.647 TRACE oslo.messaging.rpc.dispatcher

7. Instance goes into error state.
$ nova list
+--------------------------------------+------+--------+------------+-------------+------------------+
| ID                                   | Name | Status | Task State | Power State | Networks         |
+--------------------------------------+------+--------+------------+-------------+------------------+
| dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | ERROR  | unshelving | Shutdown    | private=10.0.0.3 |
+--------------------------------------+------+--------+------------+-------------+------------------+

Note:
1. This issue is reproducible with the admin as well as the demo tenant.
2. The issue is reproducible for all shelved_offload_time values (-1, 0, > 0).
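
As a rough, hypothetical illustration of the direction a fix could take
(this is not the actual conductor code; the helper name is made up): treat
a missing image as acceptable when the instance is volume-backed instead of
raising.

# Hypothetical sketch: only insist on an image_id for image-backed instances.
def pick_unshelve_image(instance_image_ref, shelved_image_id):
    if shelved_image_id:
        return shelved_image_id      # snapshot taken at shelve time
    if not instance_image_ref:
        return None                  # boot-from-volume: nothing to download
    raise ValueError("Unshelve attempted but the image_id is not provided")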

** Affects: nova
 Importance: Undecided
 Assignee: Abhishek Kekane (abhishek-kekane)
 Status: New


** Tags: ntt

** Changed in: nova
 Assignee: (unassigned) => Abhishek Kekane (abhishek-kekane)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404801

Title:
  Unshelve instance not working if instance is boot from volume

Status in OpenStack Compu