[Bug 1505473] [NEW] pollen does not start on boot

2015-10-12 Thread Paul Collins
Public bug reported:

ubuntu@keeton:~$ lsb_release -rc
Release:        14.04
Codename:   trusty
ubuntu@keeton:~$ dpkg-query -W pollen
pollen  4.11-0ubuntu1
ubuntu@keeton:~$ _

pollen does not start on boot, due to an error in the upstart config:

ubuntu@keeton:~$ grep start /etc/init/pollen.conf 
start on start on runlevel [2345]
stop on start on runlevel [!2345]
# Ensure our device exists, and is a character device, before starting our server
ubuntu@keeton:~$ _

Deleting the second "start on" in each line seems to fix this.
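
For clarity, the corrected stanza would then read:

start on runlevel [2345]
stop on runlevel [!2345]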

** Affects: pollen (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to pollen in Ubuntu.
https://bugs.launchpad.net/bugs/1505473

Title:
  pollen does not start on boot

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pollen/+bug/1505473/+subscriptions



[Bug 1367547] [NEW] qemu-img convert -O raw is broken in trusty

2014-09-09 Thread Paul Collins
Public bug reported:

qemu-img convert -O raw yields a file of the correct length with no
content, e.g.:

$ qemu-img convert -O raw trusty-server-cloudimg-amd64-disk1.img trusty-server-cloudimg-amd64-disk1.raw
$ ls -l trusty-*
-rw-rw-r-- 1 paul paul  255590912 Sep 10 09:14 trusty-server-cloudimg-amd64-disk1.img
-rw-r--r-- 1 paul paul 2361393152 Sep 10 15:13 trusty-server-cloudimg-amd64-disk1.raw
$ du -sh trusty-*
244M    trusty-server-cloudimg-amd64-disk1.img
0       trusty-server-cloudimg-amd64-disk1.raw
$ dpkg-query -W qemu-utils
qemu-utils  2.0.0+dfsg-2ubuntu1.3
$ lsb_release -c
Codename:   trusty
$ _

qemu-img in precise and in utopic both seem to work correctly, e.g.:

$ qemu-img convert -O raw trusty-server-cloudimg-amd64-disk1.img trusty-server-cloudimg-amd64-disk1.raw
$ ls -l trusty-*
-rw-r--r-- 1 paul paul  255590912 Sep  9 21:14 trusty-server-cloudimg-amd64-disk1.img
-rw-r--r-- 1 paul paul 2361393152 Sep 10 03:16 trusty-server-cloudimg-amd64-disk1.raw
$ du -sh trusty-*
244M    trusty-server-cloudimg-amd64-disk1.img
801M    trusty-server-cloudimg-amd64-disk1.raw
$ dpkg-query -W qemu-utils
qemu-utils  2.1+dfsg-4ubuntu1
$ lsb_release -c
Codename:   utopic
$ _
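
A quick way to confirm whether the converted image actually matches the source (not part of the original report; qemu-img compare is present in both the 2.0 and 2.1 packages):

$ qemu-img compare trusty-server-cloudimg-amd64-disk1.img trusty-server-cloudimg-amd64-disk1.raw

This prints "Images are identical." for a good conversion and reports a content mismatch for the broken one.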

** Affects: qemu (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: qemu (Ubuntu Trusty)
 Importance: Undecided
 Status: New

** Also affects: qemu (Ubuntu Trusty)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to qemu in Ubuntu.
https://bugs.launchpad.net/bugs/1367547

Title:
  qemu-img convert -O raw is broken in trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1367547/+subscriptions



[Bug 1367547] Re: qemu-img convert -O raw is broken in trusty

2014-09-09 Thread Paul Collins
I can't reproduce this in a freshly created trusty VM, so this may be
something strange with my machine. Marking Invalid for now.

** Changed in: qemu (Ubuntu)
   Status: New => Invalid

** Changed in: qemu (Ubuntu Trusty)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to qemu in Ubuntu.
https://bugs.launchpad.net/bugs/1367547

Title:
  qemu-img convert -O raw is broken in trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1367547/+subscriptions



[Bug 1353269] [NEW] celery worker crashes on startup when python-librabbitmq is used

2014-08-05 Thread Paul Collins
Public bug reported:

On a vanilla trusty install with rabbitmq-server and celeryd installed,
celery worker crashes as follows:

$ dpkg-query -W python-librabbitmq rabbitmq-server python-amqp librabbitmq1 celeryd
celeryd             3.1.6-1ubuntu1
librabbitmq1        0.4.1-1
python-amqp         1.3.3-1ubuntu1
python-librabbitmq  1.0.3-0ubuntu1
rabbitmq-server     3.2.4-1
$ celery worker
[2014-08-06 05:19:00,142: WARNING/MainProcess] 
/usr/lib/python2.7/dist-packages/celery/apps/worker.py:159: 
CDeprecationWarning: 
Starting from version 3.2 Celery will refuse to accept pickle by default.

The pickle serializer is a security concern as it may give attackers
the ability to execute any command.  It's important to secure
your broker from unauthorized access when using pickle, so we think
that enabling pickle should require a deliberate action and not be
the default choice.

If you depend on pickle then you should set a setting to disable this
warning and to be sure that everything will continue working
when you upgrade to Celery 3.2::

CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']

You must only enable the serializers that you will actually use.


  warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED))
 
 -------------- celery@juju-pjdc-lcy02-machine-4 v3.1.6 (Cipater)
---- **** -----
--- * ***  * -- Linux-3.13.0-32-generic-x86_64-with-Ubuntu-14.04-trusty
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> broker:      amqp://guest@localhost:5672//
- ** ---------- .> app:         default:0x7ff5367585d0 (.default.Loader)
- ** ---------- .> concurrency: 1 (prefork)
- *** --- * --- .> events:      OFF (enable -E to monitor this worker)
-- ******* ----
--- ***** ----- [queues]
 -------------- .> celery           exchange=celery(direct) key=celery


Segmentation fault (core dumped)

If I remove python-librabbitmq so that celery falls back to python-amqp,
celery worker starts up and works correctly.
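
For reference, kombu can also be pinned to the pure-Python transport without removing the package, by using an explicit transport scheme in the broker URL. A sketch, assuming the default guest broker shown in the banner above:

BROKER_URL = 'pyamqp://guest@localhost:5672//'

The bare amqp:// scheme prefers librabbitmq whenever it is importable.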

I have also discovered that a python-librabbitmq built from version 1.5.2
sources with its embedded copy of rabbitmq-c 0.5.0 also works correctly.

I have attached a backtrace but it is not too useful as python-
librabbitmq does not appear to make debug symbols available, and it
occupies the second and third frames in the stack when the crash
happens.

** Affects: python-librabbitmq (Ubuntu)
 Importance: Undecided
 Status: New

** Attachment added: celery-vs-python-librabbitmq-backtrace.txt
   
https://bugs.launchpad.net/bugs/1353269/+attachment/4170851/+files/celery-vs-python-librabbitmq-backtrace.txt

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to python-librabbitmq in Ubuntu.
https://bugs.launchpad.net/bugs/1353269

Title:
  celery worker crashes on startup when python-librabbitmq is used

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-librabbitmq/+bug/1353269/+subscriptions



[Bug 1028268] Re: Bareword dns domain makes facter return incorrect info

2013-12-17 Thread Paul Collins
** Also affects: facter (Ubuntu Precise)
   Importance: Undecided
   Status: New

** Changed in: facter (Ubuntu Precise)
   Status: New => Confirmed

** Changed in: facter (Ubuntu Precise)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to facter in Ubuntu.
https://bugs.launchpad.net/bugs/1028268

Title:
  Bareword dns domain makes facter return incorrect info

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/facter/+bug/1028268/+subscriptions



[Bug 995719] Re: process_name.rb removed in 2.7.11 but still provided by puppet-common

2013-01-24 Thread Paul Collins
** Also affects: puppet (Ubuntu Precise)
   Importance: Undecided
   Status: New

** Changed in: puppet (Ubuntu Precise)
   Status: New => Confirmed

** Changed in: puppet (Ubuntu Precise)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to puppet in Ubuntu.
https://bugs.launchpad.net/bugs/995719

Title:
  process_name.rb removed in 2.7.11 but still provided by puppet-common

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/puppet/+bug/995719/+subscriptions



[Bug 995719] Re: process_name.rb removed in 2.7.11 but still provided by puppet-common

2013-01-24 Thread Paul Collins
** Description changed:

  Hi,
  
  This is related to
  https://bugs.launchpad.net/ubuntu/+source/puppet/+bug/959597 where
  upstream has removed process_name.rb in 2.7.11 but it is still packaged
  and provided by puppet-common.
  
  [ This plugin frequently causes puppet to hang and requires manual
- sysadmin intervention to resolve. -- pjdc, 2011-05-10 ]
+ sysadmin intervention to resolve. -- pjdc, 2012-05-10 ]
  
  Source tarball for 2.7.10 from puppetlabs:
  
  [hloeung@darkon puppet-2.7.10]$ find . -type f -name '*process_name*'
  ./spec/unit/util/instrumentation/listeners/process_name_spec.rb
  ./lib/puppet/util/instrumentation/listeners/process_name.rb
  [hloeung@darkon puppet-2.7.10]$
  
  Source tarball for 2.7.11 from puppetlabs:
  
  [hloeung@darkon puppet-2.7.11]$ find . -type f -name '*process_name*'
  [hloeung@darkon puppet-2.7.11]$
  
  [hloeung@darkon puppet-2.7.10]$ dpkg-query -S /usr/lib/ruby/1.8/puppet/util/instrumentation/listeners/process_name.rb
  puppet-common: /usr/lib/ruby/1.8/puppet/util/instrumentation/listeners/process_name.rb
  
  [hloeung@darkon puppet-2.7.10]$ dpkg -l puppet-common
  Desired=Unknown/Install/Remove/Purge/Hold
  | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
  |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
  ||/ Name           Version          Description
  +++-==============-================-====================================
  ii  puppet-common  2.7.11-1ubuntu2  Centralized configuration management

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to puppet in Ubuntu.
https://bugs.launchpad.net/bugs/995719

Title:
  process_name.rb removed in 2.7.11 but still provided by puppet-common

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/puppet/+bug/995719/+subscriptions



[Bug 1104691] [NEW] migrating from nova-volume is non-obvious

2013-01-24 Thread Paul Collins
Public bug reported:

Today we upgraded an Openstack cloud from essex to folsom to grizzly.
Switching from nova-volume to cinder was somewhat non-trivial.  I'm not
sure how much help is reasonable to expect from the Ubuntu packaging,
but it seems that perhaps some of the steps involved
(http://wiki.openstack.org/MigrateToCinder is the guide we ended up
using) could be handled there.

** Affects: cloud-archive
 Importance: Undecided
 Status: New

** Affects: cinder (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonistack

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Description changed:

- Today we upgraded an Openstack cloud from essex to grizzly.  Switching
- from nova-volume to cinder was somewhat non-trivial.  I'm not sure how
- much help is reasonable to expect from the Ubuntu packaging, but it
- seems that perhaps some of the steps involved
+ Today we upgraded an Openstack cloud from essex to folsom to grizzly.
+ Switching from nova-volume to cinder was somewhat non-trivial.  I'm not
+ sure how much help is reasonable to expect from the Ubuntu packaging,
+ but it seems that perhaps some of the steps involved
  (http://wiki.openstack.org/MigrateToCinder the guide we ended up using)
  could be handled there.

** Description changed:

  Today we upgraded an Openstack cloud from essex to folsom to grizzly.
  Switching from nova-volume to cinder was somewhat non-trivial.  I'm not
  sure how much help is reasonable to expect from the Ubuntu packaging,
  but it seems that perhaps some of the steps involved
- (http://wiki.openstack.org/MigrateToCinder the guide we ended up using)
- could be handled there.
+ (http://wiki.openstack.org/MigrateToCinder is the guide we ended up
+ using) could be handled there.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to cinder in Ubuntu.
https://bugs.launchpad.net/bugs/1104691

Title:
  migrating from nova-volume is non-obvious

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1104691/+subscriptions



[Bug 1098320] Re: ceph: default crush rule does not suit multi-OSD deployments

2013-01-22 Thread Paul Collins
From my point of view, probably not very. We (= Canonical IS) are
running 12.04 LTS plus packages from the Ubuntu Cloud Archive. I don't
believe we'll do many more folsom+argonaut deployments before
grizzly+bobtail arrives, and in any case it's sufficiently well
documented internally that it's not a big problem for us.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1098320

Title:
  ceph: default crush rule does not suit multi-OSD deployments

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1098320/+subscriptions



[Bug 1098320] Re: ceph: default crush rule does not suit multi-OSD deployments

2013-01-21 Thread Paul Collins
This has been fixed on upstream's master branch by commit
c236a51a8040508ee893e4c64b206e40f9459a62 and cherry-picked to the
bobtail branch as 6008b1d8e4587d5a3aea60684b1d871401496942.  The change
does not seem to have been applied to argonaut.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1098320

Title:
  ceph: default crush rule does not suit multi-OSD deployments

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1098320/+subscriptions



[Bug 1098314] [NEW] pg_num inappropriately low on new pools

2013-01-10 Thread Paul Collins
Public bug reported:

Version: 0.48.2-0ubuntu2~cloud0

On a Ceph cluster with 18 OSDs, new object pools are being created with
a pg_num of 8.  Upstream recommends that there be more like 100 or so
PGs per OSD: http://article.gmane.org/gmane.comp.file-systems.ceph.devel/10242

I've worked around this by removing and recreating the pools with a
higher pg_num before we started using the cluster, but since we aim for
fully automated deployment (using Juju and MaaS) this is suboptimal.
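
The workaround was essentially the following for each pool; the pool name and pg_num here are illustrative rather than copied from the cluster:

$ ceph osd pool delete data
$ ceph osd pool create data 1800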

** Affects: cloud-archive
 Importance: Undecided
 Status: New

** Affects: ceph (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonistack

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1098314

Title:
  pg_num inappropriately low on new pools

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1098314/+subscriptions



[Bug 1098320] [NEW] ceph: default crush rule does not suit multi-OSD deployments

2013-01-10 Thread Paul Collins
Public bug reported:

Version: 0.48.2-0ubuntu2~cloud0

Our Ceph deployments typically involve multiple OSDs per host with no
disk redundancy. However, the default crush rule appears to distribute
by OSD, not by host, which I believe will not prevent replicas from
landing on the same host.

I've been working around this by updating the crush rules as follows and
installing the resulting crushmap in the cluster, but since we aim for
fully automated deployment (using Juju and MaaS) this is suboptimal.

--- crushmap.txt        2013-01-10 20:33:21.265809301 +
+++ crushmap.new        2013-01-10 20:32:49.496745778 +
@@ -104,7 +104,7 @@
min_size 1
max_size 10
step take default
-   step choose firstn 0 type osd
+   step chooseleaf firstn 0 type host
step emit
 }
 rule metadata {
@@ -113,7 +113,7 @@
min_size 1
max_size 10
step take default
-   step choose firstn 0 type osd
+   step chooseleaf firstn 0 type host
step emit
 }
 rule rbd {
@@ -122,7 +122,7 @@
min_size 1
max_size 10
step take default
-   step choose firstn 0 type osd
+   step chooseleaf firstn 0 type host
step emit
 }
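
For completeness, a crushmap edit like the one above is installed with the usual round trip; file names here are illustrative:

$ ceph osd getcrushmap -o crushmap.bin
$ crushtool -d crushmap.bin -o crushmap.txt
$ editor crushmap.txt        # apply the changes above, save as crushmap.new
$ crushtool -c crushmap.new -o crushmap.new.bin
$ ceph osd setcrushmap -i crushmap.new.bin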

** Affects: cloud-archive
 Importance: Undecided
 Status: New

** Affects: ceph (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonistack

** Also affects: ceph (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1098320

Title:
  ceph: default crush rule does not suit multi-OSD deployments

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1098320/+subscriptions



[Bug 1065883] Re: ceph rbd username and secret should be configured in nova-compute, not passed from nova-volume/cinder

2012-12-18 Thread Paul Collins
Is there an essex variant of this patch available?

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1065883

Title:
  ceph rbd username and secret should be configured in nova-compute, not
  passed from nova-volume/cinder

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1065883/+subscriptions



[Bug 1091939] [NEW] nova-network applies too liberal a SNAT rule

2012-12-18 Thread Paul Collins
Public bug reported:

Version: 2012.1.3+stable-20120827-4d2a4afe-0ubuntu1

We recently set up a new Nova cluster on precise + essex with Juju and
MaaS, and ran into a problem where instances could not communicate with
the swift-proxy node on the MaaS network.  This turned out to be due to
nova-network installing a SNAT rule for the cluster's public IP that
applied to all network traffic, not just that traffic destined to exit
towards the Internet.
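
As an illustration only (reconstructed from the description above, not copied from this cluster), the generated rule was of roughly the form

-A nova-network-snat -s 10.0.0.0/8 -j SNAT --to-source PUBLIC_IP

i.e. it matched on the source range alone, so traffic to other hosts on the MaaS network was rewritten along with traffic bound for the Internet.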

This problem has been fixed upstream in
https://github.com/openstack/nova/commit/959c93f6d3572a189fc3fe73f1811c12323db857

Please consider applying this change to Ubuntu 12.04 LTS in an SRU.

** Affects: nova (Ubuntu)
 Importance: High
 Status: New

** Affects: nova (Ubuntu Precise)
 Importance: High
 Status: New


** Tags: canonistack

** Also affects: nova (Ubuntu Precise)
   Importance: Undecided
   Status: New

** Changed in: nova (Ubuntu Precise)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1091939

Title:
  nova-network applies too liberal a SNAT rule

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1091939/+subscriptions



[Bug 1026402] Re: mon cluster (no cephx) fails to start unless empty keyring files are created

2012-08-08 Thread Paul Collins
Upstream has addressed this problem by ensuring that mkcephfs always
creates keyrings so that cephx can easily be enabled later.

http://mid.gmane.org/alpine.deb.2.00.1208081405110.3...@cobra.newdream.net

https://github.com/ceph/ceph/commit/96b1a496cdfda34a5efdb6686becf0d2e7e3a1c0

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1026402

Title:
  mon cluster (no cephx) fails to start unless empty keyring files are
  created

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1026402/+subscriptions



[Bug 1032405] [NEW] RBDDriver does not support volume creation from snapshots

2012-08-02 Thread Paul Collins
Public bug reported:

I've been doing a little work with Nova and Ceph. As part of this work
I've been testing snapshots. I've discovered that RBDDriver does not
implement create_volume_from_snapshot(). Attempts to create volumes from
snapshots instead fall through to VolumeDriver's LVM-based
implementation and then fail.

Attached is a patch against essex that implements this functionality. I
have tested it lightly with a Ceph cluster running 0.48, the stable
argonaut release. (Lightly = successfully created a volume from a
snapshot, and then removed it.)

** Affects: nova (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonistack

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1032405

Title:
  RBDDriver does not support volume creation from snapshots

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1032405/+subscriptions



[Bug 1032405] Re: RBDDriver does not support volume creation from snapshots

2012-08-02 Thread Paul Collins
** Patch added: implement RBDDriver.create_volume_from_snapshot()
   
https://bugs.launchpad.net/bugs/1032405/+attachment/3246430/+files/rbd-implement-create-volume-from-snapshot.patch

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1032405

Title:
  RBDDriver does not support volume creation from snapshots

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1032405/+subscriptions



[Bug 1032405] Re: RBDDriver does not support volume creation from snapshots

2012-08-02 Thread Paul Collins
** Also affects: nova (Ubuntu Precise)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1032405

Title:
  RBDDriver does not support volume creation from snapshots

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1032405/+subscriptions



[Bug 1026402] Re: mon cluster (no cephx) fails to start unless empty keyring files are created

2012-07-26 Thread Paul Collins
I took a look at this last night and wrote a patch, which seems to work
on my test cluster.

http://article.gmane.org/gmane.comp.file-systems.ceph.devel/8170

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1026402

Title:
  mon cluster (no cephx) fails to start unless empty keyring files are
  created

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1026402/+subscriptions



[Bug 1026402] Re: mon cluster (no cephx) fails to start unless empty keyring files are created

2012-07-24 Thread Paul Collins
Hi James,

Not until very recently — I've just posted my report to ceph-devel.
Sorry for the delay!

Paul

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1026402

Title:
  mon cluster (no cephx) fails to start unless empty keyring files are
  created

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1026402/+subscriptions



[Bug 1028718] [NEW] nova volumes are inappropriately clingy for ceph

2012-07-24 Thread Paul Collins
Public bug reported:

I've been doing a little work with nova-volume and ceph/RBD.  In order
to gain some fault-tolerance, I plan to run a nova-volume on each
compute node.  However, a problem arises, because a given nova-volume
host only wants to deal with requests for volumes that it created.

This makes perfect sense in a world where nova-volume hosts create
volumes in LVM and export them over iSCSI.  It makes less sense in a
Ceph world, since the volumes live in the ceph cluster, and their
metadata live in the nova database.  But if the wrong nova-volume goes
away, some of my volumes become arbitrarily unusable.

I've hit upon a workaround that seems to work so far, although I'm not
sure if it's supposed to.  I am running each nova-volume on the various
hosts with an identical --host flag.  When running in this setup, rapid
volume creation, deletion and attachment requests are splayed nicely
across the nova-volume instances.
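
Concretely, the workaround amounts to putting the same, otherwise arbitrary, value in each volume node's nova.conf (the value here is illustrative):

--host=ceph-volume

so that any of the nova-volume processes can service requests for any volume.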

(A less brutal hack might be to teach nova-volume to call into the
volume driver to check if it has its own notion of what the host flag
ought to be -- the RBD driver, for example, could construct a string
such as ceph:67670443-07ad-4ce3-bdb8-75e9a14562f9:rbd by probing the
Ceph cluster for its fsid, which ought to be unique, and then appending
the name of the RADOS pool in which it is creating RBDs.)

** Affects: nova (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonistack

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1028718

Title:
  nova volumes are inappropriately clingy for ceph

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1028718/+subscriptions



[Bug 1026402] Re: mon cluster (no cephx) fails to start unless empty keyring files are created

2012-07-19 Thread Paul Collins
Huh, interesting.  Your log has this line

2012-07-19 10:16:10.235911 7f9e20d22780 1 mon.a@-1(probing) e1 copying mon. key from old db to external keyring

and I'm not sure what it means. Maybe it's plucking a key from a previous
cephx-enabled install from an undisclosed location?

Anyway, it certainly sounds like talking to upstream is the next logical
step.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1026402

Title:
  mon cluster (no cephx) fails to start unless empty keyring files are
  created

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1026402/+subscriptions



[Bug 1026402] [NEW] mon cluster (no cephx) fails to start unless empty keyring files are created

2012-07-18 Thread Paul Collins
Public bug reported:

I'm running a 3-node test cluster on 12.04, without cephx
authentication. I started out running 0.47.2 (an impatiently-smashed-
together backport based on the upstream sources) and then upgraded to
0.48-1ubuntu1 (the packages from quantal rebuilt on precise). So my
situation may be a bit special.

When I upgraded from 0.47.2 to 0.48, I didn't notice that my first
monitor daemon hadn't restarted properly.  I rolled through the upgrade
and ended up with a system where ceph -s would hang, being unable to
find a monitor willing to accept responsibility for the cluster.  I
splashed around rather a lot turning on debug logging. The monitors
tended to get as far as

2012-07-17 02:38:52.254856 7f3c3b862780 -1 auth: error reading file: /srv/ceph/mon.leningradskaya/keyring: can't open /srv/ceph/mon.leningradskaya/keyring: (2) No such file or directory
2012-07-17 02:38:52.254874 7f3c3b862780 -1 mon.leningradskaya@-1(probing) e1 unable to load initial keyring /etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin
2012-07-17 02:38:53.006423 7f3c3b860700  1 -- 10.55.200.21:6789/0 >> :/0 pipe(0x7f3c2c0008c0 sd=17 pgs=0 cs=0 l=0).accept sd=17
2012-07-17 02:38:53.231137 7f3c386a1700  1 -- 10.55.200.21:6789/0 >> :/0 pipe(0x7f3c2c000f60 sd=18 pgs=0 cs=0 l=0).accept sd=18
2012-07-17 02:38:53.308857 7f3c3849f700  1 -- 10.55.200.21:6789/0 >> :/0 pipe(0x7f3c2c0015c0 sd=19 pgs=0 cs=0 l=0).accept sd=19
2012-07-17 02:38:53.668990 7f3c3829d700  1 -- 10.55.200.21:6789/0 >> :/0 pipe(0x7f3c2c001c20 sd=20 pgs=0 cs=0 l=0).accept sd=20

with lines like the last four streaming endlessly.  Eventually I tried
creating /srv/ceph/mon.leningradskaya/keyring and the monitor daemon
started right up. When I applied the same change to the rest of the
cluster, I was back in business. Here's a log snippet from a successful
0.48 monitor daemon startup:

2012-07-17 02:47:03.036077 7f5f2a66f780  2 auth: KeyRing::load: loaded key file /srv/ceph/mon.leningradskaya/keyring
2012-07-17 02:47:03.036283 7f5f2a66f780 10 mon.leningradskaya@-1(probing) e1 bootstrap
2012-07-17 02:47:03.036319 7f5f2a66f780 10 mon.leningradskaya@-1(probing) e1 unregister_cluster_logger - not registered
2012-07-17 02:47:03.036346 7f5f2a66f780 10 mon.leningradskaya@-1(probing) e1 cancel_probe_timeout (none scheduled)
2012-07-17 02:47:03.036383 7f5f2a66f780  0 mon.leningradskaya@-1(probing) e1  my rank is now 1 (was -1)

continuing to log more besides as the cluster came back up.
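
For the record, an empty file is all that is needed, so the per-monitor workaround was nothing more than the following (path as in this cluster's layout):

$ sudo touch /srv/ceph/mon.leningradskaya/keyring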

** Affects: ceph (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: ceph (Ubuntu Quantal)
 Importance: Undecided
 Status: New


** Tags: canonistack

** Also affects: ceph (Ubuntu Quantal)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1026402

Title:
  mon cluster (no cephx) fails to start unless empty keyring files are
  created

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1026402/+subscriptions



[Bug 996233] Re: nova and python-novaclient disagree on volumes API URLs

2012-07-12 Thread Paul Collins
Apologies for the delay in replying. We recently completed our migration
to Keystone, and now nova volume-list works as expected.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/996233

Title:
  nova and python-novaclient disagree on  volumes API URLs

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/996233/+subscriptions



[Bug 996233] Re: nova and python-novaclient disagree on volumes API URLs

2012-06-01 Thread Paul Collins
This Openstack installation is using deprecated auth, though, not
keystone. The following flags are in nova.conf:

--use_deprecated_auth
--auth_strategy=deprecated

I've only used Ubuntu packages on this machine — no devstack, no pip, no
setup.py.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/996233

Title:
  nova and python-novaclient disagree on  volumes API URLs

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/996233/+subscriptions



[Bug 996233] Re: nova and python-novaclient disagree on volumes API URLs

2012-05-31 Thread Paul Collins
Looking at William's trace, I see some differences with the traces I
get. Not posting a full one in the first place was foolish of me.  Here
it is now.

$ nova --debug volume-list
connect: (XXX.XXX.XXX.XXX, 8774)
send: 'GET /v1.1 HTTP/1.1\r\nHost: XXX.XXX.XXX.XXX:8774\r\nx-auth-project-id: 
pjdc_project\r\naccept-encoding: gzip, deflate\r\nx-auth-user: 
pjdc\r\nuser-agent: python-novaclient\r\nx-auth-key: 
----\r\naccept: application/json\r\n\r\n'
reply: 'HTTP/1.1 204 No Content\r\n'
header: Content-Length: 0
header: X-Auth-Token: 
header: X-Server-Management-Url: http://XXX.XXX.XXX.XXX:8774/v1.1/pjdc_project
header: Content-Type: text/plain; charset=UTF-8
header: Date: Thu, 31 May 2012 21:19:01 GMT
send: 'GET /v1.1/pjdc_project/volumes/detail HTTP/1.1\r\nHost: 
XXX.XXX.XXX.XXX:8774\r\nx-auth-project-id: pjdc_project\r\nx-auth-token: 
\r\naccept-encoding: gzip, 
deflate\r\naccept: application/json\r\nuser-agent: python-novaclient\r\n\r\n'
reply: 'HTTP/1.1 404 Not Found\r\n'
header: Content-Length: 52
header: Content-Type: text/plain; charset=UTF-8
header: Date: Thu, 31 May 2012 21:19:02 GMT
DEBUG (shell:416) n/a (HTTP 404)
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 413, in main
    OpenStackComputeShell().main(sys.argv[1:])
  File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 364, in main
    args.func(self.cs, args)
  File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/shell.py", line 858, in do_volume_list
    volumes = cs.volumes.list()
  File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/volumes.py", line 79, in list
    return self._list("/volumes/detail", "volumes")
  File "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 71, in _list
    resp, body = self.api.client.get(url)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 136, in get
    return self._cs_request(url, 'GET', **kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 124, in _cs_request
    **kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 107, in request
    raise exceptions.from_response(resp, body)
NotFound: n/a (HTTP 404)
ERROR: n/a (HTTP 404)

Whereas in William's trace the token is obtained with POST
/v2.0/tokens and the list operation is performed with GET
/v1/5c9e830827e0412b92da25b128f5c63d/volumes/detail.

In the credentials packets we distribute to our Openstack users, we have
a file containing environment variables, which includes:

export NOVA_URL=http://XXX.XXX.XXX.XXX:8774/v1.1/;
export NOVA_VERSION=1.1

However, when I set things up as follows:

export NOVA_URL=http://XXX.XXX.XXX.XXX:8774/v2.0/;
export NOVA_VERSION=2

I get:

$ nova --debug volume-list
connect: (XXX.XXX.XXX.XXX, 8774)
send: 'POST /v2.0/tokens HTTP/1.1\r\nHost: XXX.XXX.XXX.XXX:8774\r\nContent-Length: 137\r\ncontent-type: application/json\r\naccept-encoding: gzip, deflate\r\naccept: application/json\r\nuser-agent: python-novaclient\r\n\r\n{"auth": {"tenantName": "pjdc_project", "passwordCredentials": {"username": "pjdc", "password": "----"}}}'
reply: 'HTTP/1.1 400 Bad Request\r\n'
header: Content-Length: 141
header: Content-Type: application/json; charset=UTF-8
header: Date: Thu, 31 May 2012 21:26:01 GMT
DEBUG (shell:416) The server could not comply with the request since it is 
either malformed or otherwise incorrect. (HTTP 400)
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 413, in main
    OpenStackComputeShell().main(sys.argv[1:])
  File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 358, in main
    self.cs.authenticate()
  File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/client.py", line 106, in authenticate
    self.client.authenticate()
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 229, in authenticate
    auth_url = self._v2_auth(auth_url)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 284, in _v2_auth
    self._authenticate(url, body)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 305, in _authenticate
    resp, body = self.request(token_url, "POST", body=body)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 107, in request
    raise exceptions.from_response(resp, body)
BadRequest: The server could not comply with the request since it is either malformed or otherwise incorrect. (HTTP 400)
ERROR: The server could not comply with the request since it is either malformed or otherwise incorrect. (HTTP 400)

Which seems to leave us with

=== the v1.1 issue ===

When using the v1.1 API, novaclient is not able to query Openstack for a
list of volumes. Is this supposed to be supported?

=== the v2 issue ===

The Openstack installation I'm testing against doesn't like the v2 API.
In nova-api.log I find:

2012-05-31 21:26:01 

[Bug 1001088] [NEW] iSCSI targets are not restored following a reboot

2012-05-17 Thread Paul Collins
Public bug reported:

When using nova-volume (with the flag --iscsi_helper=tgtadm, which is
the default value in Ubuntu) if the host is rebooted, the iSCSI targets
are not recreated.  This means that the compute hosts are unable to
reëstablish their iSCSI sessions, and volumes that were attached to
instances remain inaccessible.
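
A possible stopgap (not from the original report; paths are the stock tgt packaging defaults) is to dump the live target definitions somewhere the tgt init script will reload them at boot:

$ sudo tgt-admin --dump | sudo tee /etc/tgt/conf.d/nova-volume.conf

This has to be redone whenever volumes are created or deleted, so it only papers over the underlying problem.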

** Affects: nova (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonistack

** Summary changed:

- iSCSi targets are not restored following a reboot
+ iSCSI targets are not restored following a reboot

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1001088

Title:
  iSCSI targets are not restored following a reboot

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1001088/+subscriptions



[Bug 995719] Re: process_name.rb removed in 2.7.11 but still provided by puppet-common

2012-05-10 Thread Paul Collins
** Description changed:

  Hi,
  
  This is related to
  https://bugs.launchpad.net/ubuntu/+source/puppet/+bug/959597 where
  upstream has removed process_name.rb in 2.7.11 but it is still packaged
  and provided by puppet-common.
+ 
+ [ This plugin frequently causes puppet to hang and requires manual
+ sysadmin intervention to resolve. -- pjdc, 2011-05-10 ]
  
  Source tarball for 2.7.10 from puppetlabs:
  
  [hloeung@darkon puppet-2.7.10]$ find . -type f -name '*process_name*'
  ./spec/unit/util/instrumentation/listeners/process_name_spec.rb
  ./lib/puppet/util/instrumentation/listeners/process_name.rb
  [hloeung@darkon puppet-2.7.10]$
  
- 
  Source tarball for 2.7.11 from puppetlabs:
  
  [hloeung@darkon puppet-2.7.11]$ find . -type f -name '*process_name*'
  [hloeung@darkon puppet-2.7.11]$
  
- 
- [hloeung@darkon puppet-2.7.10]$ dpkg-query -S 
/usr/lib/ruby/1.8/puppet/util/instrumentation/listeners/process_name.rb 
   
+ [hloeung@darkon puppet-2.7.10]$ dpkg-query -S 
/usr/lib/ruby/1.8/puppet/util/instrumentation/listeners/process_name.rb
  puppet-common: 
/usr/lib/ruby/1.8/puppet/util/instrumentation/listeners/process_name.rb
  
  [hloeung@darkon puppet-2.7.10]$ dpkg -l puppet-common
- Desired=Unknown/Install/Remove/Purge/Hold 

   
- | 
Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend

   
- |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)

   
- ||/ Name   Version
Description 
   
- 
+++-==-==-
   
+ Desired=Unknown/Install/Remove/Purge/Hold
+ | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
+ |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
+ ||/ Name   Version
Description
+ 
+++-==-==-
  ii  puppet-common  2.7.11-1ubuntu2
Centralized configuration management

** Changed in: puppet (Ubuntu)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to puppet in Ubuntu.
https://bugs.launchpad.net/bugs/995719

Title:
  process_name.rb removed in 2.7.11 but still provided by puppet-common

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/puppet/+bug/995719/+subscriptions



[Bug 996233] [NEW] nova and python-novaclient disagree on volumes API URLs

2012-05-07 Thread Paul Collins
Public bug reported:

I noticed the following (Ubuntu 12.04 LTS on the Nova cluster, Ubuntu
12.04 LTS on my machine):

$ nova volume-list
ERROR: n/a (HTTP 404)

Based on the output of nova --debug volume-list, it looks like python-
novaclient is expecting to be able to do GET
/v1.1/pjdc_project/volumes/detail.  When I manually construct a request
as follows, using GET /v1.1/pjdc_project/os-volumes/detail, a
sensible-looking chunk of JSON is returned:

$ nc XXX.XXX.XXX.XXX    
   
GET /v1.1/pjdc_project/os-volumes/detail HTTP/1.1
Host: XXX.XXX.XXX.XXX:
x-auth-project-id: pjdc_project
x-auth-token: 
accept-encoding: gzip, deflate
accept: application/json
user-agent: python-novaclient

HTTP/1.1 200 OK
X-Compute-Request-Id: req-----
Content-Type: application/json
Content-Length: 345
Date: Mon, 07 May 2012 22:32:58 GMT

{"volumes": [{"status": "in-use", "displayDescription": null,
"availabilityZone": "nova", "displayName": null, "attachments":
[{"device": "/dev/vdc", "serverId": "----",
"id": 51, "volumeId": 51}], "volumeType": null,
"snapshotId": null, "size": 25, "id": 51, "createdAt": "2012-05-04
03:25:04", "metadata": {}}]}

It would be helpful if this disconnect could be resolved and a fix
targeted to Precise.

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: nova (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: python-novaclient (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonistack

** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: python-novaclient (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/996233

Title:
  nova and python-novaclient disagree on  volumes API URLs

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/996233/+subscriptions



[Bug 995719] Re: process_name.rb removed in 2.7.11 but still provided by puppet-common

2012-05-06 Thread Paul Collins
Looks like debian/patches/debian-changes is adding the file back.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to puppet in Ubuntu.
https://bugs.launchpad.net/bugs/995719

Title:
  process_name.rb removed in 2.7.11 but still provided by puppet-common

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/puppet/+bug/995719/+subscriptions



[Bug 955510] [NEW] failed attach leaves stale iSCSI session on compute host

2012-03-14 Thread Paul Collins
Public bug reported:

Version: 2012.1~e4~20120217.12709-0ubuntu1

I attempted to attach an iSCSI volume to one of my instances.  This
failed because I specified /dev/vdb as the device, which was in use.
Any further attempts to attach the volume then also failed.  When I
inspected nova-compute.log, I discovered the following at the end
(full log attached):

(nova.rpc.common): TRACE: Stdout: 'Logging in to [iface: default, target: 
iqn.2010-10.org.openstack:volume-0009, portal: YY.YY.YY.15,3260]\n'
(nova.rpc.common): TRACE: Stderr: 'iscsiadm: Could not login to [iface: 
default, target: iqn.2010-10.org.openstack:volume-0009, portal: 
YY.YY.YY.15,3260]: \niscsiadm: initiator reported error (15 - already exists)\n'

I guessed from this that the previous failed attach had left the iSCSI
session up and that nova-compute wasn't able to deal with this.  I logged
into the compute node, removed it with iscsiadm --mode node
--targetname iqn.2010-10.org.openstack:volume-0009 --portal
YY.YY.YY.15:3260 --logout and was then able to attach the volume
to my instance.

** Affects: nova (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonistack

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/955510

Title:
  failed attach leaves stale iSCSI session on compute host

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/955510/+subscriptions



[Bug 955510] Re: failed attach leaves stale iSCSI session on compute host

2012-03-14 Thread Paul Collins
** Attachment added: failed-attach-stale-session.log
   
https://bugs.launchpad.net/bugs/955510/+attachment/2872500/+files/failed-attach-stale-session.log

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/955510

Title:
  failed attach leaves stale iSCSI session on compute host

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/955510/+subscriptions



[Bug 955510] Re: failed attach leaves stale iSCSI session on compute host

2012-03-14 Thread Paul Collins
I can no longer reproduce the problem with
2012.1~rc1~20120309.13261-0ubuntu1, so I reckon this is indeed fixed.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/955510

Title:
  failed attach leaves stale iSCSI session on compute host

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/955510/+subscriptions



[Bug 954692] [NEW] cannot detach volume from terminated instance

2012-03-13 Thread Paul Collins
Public bug reported:

Version: 2012.1~e4~20120217.12709-0ubuntu1

I attached a volume to an instance via iSCSI, shut down the instance,
and then attempted to detach the volume.  The result is the following in
nova-compute.log, and the volume remains in-use.  I also tried euca-
detach-volume --force with the same result.

VOLUME  vol-0009        10              nova    in-use  (pjdc_project, zucchini, i-1a05[ankaa], /dev/vdc)      2012-03-13T20:49:28Z

2012-03-14 03:04:08,486 DEBUG nova.rpc.common [-] received {u'_context_roles': 
[u'cloudadmin', u'netadmin', u'projectmanager', u'admin'], 
u'_context_request_id': u'req-da8a3819-70b2-4d0b-a9fb-0dfefa85f9f3', 
u'_context_read_deleted': u'no', u'args': {u'instance_uuid': 
u'f7620968-686d-4a3d-a1b3-2d0881e1656d', u'volume_id': 9}, 
u'_context_auth_token': None, u'_context_strategy': u'noauth', 
u'_context_is_admin': True, u'_context_project_id': u'pjdc_project', 
u'_context_timestamp': u'2012-03-14T03:03:59.303517', u'_context_user_id': 
u'pjdc', u'method': u'detach_volume', u'_context_remote_address': 
u'XXX.XXX.XXX.XXX'} from (pid=8590) _safe_log 
/usr/lib/python2.7/dist-packages/nova/rpc/common.py:144
2012-03-14 03:04:08,487 DEBUG nova.rpc.common 
[req-da8a3819-70b2-4d0b-a9fb-0dfefa85f9f3 pjdc pjdc_project] unpacked context: 
{'request_id': u'req-da8a3819-70b2-4d0b-a9fb-0dfefa85f9f3', 'user_id': u'pjdc', 
'roles': [u'cloudadmin', u'netadmin', u'projectmanager', u'admin'], 
'timestamp': '2012-03-14T03:03:59.303517', 'is_admin': True, 'auth_token': 
None, 'project_id': u'pjdc_project', 'remote_address': u'XXX.XXX.XXX.XXX', 
'read_deleted': u'no', 'strategy': u'noauth'} from (pid=8590) unpack_context 
/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:186
2012-03-14 03:04:08,515 INFO nova.compute.manager 
[req-da8a3819-70b2-4d0b-a9fb-0dfefa85f9f3 pjdc pjdc_project] 
check_instance_lock: decorating: |function detach_volume at 0x1f591b8|
2012-03-14 03:04:08,516 INFO nova.compute.manager 
[req-da8a3819-70b2-4d0b-a9fb-0dfefa85f9f3 pjdc pjdc_project] 
check_instance_lock: arguments: |nova.compute.manager.ComputeManager object at 
0x1c6b1d0| |nova.rpc.amqp.RpcContext object at 0x4987a50| 
|f7620968-686d-4a3d-a1b3-2d0881e1656d|
2012-03-14 03:04:08,516 DEBUG nova.compute.manager 
[req-da8a3819-70b2-4d0b-a9fb-0dfefa85f9f3 pjdc pjdc_project] instance 
f7620968-686d-4a3d-a1b3-2d0881e1656d: getting locked state from (pid=8590) 
get_lock /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1508
2012-03-14 03:04:08,668 ERROR nova.rpc.common [-] Exception during message handling
(nova.rpc.common): TRACE: Traceback (most recent call last):
(nova.rpc.common): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 250, in _process_data
(nova.rpc.common): TRACE:     rval = node_func(context=ctxt, **node_args)
(nova.rpc.common): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 112, in wrapped
(nova.rpc.common): TRACE:     return f(*args, **kw)
(nova.rpc.common): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 139, in decorated_function
(nova.rpc.common): TRACE:     locked = self.get_lock(context, instance_uuid)
(nova.rpc.common): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 112, in wrapped
(nova.rpc.common): TRACE:     return f(*args, **kw)
(nova.rpc.common): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 168, in decorated_function
(nova.rpc.common): TRACE:     return function(self, context, instance_uuid, *args, **kwargs)
(nova.rpc.common): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1509, in get_lock
(nova.rpc.common): TRACE:     instance_ref = self.db.instance_get_by_uuid(context, instance_uuid)
(nova.rpc.common): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/db/api.py", line 586, in instance_get_by_uuid
(nova.rpc.common): TRACE:     return IMPL.instance_get_by_uuid(context, uuid)
(nova.rpc.common): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 119, in wrapper
(nova.rpc.common): TRACE:     return f(*args, **kwargs)
(nova.rpc.common): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 1452, in instance_get_by_uuid
(nova.rpc.common): TRACE:     raise exception.InstanceNotFound(instance_id=uuid)
(nova.rpc.common): TRACE: InstanceNotFound: Instance f7620968-686d-4a3d-a1b3-2d0881e1656d could not be found.
(nova.rpc.common): TRACE:

** Affects: nova (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonistack

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/954692

Title:
  cannot detach volume from terminated instance

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/954692/+subscriptions
