[Bug 1557345] Re: xenial juju 1.25.3 unable to deploy to lxc containers

2016-03-15 Thread Brad Marshall
Note these are freshly bootstrapped clouds, as per an irc conversation
with alexisb and anastasiamac_.

I took a working juju environment deploying to canonistack, changed only
the default-series to xenial, ran juju bootstrap and then juju deploy
local:xenial/ubuntu --to lxc:0, and got the error as above.
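For anyone reproducing this, the full sequence is short.  A sketch only:
"canonistack" stands in for whatever environment name is configured in
~/.juju/environments.yaml, with default-series set to xenial there:

  $ juju switch canonistack
  $ juju bootstrap
  $ juju deploy local:xenial/ubuntu --to lxc:0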

I also confirmed that the bootstrap node had lxc installed when I first
logged into it, before trying to deploy an LXC container to it:

ubuntu@juju-bradm-lcy01-machine-0:~$ dpkg --list | grep lxc
ii  liblxc1        2.0.0~rc10-0ubuntu2  amd64  Linux Containers userspace tools (library)
ii  lxc            2.0.0~rc10-0ubuntu2  all    Transitional package for lxc1
ii  lxc-common     2.0.0~rc10-0ubuntu2  amd64  Linux Containers userspace tools (common tools)
ii  lxc-templates  2.0.0~rc10-0ubuntu2  amd64  Linux Containers userspace tools (templates)
ii  lxc1           2.0.0~rc10-0ubuntu2  amd64  Linux Containers userspace tools
ii  lxcfs          2.0.0~rc5-0ubuntu1   amd64  FUSE based filesystem for LXC
ii  python3-lxc    2.0.0~rc10-0ubuntu2  amd64  Linux Containers userspace tools (Python 3.x bindings)



[Bug 1557345] Re: xenial juju 1.25.3 unable to deploy to lxc containers

2016-03-15 Thread Brad Marshall
** Tags added: canonical-bootstack



[Bug 1557345] [NEW] xenial juju 1.25.3 unable to deploy to lxc containers

2016-03-15 Thread Brad Marshall
Public bug reported:

There appears to be some issue with deploying to lxc containers using
juju 1.25.3 on Xenial.

When deploying with xenial to canonistack-lcy02:

bradm@serenity:~/src/juju$ juju deploy local:xenial/ubuntu --to lxc:0
Added charm "local:xenial/ubuntu-2" to the environment.
ERROR adding new machine to host unit "ubuntu/1": cannot add a new
machine: machine 0 cannot host lxc containers

When deploying with trusty to canonistack-lcy02 it works; the only change
I made was to switch the default-series from xenial to trusty:

bradm@serenity:~/src/juju$ juju deploy local:trusty/ubuntu --to lxc:0
Added charm "local:trusty/ubuntu-1" to the environment.

Versions used to deploy:
$ juju --version
1.25.3-xenial-amd64

$ lsb_release -rd
Description:    Ubuntu Xenial Xerus (development branch)
Release:        16.04

Please let me know if you need any further information.

** Affects: juju-core (Ubuntu)
 Importance: Undecided
 Status: New



[Bug 1524635] [NEW] haproxy syslog configuration causes double logging

2015-12-09 Thread Brad Marshall
Public bug reported:

The rsyslog configuration shipped by the haproxy package causes double
logging to occur.

Steps to Reproduce:
1) Install haproxy via whatever normal means (apt-get, etc.)
2) Configure it to listen on at least one port, even just the stats port
3) Visit the URL configured

You'll see logs generated in both /var/log/syslog (via
/etc/rsyslog.d/50-default.conf) and /var/log/haproxy.log (via
/etc/rsyslog.d/haproxy.conf).

Steps to fix:
1) mv /etc/rsyslog.d/haproxy.conf /etc/rsyslog.d/49-haproxy.conf  # any
   number below 50 works, so the file is read before 50-default.conf
2) Restart rsyslog.
3) Access the provided service.

This will cause the entries to be written out to only
/var/log/haproxy.log.
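Put together, the fix looks like this (a sketch; it assumes the shipped
haproxy.conf snippet ends with a discard rule such as '& ~', so that
once it sorts before 50-default.conf the messages never reach
/var/log/syslog, and 8080 stands in for whatever port you configured):

  $ sudo mv /etc/rsyslog.d/haproxy.conf /etc/rsyslog.d/49-haproxy.conf
  $ sudo service rsyslog restart
  $ curl -s http://localhost:8080/ > /dev/null  # generate some traffic
  $ tail /var/log/haproxy.log /var/log/syslog   # entries only in haproxy.log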

The testing was done on an Ubuntu 14.04 server (trusty) with haproxy
1.4.24-2ubuntu0.3 installed:

$ lsb_release -rd
Description:    Ubuntu 14.04.3 LTS
Release:        14.04

$ dpkg-query -W haproxy
haproxy 1.4.24-2ubuntu0.3

Please let me know if you have any further questions.

** Affects: haproxy (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack



[Bug 1515463] Re: Broken juju LXC deployments

2015-11-26 Thread Brad Marshall
This does indeed appear to work correctly; I've deployed a container
using juju:

ubuntu@apollo:~$ dpkg-query -W lxc
lxc 1.0.8-0ubuntu0.3

ubuntu@apollo:~$ sudo lxc-ls --fancy
NAME                      STATE    IPV4       IPV6  AUTOSTART
-------------------------------------------------------------
juju-machine-0-lxc-0      RUNNING  x.y.z.171  -     YES
juju-trusty-lxc-template  STOPPED  -          -     NO

ubuntu@apollo:~$ sudo lxc-attach -n juju-machine-0-lxc-0
root@juju-machine-0-lxc-0:~#

Thanks!



[Bug 1515463] Re: Broken juju LXC deployments

2015-11-11 Thread Brad Marshall
FWIW, and a totally expected result: I just downgraded the LXC packages
on these hosts and redeployed, and things came up OK.

$ dpkg-query -W lxc
lxc 1.0.7-0ubuntu0.10

I don't think this changes anything, but just putting it here for
completeness.



[Bug 1515463] [NEW] Broken juju LXC deployments

2015-11-11 Thread Brad Marshall
Public bug reported:

I've just tried using juju to deploy to a container with trusty-proposed
repo enabled, and I get an error message about 'failed to retrieve the
template to clone'.  The underlying error appears to be:

  tar --numeric-owner -xpJf
  /var/cache/lxc/cloud-trusty/ubuntu-14.04-server-cloudimg-amd64-root.tar.gz;
  xz: (stdin): File format not recognized;
  tar: Child returned status 1;
  tar: Error is not recoverable: exiting now;

The cause seems fairly obvious: running xz against a .tar.gz file is
never going to work.
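For illustration, GNU tar can auto-detect the compression when
extracting from a file, so dropping the hard-coded -J (xz) flag would
let the template handle both formats.  A sketch only, not the actual
template code:

  # plain -xpf lets GNU tar sniff gzip vs xz itself; -J forces xz
  tar --numeric-owner -xpf \
    /var/cache/lxc/cloud-trusty/ubuntu-14.04-server-cloudimg-amd64-root.tar.gz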

The change appears to come from
https://github.com/lxc/lxc/commit/27c278a76931bfc4660caa85d1942ca91c86e0bf;
it assumes everything passed into it will be a .tar.xz file.

This appears to be a conflict between the template expecting a .tar.xz
file and juju providing a .tar.gz file.  You can see what juju is
providing from:

  $ ubuntu-cloudimg-query trusty released amd64 --format %{url}
  https://cloud-images.ubuntu.com/server/releases/trusty/release-20151105/ubuntu-14.04-server-cloudimg-amd64.tar.gz

From the juju-deployed host:
$ apt-cache policy lxc-templates
lxc-templates:
  Installed: 1.0.8-0ubuntu0.1
  Candidate: 1.0.8-0ubuntu0.1
  Version table:
 *** 1.0.8-0ubuntu0.1 0
        500 http://archive.ubuntu.com/ubuntu/ trusty-proposed/main amd64 Packages
        100 /var/lib/dpkg/status

From the host running juju:
$ apt-cache policy juju-core
juju-core:
  Installed: 1.22.8-0ubuntu1~14.04.1
  Candidate: 1.25.0-0ubuntu1~14.04.1~juju1
  Version table:
     1.25.0-0ubuntu1~14.04.1~juju1 0
        500 http://ppa.launchpad.net/juju/proposed/ubuntu/ trusty/main amd64 Packages
 *** 1.22.8-0ubuntu1~14.04.1 0
        400 http://archive.ubuntu.com/ubuntu/ trusty-proposed/universe amd64 Packages
        100 /var/lib/dpkg/status

All machines involved are running trusty:

$ lsb_release -rd
Description:    Ubuntu 14.04.3 LTS
Release:        14.04

Please let me know if you need any more information.

** Affects: lxc (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack



[Bug 1507471] [NEW] nagios3 crashes with livestatus and downtimed checks

2015-10-19 Thread Brad Marshall
Public bug reported:

Issue
-----
When nagios3 is configured with livestatus (from check-mk-livestatus) as
a broker module and a downtime is applied to any check, nagios crashes
when the logs rotate.  This shows up in /var/log/nagios3/nagios.log as:

   [1445238000] Caught SIGSEGV, shutting down...

Steps to reproduce
------------------
* Install nagios3 and check-mk-livestatus

* Edit /etc/nagios3/nagios.cfg to enable livestatus:

broker_module=/usr/lib/check_mk/livestatus.o /var/lib/nagios3/livestatus/socket

* To speed up testing, edit /etc/nagios3/nagios.cfg to set:

log_rotation_method=h

This will cause log rotation to occur hourly, rather than weekly or
daily.

* Restart nagios to apply these changes.

* Apply a downtime on any host or service to last until the top of the
next hour (a scripted example follows after these steps)

* Wait until that time, and see that nagios crashes with the SIGSEGV
error.
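For reference, the downtime can be scheduled non-interactively through
nagios's external command file (a sketch; 'localhost' is a stand-in
host name, the command-file path is the stock Ubuntu nagios3 one, and
check_external_commands=1 must be set in nagios.cfg):

  now=$(date +%s)
  end=$(( (now / 3600 + 1) * 3600 ))  # top of the next hour
  # [time] SCHEDULE_HOST_DOWNTIME;host;start;end;fixed;trigger;duration;author;comment
  printf '[%d] SCHEDULE_HOST_DOWNTIME;localhost;%d;%d;1;0;%d;bradm;test\n' \
    "$now" "$now" "$end" "$((end - now))" > /var/lib/nagios3/rw/nagios.cmd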

Solution
--------
After some searching around I found a patch at
http://lists.mathias-kettner.de/pipermail/checkmk-en/2013-December/011087.html
which I have applied to the nagios source and put up in a PPA at
ppa:brad-marshall/nagios for testing.  Repeating the steps above with
the upgraded package shows that nagios no longer crashes.

The testing was done with the latest patched Trusty, specifically:

$ lsb_release -d
Description:    Ubuntu 14.04.3 LTS

$ dpkg-query -W nagios3
nagios3 3.5.1-1ubuntu1

$ dpkg-query -W check-mk-livestatus
check-mk-livestatus 1.2.2p3-1

Please let us know if you need any further information, or any testing
done.

** Affects: nagios3 (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack



[Bug 1356392] Re: lacks sw raid1 install support

2015-05-13 Thread Brad Marshall
** Tags added: canonical-bootstack



[Bug 1432493] [NEW] ntp dhcp hook doesn't check if ntp.conf has been updated

2015-03-15 Thread Brad Marshall
Public bug reported:

/etc/dhcp/dhclient-exit-hooks.d/ntp doesn't check whether /etc/ntp.conf
has been updated since the last time dhclient ran.  A simple check that
/etc/ntp.conf is newer than /var/lib/ntp/ntp.conf.dhcp, letting the hook
re-add the DHCP-supplied servers when it is, would be sufficient; a
sketch follows below.
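A minimal sketch of that guard, assuming the hook's existing merge
logic is wrapped in a function (add_dhcp_servers is a made-up name for
illustration, not the hook's real code):

  NTP_CONF=/etc/ntp.conf
  NTP_DHCP=/var/lib/ntp/ntp.conf.dhcp
  # -nt is true when the first file is newer than the second (or the
  # second doesn't exist yet), i.e. ntp.conf changed since the last merge
  if [ "$NTP_CONF" -nt "$NTP_DHCP" ]; then
      add_dhcp_servers  # hypothetical: rebuild ntp.conf.dhcp with the servers
  fi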

This is occurring on a Trusty host with ntp version
1:4.2.6.p5+dfsg-3ubuntu2.14.04.2.

Please let me know if you need any further information.

$ dpkg-query -W ntp
ntp 1:4.2.6.p5+dfsg-3ubuntu2.14.04.2
$ lsb_release -d
Description:    Ubuntu 14.04.2 LTS

** Affects: ntp (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack

** Tags added: canonical-bootstack



[Bug 1421075] [NEW] Typo with dynamically removing port forward in ssh

2015-02-11 Thread Brad Marshall
Public bug reported:

When using ssh and managing ssh port forwards with ~C to remove a
forward that doesn't exist, the following occurs:

  user@host:~$ 
  ssh> -KD12345
  Unkown port forwarding.

i.e., the misspelling of the word Unknown as 'Unkown'.

This occurs at least on a server running on trusty:

$ dpkg --list | grep openssh
ii  openssh-client       1:6.6p1-2ubuntu2  amd64  secure shell (SSH) client, for secure access to remote machines
ii  openssh-server       1:6.6p1-2ubuntu2  amd64  secure shell (SSH) server, for secure access from remote machines
ii  openssh-sftp-server  1:6.6p1-2ubuntu2  amd64  secure shell (SSH) sftp server module, for SFTP access from remote machines

Please let me know if you need any more information.

** Affects: openssh (Ubuntu)
 Importance: Undecided
 Status: New



[Bug 1379629] [NEW] cinder charm doesn't have nrpe-external-master interface

2014-10-09 Thread Brad Marshall
Public bug reported:

The cinder charm (and pretty much every other openstack charm) doesn't
provide an nrpe-external-master interface, which makes monitoring it
awkward.  To fix this, it simply needs the following added to the
provides section of metadata.yaml:

  nrpe-external-master:
    interface: nrpe-external-master
    scope: container

I've got a branch at
lp:~brad-marshall/charms/trusty/cinder/add-n-e-m-interface with the
change in it.
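Once a charm carries that stanza, wiring up monitoring is a single
relation (a sketch; the subordinate charm name below is an assumption,
substitute whichever nrpe subordinate you actually deploy):

  $ juju deploy nrpe-external-master  # hypothetical charm name
  $ juju add-relation cinder nrpe-external-master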

** Affects: swift (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: ceilometer (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: ceph (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: ceph-osd (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: cinder (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: glance (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: heat (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: keystone (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: mongodb (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: neutron-api (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: nova-cloud-controller (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: openstack-dashboard (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: percona-cluster (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: quantum-gateway (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: swift-storage (Juju Charms Collection)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack

** Also affects: ceilometer (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: ceph (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: ceph-osd (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: glance (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: heat (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: keystone (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: percona-cluster (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: neutron-api (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: nova-cloud-controller (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: openstack-dashboard (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: quantum-gateway (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: swift (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: swift-storage (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: mongodb (Juju Charms Collection)
   Importance: Undecided
   Status: New



[Bug 1215282] [NEW] Possible puppet performance regression with 2.7.11-1ubuntu2.4

2013-08-21 Thread Brad Marshall
Public bug reported:

We appear to have a performance regression with puppet 2.7.11-1ubuntu2.4,
which we recently upgraded to, particularly on our more heavily loaded
puppet master.  When we're running the 2.4 packages, many of our puppet
clients get the following:

err: Could not retrieve catalog from remote server: execution expired

and the load on the puppet master is higher than with the previous
2.7.11-1ubuntu2.3.  When we revert back to 2.3, the load is much lower
(around 20 rather than around 40), and most of the puppet clients can
retrieve the catalog without a problem.
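For anyone wanting to compare the two, a downgrade along these lines
should reproduce the revert (a sketch; the prior version string is
inferred from the '2.3' shorthand above, so confirm it with apt-cache
policy first):

  $ apt-cache policy puppet
  $ sudo apt-get install puppet=2.7.11-1ubuntu2.3 \
        puppet-common=2.7.11-1ubuntu2.3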

Is there any further information you need, or any debugging we can
assist with to find out what the issue is here?

OS Version: Ubuntu 12.04.3 LTS

$ dpkg-query -W puppet
puppet  2.7.11-1ubuntu2.4

** Affects: puppet (Ubuntu)
 Importance: Undecided
 Status: New



[Bug 1028268] [NEW] Bareword dns domain makes facter return incorrect info

2012-07-23 Thread Brad Marshall
Public bug reported:

If you are using a bareword dns domain (.test, for example), facter fqdn
returns incorrect information.  Since there's no '.' in the domain, the
checks fall back to parsing /etc/resolv.conf, which may not be correct.

$ hostname
eagle
$ dnsdomainname 
test
$ facter fqdn
eagle.example.com

$ cat /etc/resolv.conf 
search example.com
nameserver 192.168.1.1

If I edit resolv.conf to include 'domain test', or put test as the first
entry in search, facter returns the right value:

$ cat /etc/resolv.conf 
domain test
search example.com
nameserver 192.168.1.1

$ facter fqdn
eagle.test

$ cat /etc/resolv.conf 
search test example.com
nameserver 192.168.1.1

$ facter fqdn
eagle.test

The version of facter I tested this on is:

$ dpkg-query -W facter
facter  1.6.5-1ubuntu1

And this is running on precise:

$ lsb_release -rd
Description:    Ubuntu 12.04 LTS
Release:        12.04

Please let us know if you need any more information.

** Affects: facter (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonistack



[Bug 973953] Re: euca-describe-instance returns error VolumeNotFound

2012-04-09 Thread Brad Marshall
This appears to have been fixed after upgrading to Essex, so we can
close off this bug.



[Bug 973953] [NEW] euca-describe-instance returns error VolumeNotFound

2012-04-04 Thread Brad Marshall
Public bug reported:

euca-describe-instances has been working fine up until recently.  We now
see:

$ euca-describe-instances 
VolumeNotFound: Volume vol-0019 could not be found.

The logs on the nova-api server are as follows:

2012-04-05 03:14:05 DEBUG nova.auth.manager [-] Looking up user: u'x-y-zzz' from (pid=8961) authenticate /usr/lib/python2.7/dist-packages/nova/auth/manager.py:298
2012-04-05 03:14:05 DEBUG nova.auth.manager [-] user: User('bradm', 'bradm') from (pid=8961) authenticate /usr/lib/python2.7/dist-packages/nova/auth/manager.py:300
2012-04-05 03:14:05 DEBUG nova.auth.signer [-] using _calc_signature_2 from (pid=8961) _calc_signature_2 /usr/lib/python2.7/dist-packages/nova/auth/signer.py:139
2012-04-05 03:14:05 DEBUG nova.auth.signer [-] query string: AWSAccessKeyId=--y-%3Abradm_project&Action=DescribeInstances&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2012-04-05T03%3A14%3A04Z&Version=2010-08-31 from (pid=8961) _calc_signature_2 /usr/lib/python2.7/dist-packages/nova/auth/signer.py:163
2012-04-05 03:14:05 DEBUG nova.auth.signer [-] string_to_sign: POST
91.189.93.65:8773
/services/Cloud/
AWSAccessKeyId=www-xxx-yy-%3Abradm_project&Action=DescribeInstances&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2012-04-05T03%3A14%3A04Z&Version=2010-08-31 from (pid=8961) _calc_signature_2 /usr/lib/python2.7/dist-packages/nova/auth/signer.py:165
2012-04-05 03:14:05 DEBUG nova.auth.signer [-] len(b64)=44 from (pid=8961) _calc_signature_2 /usr/lib/python2.7/dist-packages/nova/auth/signer.py:168
2012-04-05 03:14:05 DEBUG nova.auth.signer [-] base64 encoded digest: abcdefghijklmnopoqrstuvwxyz= from (pid=8961) _calc_signature_2 /usr/lib/python2.7/dist-packages/nova/auth/signer.py:169
2012-04-05 03:14:05 DEBUG nova.auth.manager [-] user.secret: ww-xxx-yyy-z from (pid=8961) authenticate /usr/lib/python2.7/dist-packages/nova/auth/manager.py:343
2012-04-05 03:14:05 DEBUG nova.auth.manager [-] expected_signature: XQG93vpuCzepE5ZFHtRtMt5ljb06UB8VZs3XNOcABgU= from (pid=8961) authenticate /usr/lib/python2.7/dist-packages/nova/auth/manager.py:344
2012-04-05 03:14:05 DEBUG nova.auth.manager [-] signature: XQG93vpuCzepE5ZFHtRtMt5ljb06UB8VZs3XNOcABgU= from (pid=8961) authenticate /usr/lib/python2.7/dist-packages/nova/auth/manager.py:345
2012-04-05 03:14:05 AUDIT nova.api.ec2 [req-7e1a28db-b244-4788-9f09-2e1ae4ee8068 bradm bradm_project] Authenticated Request For bradm:bradm_project)
2012-04-05 03:14:05 DEBUG nova.api.ec2 [req-7e1a28db-b244-4788-9f09-2e1ae4ee8068 bradm bradm_project] action: DescribeInstances from (pid=8961) __call__ /usr/lib/python2.7/dist-packages/nova/api/ec2/__init__.py:435
2012-04-05 03:14:05 DEBUG nova.compute.api [req-7e1a28db-b244-4788-9f09-2e1ae4ee8068 bradm bradm_project] Searching by: {'deleted': False} from (pid=8961) get_all /usr/lib/python2.7/dist-packages/nova/compute/api.py:1010
2012-04-05 03:14:05 INFO nova.api.ec2 [req-7e1a28db-b244-4788-9f09-2e1ae4ee8068 bradm bradm_project] VolumeNotFound raised: Volume 25 could not be found.
2012-04-05 03:14:05 ERROR nova.api.ec2 [req-7e1a28db-b244-4788-9f09-2e1ae4ee8068 bradm bradm_project] VolumeNotFound: Volume vol-0019 could not be found.
2012-04-05 03:14:05 INFO nova.api.ec2 [req-7e1a28db-b244-4788-9f09-2e1ae4ee8068 bradm bradm_project] 0.504622s 118.208.40.182 POST /services/Cloud/ CloudController:DescribeInstances 400 [Boto/2.2.2 (linux2)] application/x-www-form-urlencoded text/xml

Client tools version:
$ dpkg-query -W euca2ools
euca2ools   2.0.0~bzr516-0ubuntu3

Nova version:
$ dpkg-query -W nova-common
nova-common 2012.1~rc1~20120309.13261-0ubuntu1

Please let us know if there's anything else needed to diagnose what's
going on.

** Affects: nova (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonistack



[Bug 540747] Re: Apache Web DAV incorrect permissions

2011-10-03 Thread Brad Marshall
I can confirm this is still happening on lucid (10.04.3) with the
following apache versions:

$ dpkg --list | grep apache
ii  apache2                2.2.14-5ubuntu8.6  Apache HTTP Server metapackage
ii  apache2-mpm-worker     2.2.14-5ubuntu8.6  Apache HTTP Server - high speed threaded mod
ii  apache2-utils          2.2.14-5ubuntu8.6  utility programs for webservers
ii  apache2.2-bin          2.2.14-5ubuntu8.6  Apache HTTP Server common binary files
ii  apache2.2-common       2.2.14-5ubuntu8.6  Apache HTTP Server common files
ii  libapache2-mod-python  3.3.1-8ubuntu2     Python-embedding module for Apache 2
ii  python-apache-openid   2.0.1-0ubuntu1     OpenID consumer module for Apache

Do you require any more information to debug this issue?  Or to get the
fix into lucid?

Brad.
