[Bug 1026621] Re: nova-network gets release_fixed_ip events from someplace, but the database still keeps them associated with instances

2013-09-25 Thread Michael Still
The bug for the Windows timezone issue is bug 1231254.


[Bug 1026621] Re: nova-network gets release_fixed_ip events from someplace, but the database still keeps them associated with instances

2013-09-25 Thread Michael Still
Hi! There seem to be two issues here in the one bug -- DHCP refreshes
are causing problems, and there's pain with Windows instances because
of how they handle timezones, which causes extra DHCP refreshes. I'll
talk about each of those separately.

Windows timezones
=================

I'm sorry to hear you're having pain with Windows instances.
Unfortunately I don't know a lot about Windows, but it sounds as though
if we could change the BIOS timezone for Windows instances to the
timezone of the compute node, that would alleviate your issue. Is that
correct?

Looking at the code, each instance has an os_type field, which is
inherited from the image the instance is based on (there is an image
property called os_type). Xen expects this value to be "windows" for
Windows instances, so libvirt could do the same thing here and treat
the timezone for instances with os_type == "windows" differently.
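
As a rough illustration of what that branch could look like (a sketch
only -- the helper name and return values are mine, not the actual
driver code):

def guest_clock_offset(instance):
    """Pick a libvirt <clock offset="..."/> value for this guest."""
    if instance.get('os_type') == 'windows':
        # Windows expects the hardware clock to be in local time, so
        # matching the compute node's timezone stops the guest
        # "correcting" its clock after boot.
        return 'localtime'
    # Unix-like guests are happy with a UTC hardware clock.
    return 'utc'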

I'm going to create a new bug to track this windows issue, so that I
can leave this bug focussed on the DHCP timeout issue. This isn't
because Windows is less important as a guest, but because I don't want
it to be missed in the DHCP discussion that's happening here.

DHCP refreshes
==============

It also sounds as if we could bear to increase the default DHCP
interval, although you can configure that yourself as well. It might
also make sense to ping an IP before we delete its lease.
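
A minimal sketch of that check (assuming the standard iputils ping
binary on the network host; the function name is made up):

import os
import subprocess

def ip_appears_in_use(address, timeout=1):
    """Return True if the address answers a single ICMP echo."""
    with open(os.devnull, 'w') as devnull:
        return subprocess.call(
            ['ping', '-c', '1', '-W', str(timeout), address],
            stdout=devnull, stderr=devnull) == 0

# A lease would then only be reaped when ip_appears_in_use() is False.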

Cheers,
Michael

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
   Status: Confirmed => Triaged


[Bug 1026621] Re: nova-network gets release_fixed_ip events from someplace, but the database still keeps them associated with instances

2013-09-25 Thread Michael Still
** Tags added: libvirt


[Bug 832507] Re: console.log grows indefinitely

2013-09-20 Thread Michael Still
** Changed in: nova
 Assignee: Michael Still (mikalstill) => (unassigned)

** Changed in: nova
   Status: In Progress => Triaged


[Bug 832507] Re: console.log grows indefinitely

2013-09-20 Thread Michael Still
** Changed in: nova
   Status: In Progress => Triaged

** Changed in: nova
 Assignee: Michael Still (mikalstill) => (unassigned)


[Bug 1190086] Re: empty console log output with grizzly on centOS distribution

2013-08-04 Thread Michael Still
Does the console log file exist at all?

** Summary changed:

- empty console log output with grizzley on centOS distribution
+ empty console log output with grizzly on centOS distribution

** Changed in: nova
   Status: New => Incomplete


[Bug 832507] Re: console.log grows indefinitely

2013-07-29 Thread Michael Still
** Changed in: nova
 Assignee: (unassigned) => Michael Still (mikalstill)


[Bug 1182624] Re: Uncached instance builds fail with non-zero root disk sizes

2013-05-21 Thread Michael Still
** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Critical


[Bug 1155458] Re: 500 response when trying to create a server from a deleted image

2013-04-07 Thread Michael Still
@Matthew -- are you still working on this one?


[Bug 1158563] Re: After grizzly upgrade, EC2 API requests fail:Could not find: credential

2013-04-04 Thread Michael Still
@Adam -- can we therefore remove the upstream tasks from this bug?


[Bug 1158563] Re: After grizzly upgrade, EC2 API requests fail:Could not find: credential

2013-04-02 Thread Michael Still
** Tags added: ec2


[Bug 1155458] Re: 500 response when trying to create a server from a deleted image

2013-03-15 Thread Michael Still
** Changed in: nova
 Assignee: (unassigned) => Matthew Sherborne (msherborne+openstack)

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Low


[Bug 833519] Re: lxc in nova will happily attempt to run x86_64 container on i686 arch

2013-03-11 Thread Michael Still
Closing due to lack of activity.

** Changed in: nova
   Status: Confirmed => Won't Fix

** Changed in: nova
   Importance: Low => Undecided


[Bug 1028718] Re: nova volumes are inappropriately clingy for ceph and similar drivers

2013-03-11 Thread Michael Still
nova-volumes is gone now, so this is just a cinder bug.

** No longer affects: nova


[Bug 861504] Re: nova-compute-lxc limited by available nbd devices to 16 instances

2012-12-11 Thread Michael Still
Yes, if there are more device files than that, they will now be used
as well.
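
For context, the discovery this implies looks roughly like the sketch
below (the function is illustrative, not nova's actual code):

import glob
import re

def available_nbd_devices():
    """Return every whole-disk /dev/nbdN node, sorted numerically."""
    nodes = [d for d in glob.glob('/dev/nbd*')
             if re.match(r'^/dev/nbd\d+$', d)]
    return sorted(nodes, key=lambda d: int(d[len('/dev/nbd'):]))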


[Bug 832507] Re: console.log grows indefinitely

2012-11-20 Thread Michael Still
It certainly seems like we should only send the last N lines of the
console to the user (although that might be computationally expensive
to generate on such a large file). That's a separate bug though, I
suspect. I've filed bug 1081436 for that.
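
Tailing without reading the whole file is cheap if you seek backwards
from the end; a sketch (the helper name is mine, not nova's):

import os

def last_lines(path, lines=100, chunk=4096):
    """Read the final `lines` lines of a large file in chunks."""
    with open(path, 'rb') as f:
        f.seek(0, os.SEEK_END)
        remaining, data = f.tell(), b''
        while remaining > 0 and data.count(b'\n') <= lines:
            step = min(chunk, remaining)
            remaining -= step
            f.seek(remaining)
            data = f.read(step) + data
        return b'\n'.join(data.splitlines()[-lines:])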


[Bug 1000710] Re: Attaching volume during instance boot doesn't work

2012-11-15 Thread Michael Still
What release was this canonistack region running at the time the problem
was seen?

** Changed in: nova
   Status: New => Incomplete


[Bug 1062314] Re: do_refresh_security_group_rules in nova.virt.firewall is very slow

2012-10-16 Thread Michael Still
** Changed in: nova
   Importance: Undecided => High

** Changed in: nova/essex
   Importance: Undecided => High

** Changed in: nova/essex
   Importance: High => Medium

** Changed in: nova/folsom
   Importance: Undecided => High


[Bug 1062314] Re: do_refresh_security_group_rules in nova.virt.firewall is very slow

2012-10-12 Thread Michael Still
Upstream has chosen not to backport this fix to essex. Can we please
consider carrying this patch ourselves?


[Bug 1062314] Re: do_refresh_security_group_rules in nova.virt.firewall is very slow

2012-10-10 Thread Michael Still
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Michael Still (mikalstill)


[Bug 1062314] Re: do_refresh_security_group_rules in nova.virt.firewall is very slow

2012-10-05 Thread Michael Still
I think the issue here is that IptablesFirewallDriver.instance_rules()
in nova/virt/firewall.py is calling get_instance_nw_info(), which
causes RPCs to be fired off _while_still_holding_the_iptables_lock_.
I suspect that the RPCs need to happen outside the lock.
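
The restructure being suggested has roughly the following shape (a
sketch only -- the callables here are placeholders for the real nova
code, not its API):

def refresh_rules(instances, get_nw_info, apply_filters, iptables_lock):
    # Phase 1: do the synchronous RPC work *outside* the lock, so
    # other threads (e.g. instance launches) are not blocked on it.
    nw_infos = dict((inst['id'], get_nw_info(inst)) for inst in instances)

    # Phase 2: hold the lock only while mutating iptables state.
    with iptables_lock:
        for inst in instances:
            apply_filters(inst, nw_infos[inst['id']])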

From yet more instrumented code:

A synchronous RPC call is being made while a lock is held. This is probably a 
bug. Please report it. Include lines following this that start with ** please.
** multicall
** call
** call
** call
** get_instance_nw_info
** instance_rules
** add_filters_for_instance
** do_refresh_security_group_rules
** inner_while_holding_lock
** refresh_security_group_members
** refresh_security_group_members
** refresh_security_group_members
** wrapped
** _process_data
** wrapped
** _spawn_n_impl
** end of stack trace
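
For the record, instrumentation that produces traces like the above
can be as simple as this sketch (a thread-local flag set by the lock
wrapper and checked by the RPC layer; all names are illustrative):

import threading
import traceback

_state = threading.local()

def note_lock_held(held):
    _state.lock_held = held

def warn_if_rpc_under_lock():
    if getattr(_state, 'lock_held', False):
        print('A synchronous RPC call is being made while a lock is '
              'held. This is probably a bug. Please report it.')
        for frame in traceback.extract_stack():
            # frame[2] is the function name at that stack level.
            print('** %s' % frame[2])
        print('** end of stack trace')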


[Bug 1062314] [NEW] do_refresh_security_group_rules in nova.virt.firewall is very slow

2012-10-05 Thread Michael Still
Public bug reported:

This is a bug against stable essex. I have made no attempt to determine
if this is still a problem in Folsom at this stage.

During a sprint this week we took a nova region which was previously
relatively idle and started turning up large numbers of instances using
juju. We started to experience very slow instance starts, which I dug
into. I should note that juju seems to trigger this behaviour by
refreshing security groups when ports are exposed, but other openstack
users will probably experience problems if they are trying to do non-
trivial things with security groups.

It appears that do_refresh_security_group_rules can sometimes take a
very long time to run, and it holds the "iptables" lock while doing
this. This is a problem because launching a new instance needs to take
the iptables lock, and can end up being blocked. An example slow
instance start (nova logs edited for clarity):

(logs from scheduler node)
2012-10-05 08:06:28 run_instance
2012-10-05 08:06:29 cast to <>

(logs from compute node)
2012-10-05 08:07:21 Starting instance...
2012-10-05 08:07:34 Starting toXML method
2012-10-05 08:07:43 Finished toXML method
2012-10-05 08:07:43 Called setup_basic_filtering in nwfilter
2012-10-05 08:07:43 Ensuring static filters

2012-10-05 08:08:48 Attempting to grab semaphore "iptables" for method "_do_refresh_provider_fw_rules"...
2012-10-05 08:24:00 Got semaphore "iptables" for method "_do_refresh_provider_fw_rules"...

2012-10-05 08:24:01 Creating image
2012-10-05 08:24:06 Instance is running
2012-10-05 08:25:28 Checking state
2012-10-05 08:25:30 Instance spawned successfully.

I instrumented utils.synchronized to include lock wait and hold times
like this (patch against essex):

diff --git a/nova/utils.py b/nova/utils.py
index 6535b06..2e01a15 100644
--- a/nova/utils.py
+++ b/nova/utils.py
@@ -926,10 +926,16 @@ def synchronized(name, external=False):
             LOG.debug(_('Attempting to grab semaphore "%(lock)s" for method '
                         '"%(method)s"...') % {'lock': name,
                                               'method': f.__name__})
+            started_waiting = time.time()
+
             with sem:
                 LOG.debug(_('Got semaphore "%(lock)s" for method '
-                            '"%(method)s"...') % {'lock': name,
-                                                  'method': f.__name__})
+                            '"%(method)s" after %(wait)f second wait...'),
+                          {'lock': name,
+                           'method': f.__name__,
+                           'wait': time.time() - started_waiting})
+                started_working = time.time()
+
                 if external and not FLAGS.disable_process_locking:
                     LOG.debug(_('Attempting to grab file lock "%(lock)s" for '
                                 'method "%(method)s"...') %
@@ -945,6 +951,12 @@
                 else:
                     retval = f(*args, **kwargs)
 
+            LOG.debug(_('Released semaphore "%(lock)s" for method '
+                        '"%(method)s" after %(wait)f seconds of use...'),
+                      {'lock': name,
+                       'method': f.__name__,
+                       'wait': time.time() - started_working})
+
             # If no-one else is waiting for it, delete it.
             # See note about possible raciness above.
             if not sem.balance < 1:

Taking a look at the five longest lock holds in my logs after this patch
is applied, I get:

# grep "Released semaphore" /var/log/nova/nova-compute.log | grep iptables | awk '{print$15, $13}' | sort -n | tail -5
192.134270 "do_refresh_security_group_rules"
194.140478 "do_refresh_security_group_rules"
194.153729 "do_refresh_security_group_rules"
201.135854 "do_refresh_security_group_rules"
297.725837 "do_refresh_security_group_rules"

So I then instrumented do_refresh_security_group_rules to try and see
what was slow. I used this patch (which I know is horrible):

diff --git a/nova/virt/firewall.py b/nova/virt/firewall.py
index f0f1594..99f580a 100644
--- a/nova/virt/firewall.py
+++ b/nova/virt/firewall.py
@@ -17,6 +17,8 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+import time
+
 from nova import context
 from nova import db
 from nova import flags
@@ -167,16 +169,35 @@ class IptablesFirewallDriver(FirewallDriver):
         self.iptables.ipv6['filter'].add_rule(chain_name, rule)
 
     def add_filters_for_instance(self, instance):
+        start_time = time.time()
         network_info = self.network_infos[instance['id']]
+        LOG.debug(_('Get network info took %f seconds'),
+                  time.time() - start_time)
+
+        start_time = time.time()
         chain_name = self._instance_chain_name(instance)
+        LOG.debug(_('Get chain name took %f seconds'),
+                  time.time() - start_time)
+
+star

[Bug 1059899] Re: nova fails to configure dnsmasq, resulting in DNS timeouts in instances

2012-10-02 Thread Michael Still
** Tags added: ops


[Bug 1036919] [NEW] Region drop down showing incorrect region

2012-08-14 Thread Michael Still
Public bug reported:

Hi. We have two regions configured in /etc/openstack-
dashboard/local_settings.py.

A user changed regions with the drop down, logged into the new region,
and started an instance. The instance started in the _previous_ region.

I'm not sure what debugging information to provide here, as I didn't see
anything obvious in the logs.

** Affects: horizon (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonistack


[Bug 1036918] Re: Switching between regions causes login form to appear at the bottom of the page

2012-08-14 Thread Michael Still
** Attachment added: "login-bug.png"
   
https://bugs.launchpad.net/bugs/1036918/+attachment/3261487/+files/login-bug.png


[Bug 1036918] [NEW] Switching between regions causes login form to appear at the bottom of the page

2012-08-14 Thread Michael Still
Public bug reported:

I have two regions configured in /etc/openstack-
dashboard/local_settings.py. If I switch between them in the drop down
at the top right of the screen, a login dialog appears at the bottom of
the page, which is quite confusing. Some thoughts:

 - credentials from the previous region should be retried in the new
region. They might be the same.
 - the login form should be a "popup", as when the page is long you
don't notice it appearing at the bottom of the screen.

I'll attach a screenshot to this bug report.

** Affects: horizon (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonistack


[Bug 1020313] Re: openstack-dashboard hijacks the web root

2012-08-14 Thread Michael Still
Another option would be to create a vhost for the dashboard.

** Tags added: canonistack


[Bug 1018244] Re: When keystone is enabled, the ec2 API returns uuids instead of tenant names

2012-08-10 Thread Michael Still
Looking at the ec2 api code, this is pretty consistent for all these
calls -- you'll get the uuid (with keystone) or the project id (without
keystone) in all cases. This is consistent with the ec2 api
specification, which says this field should be:

"ownerId The ID of the AWS account that owns the reservation."

The examples have large numbers as values, so this isn't meant to be a
human-readable value. That's unfortunate, given that euca2ools doesn't
try to do the lookup to turn it into one.
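
For what it's worth, operators who want names can do the lookup
themselves; a sketch assuming python-keystoneclient's v2.0 API and
admin credentials (the credential values are placeholders):

from keystoneclient.v2_0 import client

keystone = client.Client(username='admin', password='secret',
                         tenant_name='admin',
                         auth_url='http://127.0.0.1:35357/v2.0')

def tenant_name(tenant_id):
    """Map a tenant UUID from the ec2 output to its display name."""
    return keystone.tenants.get(tenant_id).name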

I'm going to close this bug as invalid though, as it is what the spec
intends for this call.

** Changed in: nova
   Status: Confirmed => Invalid

** Changed in: nova (Ubuntu)
   Status: Confirmed => Invalid


[Bug 904532] Re: Provide a script to automate cleanup of _base

2012-08-10 Thread Michael Still
Bolke -- that's not currently the case. If you want this functionality
you should file a separate bug for it. However, with a shared instances
directory you're best off disabling the cache manager entirely at the
moment.


[Bug 1019913] Re: Lazy load of attribute fails for instance_type.rxtx_factor

2012-07-02 Thread Michael Still
We have now observed this error on two testing clusters, so I don't
think this is because we're running precise-proposed in one any more.


[Bug 1019913] [NEW] Lazy load of attribute fails for instance_type.rxtx_factor

2012-07-01 Thread Michael Still
Public bug reported:

Running proposed on one of our clusters, I see the following with
instances started via juju. I have been unable to re-create the problem
with raw ec2 commands.

[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0] Ensuring static filters
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0] Instance failed to spawn
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0] Traceback (most recent call last):
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 598, in _spawn
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     self._legacy_nw_info(network_info), block_device_info)
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     return f(*args, **kw)
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 921, in spawn
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     self.firewall_driver.prepare_instance_filter(instance, network_info)
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/nova/virt/firewall.py", line 136, in prepare_instance_filter
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     self.add_filters_for_instance(instance)
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/nova/virt/firewall.py", line 178, in add_filters_for_instance
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     ipv4_rules, ipv6_rules = self.instance_rules(instance, network_info)
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/nova/virt/firewall.py", line 335, in instance_rules
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     instance)
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 213, in get_instance_nw_info
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     'rxtx_factor': instance['instance_type']['rxtx_factor'],
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/models.py", line 75, in __getitem__
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     return getattr(self, key)
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 168, in __get__
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     return self.impl.get(instance_state(instance),dict_)
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 453, in get
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     value = self.callable_(state, passive)
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/strategies.py", line 485, in _load_for_state
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     (mapperutil.state_str(state), key)
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0] DetachedInstanceError: Parent instance  is not bound to a Session; lazy load operation of attribute 'instance_type' cannot proceed
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]
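
The error is SQLAlchemy refusing to lazily load instance_type on an
object that has left its session. The usual fix is to load the
relationship eagerly while the session is still open; a sketch (not
the actual nova change):

from sqlalchemy.orm import joinedload

from nova.db.sqlalchemy.models import Instance

def instance_get(session, instance_uuid):
    # joinedload pulls instance_type in the same query, so the
    # attribute is already populated when the object is detached.
    return session.query(Instance).\
        options(joinedload('instance_type')).\
        filter_by(uuid=instance_uuid).\
        first()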

** Affects: nova (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonistack


[Bug 1018244] [NEW] When keystone is enabled, the ec2 API returns uuids instead of tenant names

2012-06-26 Thread Michael Still
Public bug reported:

Before we turned on keystone, euca-describe-instances used to include
the names of users' projects in its output. It now lists the uuid of
the tenant instead, which isn't super helpful when trying to work out
who owns what. Can we please translate this back to a human-readable
name?

An example:

RESERVATION  r-x2tdg0ga  c519923c921a404c96ebc8210a4ec67a  juju-canonistack2, juju-canonistack2-2
INSTANCE  i-0083  ami-00bf  server-131  server-131  running  None (BANANAc921a404c96ebc8210a4ec67a, alce)  0  m1.small  2012-06-27T04:12:42.000Z  nova

BANANAc921a404c96ebc8210a4ec67a is the UUID of a tenant.

** Affects: keystone (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonistack


[Bug 915971] [NEW] New command "guestmount"

2012-01-13 Thread Michael Still
Public bug reported:

The latest nova-compute adds a new command which needs sudo privs:

2012-01-13 22:24:27,385 DEBUG nova.utils [-] Running cmd (subprocess): sudo guestmount --rw -a /var/lib/nova/instances/instance-000c/disk -m /dev/sda1 /tmp/tmpXrLkev from (pid=2743) execute /data/backups-meta/linux/x220/data/src/openstack/nova/nova/utils.py:201

This command is not set up in /etc/sudoers.d/nova_sudoers, and therefore
my test instance prompts for the password. Interestingly I can't find
the command guestmount packaged anywhere, so I'm not sure what it's
meant to be doing.

The code to call guestmount seems to have been around for a while, so
I'm not sure why this is only just coming up now. Perhaps I only just
noticed it.

I see that guestmount is packaged for precise, but not for oneiric.

** Affects: nova (Ubuntu)
 Importance: Undecided
 Status: New


[Bug 915977] [NEW] Add policy.json to packages

2012-01-13 Thread Michael Still
Public bug reported:

Nova now requires a policy.json file in /etc/nova/.

Update the packages to install this file which is in the source tree.

(This is an attempt to move https://bugs.launchpad.net/nova/+bug/915614
to the right place).

** Affects: nova (Ubuntu)
 Importance: Undecided
 Status: New


[Bug 904532] Re: Provide a script to automate cleanup of _base

2012-01-09 Thread Michael Still
I have just sent a patch for review which implements the _base cleanup
aspects of the blueprint. It's integrated into the nova compute
manager, as opposed to being a separate script.

https://review.openstack.org/#change,2902


[Bug 904532] [NEW] Provide a script to automate cleanup of _base

2011-12-14 Thread Michael Still
Public bug reported:

The nova base instance directory $instances_path/_base is never cleaned
up. This caused one of my compute nodes to run out of disk recently,
even though a bunch of the images there were no longer in use. There
appear to be homebrew cleanup scripts online, such as
https://github.com/Razique/BashStuff/blob/master/SCR_5008_V00_NUAC-OPENSTACK-Nova-compute-images-prunning.sh
Please provide a script to perform cleanup in the package.
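
The heart of such a script is small; a sketch under the assumption
that every in-use base image shows up as the qemu-img "backing file"
of some instance disk (paths and the age threshold are illustrative,
and this is Python 2 era code like the rest of this thread):

import glob
import os
import subprocess
import time

INSTANCES_PATH = '/var/lib/nova/instances'
MAX_AGE = 24 * 3600  # one day, purely illustrative

def backing_files():
    """Collect the base images referenced by current instance disks."""
    used = set()
    for disk in glob.glob(os.path.join(INSTANCES_PATH,
                                       'instance-*', 'disk*')):
        info = subprocess.check_output(['qemu-img', 'info', disk])
        for line in info.splitlines():
            if line.startswith('backing file:'):
                used.add(line.split(':', 1)[1].strip().split()[0])
    return used

def cleanup_base():
    """Remove _base files that are both old and unreferenced."""
    used = backing_files()
    for base in glob.glob(os.path.join(INSTANCES_PATH, '_base', '*')):
        old = time.time() - os.path.getmtime(base) > MAX_AGE
        if base not in used and old:
            os.remove(base)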

** Affects: nova (Ubuntu)
 Importance: Undecided
 Status: New
