Re: [Openstack] Local storage and Xen with Libxl

2013-04-19 Thread Jim Fehlig
Daniel P. Berrange wrote:
 On Fri, Apr 19, 2013 at 01:43:23PM +0300, Cristian Tomoiaga wrote:
   
 As for the compute part, I may need to work with libvirt, but I want to
 avoid that if possible. Libxl was meant for building toolstacks, right? Again,
 this may not be acceptable and I would like to know.
 

 Nova already has two drivers which support Xen, one using XenAPI and
 the other using libvirt. Libvirt itself will either use the legacy
 XenD/XenStore APIs, or on new enough Xen will use libxl.

 libxl is a pretty low-level interface, not really targeted for direct
 application usage, but rather for building management APIs like libvirt
 or XCP. IMHO it would not really be appropriate for OpenStack to directly
 use libxl. Given that Nova already has two virt drivers which can work
 with Xen, I also don't really think there's a need to add a 3rd using
 libxl.
   

Absolutely agreed, we do not want a libxl nova virt driver :).

FYI, I have not tried the libvirt libxl driver on Xen compute nodes -
all of my nodes are running the legacy xend toolstack and thus using the
legacy libvirt xen driver.  (I plan to switch these nodes to the new
toolstack in the Xen 4.3 timeframe.)  That said, the libxl driver should
work on a Xen compute node running the libxl stack.  I still haven't
finished the migration patch for the libvirt libxl driver, so migration
between libxl Xen compute nodes is not yet possible.
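For reference, the nova side is configured the same way with either toolstack;
a minimal sketch of the relevant nova.conf lines on such a node, using the
Essex/Folsom-era option names (libvirt itself then selects the legacy xend
driver or the libxl driver based on which toolstack is running):

# nova.conf on a Xen compute node
connection_type=libvirt
libvirt_type=xen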

Regards,
Jim




Re: [Openstack] Local storage and Xen with Libxl

2013-04-19 Thread Jim Fehlig
Cristian Tomoiaga wrote:
 Hi Jim,

 Thank you! I'll check libvirt in more detail to make sure nothing I
 need is missing.
 With xend it should work. I'm planning ahead and want to deploy on
 libxl, but for the sake of argument I will probably use both KVM
 (Daniel is to blame here :) ) and Xen with libxl while I test
 everything. It's a good thing to see interest in libvirt. For some
 reason I thought that libvirt would move more slowly on new features
 (granted, libxl has changed from 4.1 to 4.2). I'm also bugged by
 this: https://wiki.openstack.org/wiki/LibvirtAPI

Nothing to be alarmed about, IMO.  That page simply provides info about
some of the many ongoing improvements and enhancements to the nova libvirt
driver, which, btw, is the most widely used driver, including in all the
CI gating.

Regards,
Jim




Re: [Openstack] What is the most commonly used Hypervisor and toolset combination?

2012-08-24 Thread Jim Fehlig
Sorry for the delayed response.

Boris-Michel Deschenes wrote:
 That would be great Jim,

 I've built a cloud that uses CentOS+libvirt+Xen 4.1.3 to do GPU passthrough,
 and I just love being able to use libvirt with Xen. This setup makes a lot of
 sense to me since our main, bigger cloud is the standard libvirt+KVM; using
 libvirt across the board is great for us.

 I'm following your work closely, the GPU cloud is still using libvirt+xend 
 but when I move to Xen 4.2 my understanding is that I will need libvirt+xl 
 (xenlight) so I guess there's still some work to be done in libvirt there...
   

Yes, there is.  libxl changed significantly between Xen 4.1 and the
soon-to-be-released Xen 4.2, so much so that the current libvirt libxl driver
won't even build against Xen 4.2.  In addition, the libxl driver does
not have feature parity with the legacy xen driver.  So lots of work to
be done, but I have limited free cycles.  I'm hoping to get another body
or two at SUSE to help with this work.

That said, xm/xend will still be included in Xen 4.2 and can be
configured as the primary toolstack, allowing you to continue using
your existing setup with Xen 4.2.
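On most distros, selecting the legacy stack is a one-line setting.  A sketch
only, since the exact file and variable name vary by distro and packaging
(/etc/default/xen on some systems):

# choose the toolstack the init scripts start: xend (legacy) or xl/libxl
TOOLSTACK=xend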

Regards,
Jim

 The reason I want to move to Xen 4.2 is GPU passthrough of NVIDIA GPUs...
 currently, with Xen 4.1.3, I can successfully pass through ATI GPUs only.

 Boris

 -----Original Message-----
 From: openstack-bounces+boris-michel.deschenes=ubisoft@lists.launchpad.net
 [mailto:openstack-bounces+boris-michel.deschenes=ubisoft@lists.launchpad.net]
 On behalf of Jim Fehlig
 Sent: 18 July 2012 17:56
 To: John Garbutt
 Cc: openstack@lists.launchpad.net
 Subject: Re: [Openstack] What is the most commonly used Hypervisor and toolset
 combination?

 John Garbutt wrote:
   
 To my knowledge, if you want to use Xen, using XCP or XenServer (i.e. using 
 XenAPI driver) is the way to go. If you look at the contributions to the 
 drivers, you can have a good guess at who is using them.

 I know people are going into production on XenAPI, not heard about 
 Xen+libvirt in production. Having said this, I have seen some fixes to 
 Folsom around Xen + libvirt, I think from SUSE?
   
 

 Yes, I'm slowly working on improving support for xen.org Xen via the libvirt 
 driver and hope to have these improvements in for the Folsom release.

 Regards,
 Jim





Re: [Openstack] What is the most commonly used Hypervisor and toolset combination?

2012-08-24 Thread Jim Fehlig
Boris-Michel Deschenes wrote:
 John,

 Sorry for my late response.

 It would be great to collaborate. Like I said, I prefer to keep the libvirt
 layer, as it works great with openstack and many other techs (collectd,
 virt-manager, etc.); the virsh tool is also very useful for us.

 You say:
 ---
 We have GPU passthrough working with NVIDIA GPUs in Xen 4.1.2, if I recall 
 correctly.  We don't yet have a stable Xen + Libvirt installation working, 
 but we're looking at it.  Perhaps it would be worth collaborating since it 
 sounds like this could be a win for both of us.
 ---
 I have Jim Fehlig in CC since this could be of interest to him.

 We managed to get GPU passthrough of NVIDIA cards working using Xen 4.1.2, but
 ONLY with the xenapi (actually the whole XCP toolstack); with libvirt/Xen
 4.1.2, and even libvirt/Xen 4.1.3, I only manage to pass through Radeon GPUs.
 The reason could be:

 1. The inability to pass the gfx_passthru parameter through libvirt (IIRC
 this parameter passes the PCI device as the main VGA card and not as a second
 one).
 2. Bad FLR (function-level reset) support (or other low-level PCI
 functionality) in the NVIDIA boards
   

I've noticed this issue with some Broadcom multifunction NICs.  No FLR,
so the fallback is a secondary bus reset, which is problematic if another
function is being used by a different VM.

 3. something else entirely.

 Anyway, like I said, GPU passthrough of NVIDIA worked well with XCP
 using xenapi, but not with libvirt/Xen
   

Hmm, would be nice to get that fixed.  To date, I haven't tried GPU
passthrough with Xen so I'm not familiar with the issues.

 Now, as for the libvirt/Xen setup we have, I don't know if I would call it
 stable, but it does the job as a POC cloud and is actually used by real people
 with real GPU needs (for example, developing on OpenCL 1.2). The main thing is
 that it seamlessly integrates with openstack (because of libvirt), and with
 instance_type_extra_specs you can actually add a couple of these
 special nodes to an existing plain KVM cloud and they will receive the
 instances requesting GPUs without any problem.

 the setup:
 (this only refers to compute nodes, as controller nodes are unmodified)

 1. Install CentOS 6.2 and make your own project Zeus (transforming a CentOS
 box into a Xen host):
 http://www.howtoforge.com/virtualization-with-xen-on-centos-6.2-x86_64-paravirtualization-and-hardware-virtualization
 (first page only; skip the bridge setup, as openstack-nova-compute does
 this at startup).  You end up with a Xen hypervisor with libvirt; the libvirt
 patch is actually a single-line config change IIRC.  Pretty straightforward.

 2. Install openstack-nova from EPEL (so all this refers only to ESSEX, 
 openstack 2012.1)

 3. configure the compute node accordingly (libvirt_type=xen)

 That's the first part.  At this point, you can spawn a VM and attach a GPU
 manually with:

 virsh nodedev-dettach pci__02_00_01
 (edit the VM's nova libvirt.xml to add a pci node dev definition like this: 
 http://docs.fedoraproject.org/en-US/Fedora/13/html/Virtualization_Guide/chap-Virtualization-PCI_passthrough.html
  )
 virsh define libvirt.xml
 virsh start instance-000x
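(For reference, a minimal sketch of the hostdev definition that gets added to
libvirt.xml; the bus/slot/function values here are illustrative:)

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- PCI address of the GPU being passed through -->
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
  </source>
</hostdev>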

 Now, this is all manual and we wish to automate it in openstack, so here is
 what I've done; I can currently launch VMs in my cloud and the passthrough
 occurs without any intervention.

 These files were modified from an original Essex installation to make this
 possible:

 (on the controller)
 create a g1.small instance_type with {'free_gpus': '1'} as 
 instance_type_extra_specs
 select the compute_filter filter to enforce extra_specs in scheduling (also,
 the function host_passes of the filter is slightly modified so that it reads
 key>=value instead of key==value... free_gpus>=1 is good; it does not need to
 be strictly equal to 1)
   

I think this has already been done for you in Folsom via the
ComputeCapabilitiesFilter and Jinwoo Suh's addition of
instance_type_extra_specs operators.  See commit 90f77d71.
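A sketch of how that could look in Folsom, reusing this thread's free_gpus
capability (the exact nova-manage syntax here is illustrative):

# attach an operator-style extra_spec to the flavor instead of patching the filter
nova-manage instance_type set_key --name=g1.small --key=free_gpus --value='>= 1'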

 (on the compute node)
 nova/virt/libvirt/gpu.py
   a new file that contains functions like detach_all_gpus, get_free_gpus, 
 simple stuff 

Have you considered pushing this upstream?

 using virsh and lspci
 nova/virt/libvirt/connection.py
   calls gpu.detach_all_gpus on startup (virsh nodedev-dettach)
   builds the VM libvirt.xml as normal but also adds the pci nodedev 
 definition
   advertises free_gpus capabilities so that the scheduler gets it through 
 host_state calls
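Something like this, I imagine; a hypothetical sketch, with the function names
taken from the description above and the device list and use of virsh assumed:

import subprocess

# PCI node-device names of the GPUs set aside for passthrough (site-specific)
PASSTHROUGH_GPUS = ['pci_0000_02_00_0', 'pci_0000_03_00_0']

def detach_all_gpus():
    """Detach the passthrough GPUs from the host so guests can claim them."""
    for dev in PASSTHROUGH_GPUS:
        # note the legacy double-t spelling of the virsh subcommand
        subprocess.check_call(['virsh', 'nodedev-dettach', dev])

def get_free_gpus(assigned):
    """Return the passthrough GPUs not currently assigned to an instance."""
    return [dev for dev in PASSTHROUGH_GPUS if dev not in assigned]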

 that's about it, with that we get:

 1. compute nodes that detach all GPUS on startup
 2. compute nodes that advertise the number of free GPUs to the scheduler
 3. compute nodes that are able to build the VMs libvirt.xml with a valid, 
 free GPU definition when a VM is launched
 4. controller that runs a scheduler that knows where to send VMs (free_gpus 
 = 1)

 It does the trick for now, with RADEON 6950 I get 100% success, I spawn a VM 
 and in 20

Re: [Openstack] What is the most commonly used Hypervisor and toolset combination?

2012-07-18 Thread Jim Fehlig
John Garbutt wrote:
 To my knowledge, if you want to use Xen, using XCP or XenServer (i.e. using 
 XenAPI driver) is the way to go. If you look at the contributions to the 
 drivers, you can have a good guess at who is using them.

 I know people are going into production on XenAPI, not heard about 
 Xen+libvirt in production. Having said this, I have seen some fixes to Folsom 
 around Xen + libvirt, I think from SUSE?
   

Yes, I'm slowly working on improving support for xen.org Xen via the
libvirt driver and hope to have these improvements in for the Folsom
release.

Regards,
Jim




Re: [Openstack] debugging a db migration script

2012-07-17 Thread Jim Fehlig
Hengqing Hu wrote:
 There is a test in nova:

 You can run run_tests.sh in your nova root like this:
 ./run_tests.sh -v test_migrations

Thanks for the tip!

 If there is something wrong in the migration script,
 it will show up in the console.

Indeed.  And the problems are easy to fix once you know the errors :).

Regards,
Jim


 On 07/17/2012 11:59 AM, Jim Fehlig wrote:
 I'm working on a patch that adds a column to the compute_nodes table in
 the nova db, but it seems my db migration script fails when calling 'db
 sync' in stack.sh.  I tried running the command manually, same failure:

 stack@virt71:~> /opt/stack/nova/bin/nova-manage --debug -v db sync
 2012-07-16 21:42:52 DEBUG nova.utils [-] backend module
 'nova.db.sqlalchemy.migration' from
 '/opt/stack/nova/nova/db/sqlalchemy/migration.pyc' from (pid=19230)
 __get_backend /opt/stack/nova/nova/utils.py:484
 /usr/lib64/python2.6/site-packages/sqlalchemy/pool.py:681:
 SADeprecationWarning: The 'listeners' argument to Pool (and
 create_engine()) is deprecated.  Use event.listen().
   Pool.__init__(self, creator, **kw)
 /usr/lib64/python2.6/site-packages/sqlalchemy/pool.py:159:
 SADeprecationWarning: Pool.add_listener is deprecated.  Use
 event.listen()
   self.add_listener(l)
 Command failed, please check log for more info

 I can't find anything useful in any log (/var/log/*, /opt/stack/log/*).
 I ran the above under strace and saw my migration script being opened and
 then, shortly thereafter, the writing of "Command failed, please check log
 for more info" and exit(1) :).

 The patch also adds a 'hypervisor_type' column to the instances table,
 and that migration script succeeds!

 Any hints for debugging a db migration script?

 Thanks,
 Jim




Re: [Openstack] debugging a db migration script

2012-07-17 Thread Jim Fehlig
Adam Young wrote:
 On 07/16/2012 11:59 PM, Jim Fehlig wrote:
 I'm working on a patch that adds a column to the compute_nodes table in
 the nova db, but it seems my db migration script fails when calling 'db
 sync' in stack.sh.  I tried running the command manually, same failure:

 stack@virt71:~> /opt/stack/nova/bin/nova-manage --debug -v db sync
 2012-07-16 21:42:52 DEBUG nova.utils [-] backend module
 'nova.db.sqlalchemy.migration' from
 '/opt/stack/nova/nova/db/sqlalchemy/migration.pyc' from (pid=19230)
 __get_backend /opt/stack/nova/nova/utils.py:484
 /usr/lib64/python2.6/site-packages/sqlalchemy/pool.py:681:
 SADeprecationWarning: The 'listeners' argument to Pool (and
 create_engine()) is deprecated.  Use event.listen().
   Pool.__init__(self, creator, **kw)
 /usr/lib64/python2.6/site-packages/sqlalchemy/pool.py:159:
 SADeprecationWarning: Pool.add_listener is deprecated.  Use
 event.listen()
   self.add_listener(l)
 Command failed, please check log for more info

 I can't find anything useful in any log (/var/log/*, /opt/stack/log/*).
 I ran the above under strace and saw my migration script being opened and
 then, shortly thereafter, the writing of "Command failed, please check log
 for more info" and exit(1) :).

 The patch also adds a 'hypervisor_type' column to the instances table,
 and that migration script succeeds!

 Any hints for debugging a db migration script?

 Thanks,
 Jim



 I just went through this with a Keystone change.  What I would suggest
 is:

 1. Get a blank database.
 2. Run the DB migration without your scripts.
 3. Get the SQL you want to run to work correctly from that step.
 4. Add in a database-appropriate SQL script that has exactly your SQL
 from above.
 5. Run the whole migration.

 Yes, it is as labor intensive and painful as it sounds.  You want to
 make sure that you have exactly the preconditions that your script
 expects.  What I had to do was actually go back and modify earlier DB
 init code due to the SQL Alchemy column definition changing.

 Note this change:
 https://review.openstack.org/#/c/7754/9/keystone/common/sql/migrate_repo/versions/001_add_initial_tables.py

 I now have to explicitly create the token table to make sure it is in the
 state it would be in today.  Since my code had modified the token table,
 had I not done this, by the end of stage 1 SQL processing the
 database would have had this table in stage 2 state.

 Then I went and added a SQL script for modifying the table.  Since I
 was altering a table without dumping the data in it, it was a
 non-trivial change that SQL Alchemy couldn't handle (AFAICT).  Instead,
 I added a SQL script:
 https://review.openstack.org/#/c/7754/9/keystone/common/sql/migrate_repo/versions/002_mysql_upgrade.sql


 Make sure you have a comparable downgrade script, too.

 https://review.openstack.org/#/c/7754/9/keystone/common/sql/migrate_repo/versions/002_mysql_downgrade.sql

Thanks for the details, Adam!  This is really helpful.

Regards,
Jim




 For Keystone, we run the upgrade using a stand-alone executable,
 keystone/bin/keystone-manage.  In nova, it looks like there is
 bin/nova-manage to do the same thing.  I am using Eclipse and
 PyDev as my development environment, with the integrated debugger
 to step through the code.  That makes it a lot less painful to debug.





Re: [Openstack] Support for snapshot of LVM backend image

2012-07-16 Thread Jim Fehlig
Boris Filippov wrote:
 But qemu can also write the vm state outside of the backend image, which
 should be usable with all image backends.
   

 Use that instead of managedSave?  This should suffice: save VM state,
 suspend the domain, take the snapshot, then restore the VM with the previous
 state?
   

For best results, I think you would need to retain the VM state, which
contains the cache buffers, and use it in conjunction with the
snapshot.  Otherwise you only have a crash-consistent snapshot, right?  I
suppose modern filesystems and databases can cope, though.
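Something along those lines could be scripted with the existing tools.  A
sketch only, assuming an LVM-backed disk; the instance and volume names are
illustrative:

# write out the VM state (memory and device state); this stops the domain
virsh save instance-0001 /var/lib/nova/save/instance-0001.sav
# snapshot the disk while the domain is down, so it matches the saved state
lvcreate --snapshot --name instance-0001-snap --size 1G /dev/nova-vg/instance-0001
# resume the domain from the saved state
virsh restore /var/lib/nova/save/instance-0001.sav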

Regards,
Jim




Re: [Openstack] One question on the compute_filter

2012-07-16 Thread Jim Fehlig
Jiang, Yunhong wrote:

 Hi, Joseph

  I’m working on the patch for blueprint
 https://blueprints.launchpad.net/nova/+spec/update-flavor-key-value,
 to add/delete the extra_specs for a flavor through nova-manage. I’m
 still setting up my environment to push the patch.

  

  However, when I was testing my patch, I noticed that
 compute_filter assumes it will handle all of the “extra_specs”. If it
 can't find a corresponding key in the capabilities, it will fail the
 host. IMHO, this is a bit of overkill. For example, currently
 trusted_filter.py uses the extra_specs to check whether trusted_host is
 required, which means the compute filter and trusted filter can’t be used
 at the same time.

  I think the compute filter should explicitly define all the keys
 that it cares about, like cpu_info, cpu_arch, xpu_arch, and only check the
 corresponding extra_specs key/value pairs. After all, extra_specs is
 not compute_extra_specs.

  I noticed the patch in
 https://review.openstack.org/#/c/8089/, but it seems this patch still
 will not fix this issue.

  

 Any ideas or suggestions? I’m glad to create a patch once there is a
 conclusion on this issue.


I have been working on a patch [1] that allows the ComputeFilter to
filter on the (architecture, hypervisor_type, vm_mode) triplet as specified
in the *instance properties*.  In the blueprint you mention, I see
cpu_type=itanium as an example of a key/value in the extra_specs of
instance types.  Are architecture, preferred hypervisor, and vm_mode (PV
vs HVM) properties of the image (which are used to populate the
instance_properties) or properties of a flavor (used to populate
instance_type)?  IMO, they are properties of an image.
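For example, with the image-property approach a Xen PV image would be tagged
once at upload time (illustrative values, using the Essex-era glance CLI):

glance update image-uuid architecture=x86_64 hypervisor_type=xen vm_mode=pv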

What are the other commonly used extra_specs that are being checked by
the ComputeFilter?  Are they properties of an image or a flavor? 
Perhaps the checks for extra_specs can be removed from the ComputeFilter
entirely and done by other filters as you mention.

Regards,
Jim

[1] https://lists.launchpad.net/openstack/msg14121.html




[Openstack] [RFC] Add more host checks to the compute filter

2012-07-03 Thread Jim Fehlig
Hi Daniel,

Attached is a patch that implements filtering on the (architecture,
hypervisor_type, vm_mode) tuple, as discussed in this previous patch:

https://review.openstack.org/#/c/9110/

CC'ing Chuck since he is the author of the ArchFilter patch.

Feedback appreciated before sending this off to gerrit.

Regards,
Jim
From bc96fdf618a2b9426f4c5db59fc087f849ac9873 Mon Sep 17 00:00:00 2001
From: Jim Fehlig jfeh...@suse.com
Date: Mon, 25 Jun 2012 15:54:43 -0600
Subject: [PATCH] Add more host checks to the compute filter

As discussed in a previous version of this patch [1], this change adds
checks in the ComputeFilter to verify hosts can support the
(architecture, hypervisor_type, vm_mode) tuple specified in the instance
properties.

Adding these checks to the compute filter seems consistent with its
definition [2]:

ComputeFilter - checks that the capabilities provided by the compute service
satisfy the extra specifications, associated with the instance type.

[1] https://review.openstack.org/#/c/9110/
[2] https://github.com/openstack/nova/blob/master/doc/source/devref/filter_scheduler.rst

Change-Id: I1fcd7f9c706184701ca02f7d1672541d26c07f31
---
 nova/compute/api.py                                |    4 +-
 .../versions/108_instance_hypervisor_type.py       |   46 ++
 nova/db/sqlalchemy/models.py                       |    1 +
 nova/scheduler/filters/arch_filter.py              |   44 --
 nova/scheduler/filters/compute_filter.py           |   56 ++-
 nova/tests/scheduler/test_host_filters.py          |  160 +---
 6 files changed, 211 insertions(+), 100 deletions(-)

diff --git a/nova/compute/api.py b/nova/compute/api.py
index 1e3ebf1..008bdd6 100644
--- a/nova/compute/api.py
+++ b/nova/compute/api.py
@@ -323,7 +323,9 @@ class API(base.Base):
             return value
 
         options_from_image = {'os_type': prop('os_type'),
-                              'vm_mode': prop('vm_mode')}
+                              'architecture': prop('architecture'),
+                              'vm_mode': prop('vm_mode'),
+                              'hypervisor_type': prop('hypervisor_type')}
 
         # If instance doesn't have auto_disk_config overridden by request, use
         # whatever the image indicates
diff --git a/nova/db/sqlalchemy/migrate_repo/versions/108_instance_hypervisor_type.py b/nova/db/sqlalchemy/migrate_repo/versions/108_instance_hypervisor_type.py
new file mode 100644
index 000..f68a6a4
--- /dev/null
+++ b/nova/db/sqlalchemy/migrate_repo/versions/108_instance_hypervisor_type.py
@@ -0,0 +1,46 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright 2012 OpenStack LLC.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from sqlalchemy import Column, Integer, MetaData, String, Table
+
+
+def upgrade(migrate_engine):
+    meta = MetaData()
+    meta.bind = migrate_engine
+
+    # add column:
+    instances = Table('instances', meta,
+                      Column('id', Integer(), primary_key=True, nullable=False)
+                      )
+    hypervisor_type = Column('hypervisor_type',
+                             String(length=255, convert_unicode=False,
+                                    assert_unicode=None, unicode_error=None,
+                                    _warn_on_bytestring=False),
+                             nullable=True)
+
+    instances.create_column(hypervisor_type)
+
+
+def downgrade(migrate_engine):
+    meta = MetaData()
+    meta.bind = migrate_engine
+
+    # drop column:
+    instances = Table('instances', meta,
+                      Column('id', Integer(), primary_key=True, nullable=False)
+                      )
+
+    instances.drop_column('hypervisor_type')
diff --git a/nova/db/sqlalchemy/models.py b/nova/db/sqlalchemy/models.py
index 3359891..30f23e6 100644
--- a/nova/db/sqlalchemy/models.py
+++ b/nova/db/sqlalchemy/models.py
@@ -253,6 +253,7 @@ class Instance(BASE, NovaBase):
 
     os_type = Column(String(255))
     architecture = Column(String(255))
+    hypervisor_type = Column(String(255))
     vm_mode = Column(String(255))
     uuid = Column(String(36))
 
diff --git a/nova/scheduler/filters/arch_filter.py b/nova/scheduler/filters/arch_filter.py
deleted file mode 100644
index 1f11d07..000
--- a/nova/scheduler/filters/arch_filter.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# Copyright (c) 2011-2012 OpenStack, LLC
-# Copyright (c) 2012 Canonical Ltd
-# All Rights Reserved.
-#
-#    Licensed under

[Openstack] [nova] RFC - filtering compute nodes in a mixed environment

2012-06-25 Thread Jim Fehlig
In a previous thread [1], I mentioned two possibilities for controlling
the scheduling of instances to an appropriate compute node in a mixed
node (Xen, KVM) environment.  The first approach uses availability
zones, the second uses the existing vm_mode image property.  Folks
seemed to agree on the latter, and the attached patch adds a check to
the compute filter to ensure the compute node can accommodate the
instance type.

The compute filter seems like the right place to add this check, but I'm
posting this as an RFC since I'm not sure it would be agreeable to
everyone, as compared to, for example, a custom filter.

Regards,
Jim

[1]
http://openstack.markmail.org/search/?q=improve%20xen#query:improve%20xen+page:1+mid:knmnylknf2imnruy+state:results

From bb8777a415d5db22b83971357882261fbef092a9 Mon Sep 17 00:00:00 2001
From: Jim Fehlig jfeh...@suse.com
Date: Mon, 25 Jun 2012 15:54:43 -0600
Subject: [PATCH] Add check for vm_mode in compute filter

Add a check in the scheduler compute filter to see if the compute service
supports the vm_mode specified in the instance properties.

As mentioned in a previous thread [1], there needs to be a way to control
scheduling of instances to an appropriate node in a mixed compute node
environment.  The existing vm_mode property, in conjunction with the
additional_compute_capabilities flag, provides a mechanism to filter
appropriate nodes.

[1] http://openstack.markmail.org/search/?q=improve%20xen#query:improve%20xen+page:1+mid:knmnylknf2imnruy+state:results
---
 nova/scheduler/filters/compute_filter.py  |   20 +++-
 nova/tests/scheduler/test_host_filters.py |   26 ++
 2 files changed, 45 insertions(+), 1 deletion(-)

diff --git a/nova/scheduler/filters/compute_filter.py b/nova/scheduler/filters/compute_filter.py
index 5409d3d..5187c39 100644
--- a/nova/scheduler/filters/compute_filter.py
+++ b/nova/scheduler/filters/compute_filter.py
@@ -22,7 +22,8 @@ LOG = logging.getLogger(__name__)
 
 
 class ComputeFilter(filters.BaseHostFilter):
-    """HostFilter hard-coded to work with InstanceType records."""
+    """HostFilter hard-coded to work with InstanceType and
+    InstanceProperties records."""
 
     def _satisfies_extra_specs(self, capabilities, instance_type):
         """Check that the capabilities provided by the compute service
@@ -38,8 +39,21 @@ class ComputeFilter(filters.BaseHostFilter):
                 return False
         return True
 
+    def _satisfies_capabilities(self, capabilities, instance_props):
+        """Check that the capabilities provided by the compute service
+        satisfy properties defined in the instance."""
+        vm_mode = instance_props.get('vm_mode')
+        if not vm_mode:
+            return True
+        if capabilities.get(vm_mode, None):
+            return True
+        else:
+            return False
+
     def host_passes(self, host_state, filter_properties):
         """Return a list of hosts that can create instance_type."""
+        spec = filter_properties.get('request_spec', {})
+        instance_props = spec.get('instance_properties', {})
         instance_type = filter_properties.get('instance_type')
         if host_state.topic != 'compute' or not instance_type:
             return True
@@ -57,4 +71,8 @@ class ComputeFilter(filters.BaseHostFilter):
             LOG.debug(_("%(host_state)s fails instance_type extra_specs "
                         "requirements"), locals())
             return False
+        if not self._satisfies_capabilities(capabilities, instance_props):
+            LOG.debug(_("%(host_state)s fails instance_properties "
+                        "requirements"), locals())
+            return False
         return True
diff --git a/nova/tests/scheduler/test_host_filters.py b/nova/tests/scheduler/test_host_filters.py
index 80da5ac..3a99292 100644
--- a/nova/tests/scheduler/test_host_filters.py
+++ b/nova/tests/scheduler/test_host_filters.py
@@ -349,6 +349,32 @@ class HostFiltersTestCase(test.TestCase):
 
         self.assertFalse(filt_cls.host_passes(host, filter_properties))
 
+    def test_compute_filter_passes_additional_caps(self):
+        self._stub_service_is_up(True)
+        filt_cls = self.class_map['ComputeFilter']()
+        req_spec = {'instance_properties': {'vm_mode': 'pv'}}
+        capabilities = {'enabled': True, 'pv': True, 'hvm': True}
+        service = {'disabled': False}
+        filter_properties = {'instance_type': {'memory_mb': 1024},
+                             'request_spec': req_spec}
+        host = fakes.FakeHostState('host1', 'compute',
+                {'free_ram_mb': 1024, 'capabilities': capabilities,
+                 'service': service})
+        self.assertTrue(filt_cls.host_passes(host, filter_properties))
+
+    def test_compute_filter_fails_additional_caps(self):
+        self._stub_service_is_up(True)
+        filt_cls = self.class_map['ComputeFilter']()
+        req_spec = {'instance_properties': {'vm_mode': 'pv'}}
+        capabilities = {'enabled': True

[Openstack] Improving Xen support in the libvirt driver

2012-05-09 Thread Jim Fehlig
Hi,

I've been tinkering with improving Xen support in the libvirt driver and
wanted to discuss a few issues before submitting patches.

Even the latest upstream release of Xen (4.1.x) contains a rather old
qemu, version 0.10.2, which rejects qcow2 images with cluster size >
64K.  The libvirt driver creates the COW image with a cluster size of 2M.
Is this for performance reasons?  Any objections to removing that option
and going with the 'qemu-img create' default of 64K?
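In other words, the change would be (backing file path illustrative):

# today: 2M clusters, which the qemu 0.10.2 shipped with Xen 4.1 rejects
qemu-img create -f qcow2 -o cluster_size=2M,backing_file=/path/to/base.img disk.qcow2
# proposed: drop the option and take the qemu-img default of 64K
qemu-img create -f qcow2 -o backing_file=/path/to/base.img disk.qcow2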

In a setup with both Xen and KVM compute nodes, I've found a few options
for controlling scheduling of an instance to the correct node.  One
option uses availability zones, e.g.

# nova.conf on Xen compute nodes
node_availability_zone=xen-hosts

# launching a Xen PV instance
nova boot --image xen-pv-image --availability_zone xen-hosts ...

The other involves a recent commit adding additional capabilities for
compute nodes [1] and the vm_mode image property [2] used by the
XenServer driver to distinguish HVM vs PV images.  E.g.

# nova.conf on Xen compute nodes
additional_compute_capabilities=pv,hvm

# Set vm_mode property on Xen image
glance update image-uuid vm_mode=pv

I prefer the latter approach since vm_mode will be needed in the
libvirt driver anyhow to create proper config for PV vs HVM instances.
Currently, the driver creates usable config for PV instances but needs
some adjustments for HVM.
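For reference, the guest config differs roughly like this between the two
modes (a sketch; the kernel/loader paths vary by distro):

<!-- PV guest: boot via pygrub -->
<bootloader>/usr/bin/pygrub</bootloader>
<os>
  <type>linux</type>
</os>

<!-- HVM guest: boot via hvmloader -->
<os>
  <type>hvm</type>
  <loader>/usr/lib/xen/boot/hvmloader</loader>
  <boot dev='hd'/>
</os>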

Regards,
Jim

[1]
https://github.com/openstack/nova/commit/bd30eb36bbf2c5164ac47256355c543f6b77e064
[2]
https://github.com/openstack/nova/commit/bd30eb36bbf2c5164ac47256355c543f6b77e064


_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp