On Fri, Jun 24, 2011 at 12:39:24PM +0100, Daniel P. Berrange wrote:
Originally I could have saved space, but now that sanlock mandates
alignment of 1MB / 8MB, this benefit has gone. Is there in fact any
compelling reason to allow either num_hosts or max_hosts to be
configurable at all ? If
On Fri, Jun 17, 2011 at 01:38:21PM +0100, Daniel P. Berrange wrote:
To make use of this capability the admin will need to do
several tasks:
- Mount an NFS volume (or other shared filesystem)
on /var/lib/libvirt/sanlock
- Configure 'host_id' in /etc/libvirt/qemu-sanlock.conf
with a
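The two setup tasks above could be sketched like this (the NFS server name and the host_id value of 1 are made-up examples, not taken from the thread; each host in the lockspace needs a distinct host_id):

```shell
# Mount the shared filesystem sanlock will use for its lease files
# (nfs.example.com:/export/sanlock is an illustrative export path)
mount -t nfs nfs.example.com:/export/sanlock /var/lib/libvirt/sanlock

# Give this host a unique host_id in /etc/libvirt/qemu-sanlock.conf
# (the value 1 is an example; every host needs its own)
echo 'host_id = 1' >> /etc/libvirt/qemu-sanlock.conf
```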
On Tue, Nov 30, 2010 at 10:20:38AM +0000, Daniel P. Berrange wrote:
The AttachObject/DetachObject calls are made by libvirtd, whenever
it is about to do something on behalf of the managed object holding
the lock. eg when libvirtd does disk hotplug it will do
$man = NewObject()
On Wed, Nov 24, 2010 at 03:08:41PM -0500, David Teigland wrote:
LOCK_MODE_NONE - the lock manager is unused, and it's up to the
application to do its own locking or coordination when accessing the
resource, e.g. if there's a clustered db or fs on the device that does its
own coordination. You
On Mon, Nov 22, 2010 at 06:09:21PM +0000, Daniel P. Berrange wrote:
+/*
+ * Flags to pass to 'load_drv' and also 'new_drv' method
+ * Plugins must support at least one of the modes. If a
+ * mode is unsupported, it must return an error
+ */
+enum {
+    VIR_LOCK_MANAGER_MODE_CONTENT = (1 << 0),
+/**
+ * virLockManagerNew:
+ * @man: the lock manager context
+ * @type: the type of process to be supervised
+ * @flags: optional flags, currently unused
+ *
+ * Initialize a new context to supervise a process, usually
+ * a virtual machine. If the lock driver requested a
+ * private
On Thu, Sep 16, 2010 at 01:50:46PM +0100, Daniel P. Berrange wrote:
The distinction is between what is possible, and what is recommended to
do. Even with the supervisor QEMU having separate SELinux contexts,
it is still desirable to lock down the supervisor to only be able to
access the VM
On Mon, Sep 13, 2010 at 02:29:49PM +0100, Daniel P. Berrange wrote:
We are looking into the possibility of not having a process manage a
VM but rather having the sync_manager process register with a central
daemon and exec into qemu (or anything else) so assuming there is a
process per VM
On Sun, Aug 22, 2010 at 12:13:16PM -0400, Perry Myers wrote:
On 08/19/2010 01:23 PM, David Teigland wrote:
On Thu, Aug 19, 2010 at 11:12:25AM -0400, David Teigland wrote:
I'm only aware of one goal, and the current plan is to implement it
correctly and completely. That goal is to lock vm
On Wed, Aug 18, 2010 at 07:44:18PM -0400, Perry Myers wrote:
On 08/11/2010 05:27 PM, Daniel P. Berrange wrote:
On Wed, Aug 11, 2010 at 03:37:12PM -0400, David Teigland wrote:
On Wed, Aug 11, 2010 at 05:59:55PM +0100, Daniel P. Berrange wrote:
On Tue, Aug 10, 2010 at 12:44:06PM -0400, David
On Thu, Aug 19, 2010 at 11:12:25AM -0400, David Teigland wrote:
I'm only aware of one goal, and the current plan is to implement it
correctly and completely. That goal is to lock vm images so if the vm
happens to run on two hosts, only one instance can access the image.
(That's slightly
Hi,
We've been working on a program called sync_manager that implements
shared-storage-based leases to protect shared resources. One way we'd like
to use it is to protect vm images that reside on shared storage,
i.e. preventing two VMs on two hosts from using the same image at once.
It's
On Wed, Aug 11, 2010 at 05:59:55PM +0100, Daniel P. Berrange wrote:
On Tue, Aug 10, 2010 at 12:44:06PM -0400, David Teigland wrote:
Hi,
We've been working on a program called sync_manager that implements
shared-storage-based leases to protect shared resources. One way we'd like
to use
On Wed, Aug 11, 2010 at 04:53:20PM -0400, Chris Lalancette wrote:
1. sm-S holds the lease, and is monitoring qemu
2. migration begins from S to D
3. libvirt-D runs sm-D: sync_manager -c qemu with the addition of a new
sync_manager option --receive-lease
4. sm-D writes its hostid D to
On Wed, Aug 11, 2010 at 03:07:29PM -0600, Eric Blake wrote:
On 08/11/2010 02:53 PM, Chris Lalancette wrote:
Unfortunately, this is not how migration works in qemu/kvm. Using your
nomenclature above, it's more like the following:
A guest is running on S. A migration is then initiated,