[libvirt] Error setting up graphics device:list index out of range

2008-11-03 Thread Frederik Himpe
When I try to create a new KVM VM x86_64, os variant Ubuntu Hardy, it 
fails at the end with this message:

Error setting up graphics device:list index out of range

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/create.py", line 640, in finish
    guest._graphics_dev = virtinst.VirtualGraphics(type=virtinst.VirtualGraphics.TYPE_VNC)
  File "/usr/lib/python2.5/site-packages/virtinst/Guest.py", line 207, in __init__
    self.set_keymap(keymap)
  File "/usr/lib/python2.5/site-packages/virtinst/Guest.py", line 219, in set_keymap
    val = util.default_keymap()
  File "/usr/lib/python2.5/site-packages/virtinst/util.py", line 293, in default_keymap
    kt = s.split('"')[1]
IndexError: list index out of range

What could be wrong here? I'm using virt-manager 0.6.0, libvirt 0.4.6, 
python-virtinst 0.400.0 on Mandriva Linux 2009.0 x86_64.
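[Editor's note] The failing line splits the KEYTABLE= entry of /etc/sysconfig/keyboard on double quotes, so a keyboard file that writes the value unquoted (e.g. KEYTABLE=us) leaves nothing at index 1. A minimal sketch of that failure mode; quoted_value is a hypothetical C analogue of the Python split, not virtinst code:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical C analogue of the failing Python line
 * `kt = s.split('"')[1]`: extract the text between the first pair
 * of double quotes. Returns NULL when the line carries no quoted
 * value -- the situation that produces the IndexError above. */
static const char *quoted_value(const char *line, char *out, size_t outlen)
{
    const char *open = strchr(line, '"');
    if (open == NULL)
        return NULL;                      /* e.g. KEYTABLE=us */
    const char *close = strchr(open + 1, '"');
    size_t len = close ? (size_t)(close - open - 1) : strlen(open + 1);
    if (len >= outlen)
        len = outlen - 1;
    memcpy(out, open + 1, len);
    out[len] = '\0';
    return out;                           /* e.g. "us" from KEYTABLE="us" */
}
```

A robust parser would fall back to splitting on '=' when no quotes are present rather than indexing blindly.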

-- 
Frederik Himpe

--
Libvir-list mailing list
Libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH] set cache=off on -disk line in qemu driver if <shareable/> (but not <readonly/>) is set.

2008-11-03 Thread Daniel P. Berrange
On Sat, Nov 01, 2008 at 06:20:27PM -0500, Charles Duffy wrote:
 As cache=off is necessary for clustering filesystems such as GFS (and 
 such is the point of <shareable/>, yes?), I believe this is correct behavior.

Yes, I believe you are correct. On a single host our current setup is
sufficient, but if several VMs on different hosts are accessing the 
same underlying shared storage, then we do need to disable the caching
of reads. So cache=off for shared disks is the safest option.
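[Editor's note] The rule being agreed on here is small enough to state in code. A hedged sketch; the names are illustrative, not the actual qemu driver helpers:

```c
#include <string.h>

/* Illustrative condensation of the rule discussed above: a disk
 * marked <shareable/> but not <readonly/> may be written by other
 * hosts, so the host page cache must be bypassed. */
static const char *disk_cache_opt(int shareable, int readonly)
{
    if (shareable && !readonly)
        return "cache=off";   /* e.g. -drive file=...,cache=off */
    return "";                /* otherwise keep QEMU's default */
}
```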

Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



[libvirt] PATCH: Pass -uuid and -domid to QEMU if available

2008-11-03 Thread Daniel P. Berrange
Latest QEMU code now allows setting of the UUID in its SMBIOS data tables
via the -uuid command line arg. This patch makes use of that feature by
passing in the libvirt UUID for the VM. While doing this I also added
support for the Xenner-specific -domid flag.
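[Editor's note] For context, the string handed to QEMU is the canonical 8-4-4-4-12 rendering of the 16 raw UUID bytes. A sketch of roughly what virUUIDFormat() produces; format_uuid is illustrative, not the libvirt function:

```c
#include <stdio.h>
#include <string.h>

#define UUID_BUFLEN 37   /* 32 hex digits + 4 dashes + NUL */

/* Render 16 raw UUID bytes in the canonical 8-4-4-4-12 form, the
 * string that then appears on the command line as `-uuid <string>`. */
static void format_uuid(const unsigned char raw[16], char out[UUID_BUFLEN])
{
    snprintf(out, UUID_BUFLEN,
             "%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-"
             "%02x%02x%02x%02x%02x%02x",
             raw[0], raw[1], raw[2], raw[3], raw[4], raw[5],
             raw[6], raw[7], raw[8], raw[9], raw[10], raw[11],
             raw[12], raw[13], raw[14], raw[15]);
}
```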

Daniel

Index: src/qemu_conf.c
===
RCS file: /data/cvs/libvirt/src/qemu_conf.c,v
retrieving revision 1.103
diff -u -p -r1.103 qemu_conf.c
--- src/qemu_conf.c 28 Oct 2008 17:43:25 -  1.103
+++ src/qemu_conf.c 3 Nov 2008 12:52:06 -
@@ -439,6 +439,10 @@ int qemudExtractVersionInfo(const char *
         flags |= QEMUD_CMD_FLAG_NO_REBOOT;
     if (strstr(help, "-name"))
         flags |= QEMUD_CMD_FLAG_NAME;
+    if (strstr(help, "-uuid"))
+        flags |= QEMUD_CMD_FLAG_UUID;
+    if (strstr(help, "-domid"))
+        flags |= QEMUD_CMD_FLAG_DOMID;
     if (strstr(help, "-drive"))
         flags |= QEMUD_CMD_FLAG_DRIVE;
     if (strstr(help, "boot=on"))
@@ -713,6 +717,8 @@ int qemudBuildCommandLine(virConnectPtr 
     int qenvc = 0, qenva = 0;
     const char **qenv = NULL;
     const char *emulator;
+    char uuid[VIR_UUID_STRING_BUFLEN];
+    char domid[50];
 
     uname(&ut);
 
@@ -723,6 +729,8 @@ int qemudBuildCommandLine(virConnectPtr 
         !ut.machine[4])
         ut.machine[1] = '6';
 
+    virUUIDFormat(vm->def->uuid, uuid);
+
     /* Need to explicitly disable KQEMU if
      * 1. Arch matches host arch
      * 2. Guest is 'qemu'
@@ -802,6 +810,7 @@ int qemudBuildCommandLine(virConnectPtr 
 
     snprintf(memory, sizeof(memory), "%lu", vm->def->memory/1024);
     snprintf(vcpus, sizeof(vcpus), "%lu", vm->def->vcpus);
+    snprintf(domid, sizeof(domid), "%d", vm->def->id);
 
     ADD_ENV_LIT("LC_ALL=C");
 
@@ -834,6 +843,15 @@ int qemudBuildCommandLine(virConnectPtr 
         ADD_ARG_LIT("-name");
         ADD_ARG_LIT(vm->def->name);
     }
+    if (qemuCmdFlags & QEMUD_CMD_FLAG_UUID) {
+        ADD_ARG_LIT("-uuid");
+        ADD_ARG_LIT(uuid);
+    }
+    if (qemuCmdFlags & QEMUD_CMD_FLAG_DOMID) {
+        ADD_ARG_LIT("-domid");
+        ADD_ARG_LIT(domid);
+    }
+
     /*
      * NB, -nographic *MUST* come before any serial, or monitor
      * or parallel port flags due to QEMU craziness, where it
Index: src/qemu_conf.h
===
RCS file: /data/cvs/libvirt/src/qemu_conf.h,v
retrieving revision 1.43
diff -u -p -r1.43 qemu_conf.h
--- src/qemu_conf.h 23 Oct 2008 13:18:18 -  1.43
+++ src/qemu_conf.h 3 Nov 2008 12:52:06 -
@@ -44,6 +44,8 @@ enum qemud_cmd_flags {
     QEMUD_CMD_FLAG_DRIVE      = (1 << 3),
     QEMUD_CMD_FLAG_DRIVE_BOOT = (1 << 4),
     QEMUD_CMD_FLAG_NAME       = (1 << 5),
+    QEMUD_CMD_FLAG_UUID       = (1 << 6),
+    QEMUD_CMD_FLAG_DOMID      = (1 << 7), /* Xenner only */
 };
 
 /* Main driver state */
Index: src/qemu_driver.c
===
RCS file: /data/cvs/libvirt/src/qemu_driver.c,v
retrieving revision 1.144
diff -u -p -r1.144 qemu_driver.c
--- src/qemu_driver.c   29 Oct 2008 14:32:41 -  1.144
+++ src/qemu_driver.c   3 Nov 2008 12:52:07 -
@@ -860,10 +860,12 @@ static int qemudStartVMDaemon(virConnect
         return -1;
     }
 
+    vm->def->id = driver->nextvmid++;
     if (qemudBuildCommandLine(conn, driver, vm,
                               qemuCmdFlags, &argv, &progenv,
                               &tapfds, &ntapfds, migrateFrom) < 0) {
         close(vm->logfile);
+        vm->def->id = -1;
         vm->logfile = -1;
         return -1;
     }
@@ -901,10 +903,10 @@ static int qemudStartVMDaemon(virConnect
     ret = virExec(conn, argv, progenv, keepfd, &vm->pid,
                   vm->stdin_fd, &vm->stdout_fd, &vm->stderr_fd,
                   VIR_EXEC_NONBLOCK);
-    if (ret == 0) {
-        vm->def->id = driver->nextvmid++;
+    if (ret == 0)
         vm->state = migrateFrom ? VIR_DOMAIN_PAUSED : VIR_DOMAIN_RUNNING;
-    }
+    else
+        vm->def->id = -1;
 
 for (i = 0 ; argv[i] ; i++)
 VIR_FREE(argv[i]);
Index: tests/qemuxml2argvtest.c
===
RCS file: /data/cvs/libvirt/tests/qemuxml2argvtest.c,v
retrieving revision 1.32
diff -u -p -r1.32 qemuxml2argvtest.c
--- tests/qemuxml2argvtest.c10 Oct 2008 16:52:20 -  1.32
+++ tests/qemuxml2argvtest.c3 Nov 2008 12:52:07 -
@@ -43,7 +43,10 @@ static int testCompareXMLToArgvFiles(con
 
     memset(&vm, 0, sizeof vm);
     vm.def = vmdef;
-    vm.def->id = -1;
+    if (extraFlags & QEMUD_CMD_FLAG_DOMID)
+        vm.def->id = 6;
+    else
+        vm.def->id = -1;
     vm.pid = -1;
 
     flags = QEMUD_CMD_FLAG_VNC_COLON |
@@ -196,6 +199,8 @@ mymain(int argc, char **argv)
     DO_TEST("input-xen", 0);
     DO_TEST("misc-acpi", 0);
     DO_TEST("misc-no-reboot", 0);
+    DO_TEST("misc-uuid", QEMUD_CMD_FLAG_NAME |
+            QEMUD_CMD_FLAG_UUID | QEMUD_CMD_FLAG_DOMID);
     DO_TEST("net-user", 0);
     DO_TEST("net-virtio", 

Re: [libvirt] [PATCH]: Allow arbitrary paths to virStorageVolLookupByPath

2008-11-03 Thread Daniel P. Berrange
On Fri, Oct 31, 2008 at 12:58:17PM +0100, Chris Lalancette wrote:
 Daniel P. Berrange wrote:
  Personally, I think those are bad semantics for 
  virStorageBackendStablePath;
  assuming it succeeds, you should always be able to know that you have a 
  copy,
  regardless of whether the copy is the same as the original.  Should I 
  change
  virStorageBackendStablePath to those semantics, in which case your below 
  code
  would then be correct?
  
  Yes, I think that's worth doing - will also avoid the cast in the input
  arg there
 
 OK, updated patch attached; virStorageBackendStablePath now always returns a
 copy of the given string, so it's always safe to unconditionally VIR_FREE it. 
  I
 fixed up storage_backend_iscsi and storage_backend_disk to reflect this 
 change.
  I also re-worked the code as you suggested, and added a bit more error 
 checking.
 

 Index: src/storage_backend.c
 ===
 RCS file: /data/cvs/libvirt/src/storage_backend.c,v
 retrieving revision 1.24
 diff -u -r1.24 storage_backend.c
 --- src/storage_backend.c 28 Oct 2008 17:48:06 -  1.24
 +++ src/storage_backend.c 31 Oct 2008 11:56:33 -
 @@ -357,7 +357,7 @@
  char *
  virStorageBackendStablePath(virConnectPtr conn,
                              virStoragePoolObjPtr pool,
 -                            char *devpath)
 +                            const char *devpath)
  {
      DIR *dh;
      struct dirent *dent;
 @@ -366,7 +366,7 @@
      if (pool->def->target.path == NULL ||
          STREQ(pool->def->target.path, "/dev") ||
          STREQ(pool->def->target.path, "/dev/"))
 -        return devpath;
 +        return strdup(devpath);

Need to call virStorageReportError here on OOM.

  
  /* The pool is pointing somewhere like /dev/disk/by-path
   * or /dev/disk/by-id, so we need to check all symlinks in
 @@ -410,7 +410,7 @@
      /* Couldn't find any matching stable link so give back
       * the original non-stable dev path
       */
 -    return devpath;
 +    return strdup(devpath);

And here.

The virStorageBackendStablePath() API contract says it is responsible
for setting the errors upon failure.

 Index: src/storage_driver.c
 ===
 RCS file: /data/cvs/libvirt/src/storage_driver.c,v
 retrieving revision 1.13
 diff -u -r1.13 storage_driver.c
 --- src/storage_driver.c  21 Oct 2008 17:15:53 -  1.13
 +++ src/storage_driver.c  31 Oct 2008 11:56:34 -
 @@ -966,8 +966,34 @@
 
      for (i = 0 ; i < driver->pools.count ; i++) {
          if (virStoragePoolObjIsActive(driver->pools.objs[i])) {
 -            virStorageVolDefPtr vol =
 -                virStorageVolDefFindByPath(driver->pools.objs[i], path);
 +            virStorageVolDefPtr vol;
 +            virStorageBackendPoolOptionsPtr options;
 +
 +            options = virStorageBackendPoolOptionsForType(driver->pools.objs[i]->def->type);
 +            if (options == NULL)
 +                continue;
 +
 +            if (options->flags & VIR_STORAGE_BACKEND_POOL_STABLE_PATH) {
 +                const char *stable_path;
 +
 +                stable_path = virStorageBackendStablePath(conn,
 +                                                          driver->pools.objs[i],
 +                                                          path);
 +                /*
 +                 * virStorageBackendStablePath already does
 +                 * virStorageReportError if it fails; we just need to keep
 +                 * propagating the return code
 +                 */
 +                if (stable_path == NULL)
 +                    return NULL;
 +
 +                vol = virStorageVolDefFindByPath(driver->pools.objs[i],
 +                                                 stable_path);
 +                VIR_FREE(stable_path);
 +            }
 +            else
 +                vol = virStorageVolDefFindByPath(driver->pools.objs[i], path);
 +
 
              if (vol)
                  return virGetStorageVol(conn,

This looks good now.


Daniel



Re: [libvirt] Have any commands in libvirt to run a domain

2008-11-03 Thread Daniel P. Berrange
On Sat, Nov 01, 2008 at 07:21:36PM +0800, Ian jonhson wrote:
 Hi all,
 
 Are there any commands in console to start/suspend/resume/stop a domain?
 Is it possible to start a domain with a command like the following:
 
 # libvirt-command -cpu <cpu cycle> -mem <memory> -disk <disk quota>
 -network <network bandwidth> -template windows
 
 If not, can I build this command with existing libvirt API?

Look at the 'virsh' command & its manual page. It provides commands for
defining a VM config, starting/stopping/suspending/resuming domains,
and much, much more.

Daniel



Re: [libvirt] [PATCH 3/3]: Read cmd stdout + stderr in virRun

2008-11-03 Thread Daniel P. Berrange
On Thu, Oct 30, 2008 at 02:06:35PM -0400, Cole Robinson wrote:
 The attached patch is my second cut at reading 
 stdout and stderr of the command virRun kicks
 off. There is no hard limit to the amount of
 data we read now, and we use a poll loop to
 avoid any possible full buffer issues.
 
 If stdout or stderr had any content, we DEBUG
 it, and if the command appears to fail we
 return stderr in the error message. So now,
 trying to stop a logical pool with active
 volumes will return:
 
 $ sudo virsh pool-destroy vgdata
 libvir: error : internal error '/sbin/vgchange -an vgdata' exited with 
 non-zero status 5 and signal 0:   Can't deactivate volume group vgdata with 
 2 open logical volume(s)
 error: Failed to destroy pool vgdata


 +    fds[0].fd = outfd;
 +    fds[0].events = POLLIN;
 +    finished[0] = 0;
 +    fds[1].fd = errfd;
 +    fds[1].events = POLLIN;
 +    finished[1] = 0;
 +
 +    while (!(finished[0] && finished[1])) {
 +
 +        if (poll(fds, ARRAY_CARDINALITY(fds), -1) < 0) {
 +            if (errno == EAGAIN)
 +                continue;
 +            goto pollerr;
 +        }
 +
 +        for (i = 0; i < ARRAY_CARDINALITY(fds); ++i) {
 +            char data[1024], **buf;
 +            int got, size;
 +
 +            if (!(fds[i].revents))
 +                continue;
 +            else if (fds[i].revents & POLLHUP)
 +                finished[i] = 1;
 +
 +            if (!(fds[i].revents & POLLIN)) {
 +                if (fds[i].revents & POLLHUP)
 +                    continue;
 +
 +                ReportError(conn, VIR_ERR_INTERNAL_ERROR,
 +                            "%s", _("Unknown poll response."));
 +                goto error;
 +            }
 +
 +            got = read(fds[i].fd, data, sizeof(data));
 +
 +            if (got == 0) {
 +                finished[i] = 1;
 +                continue;
 +            }
 +            if (got < 0) {
 +                if (errno == EINTR)
 +                    continue;
 +                if (errno == EAGAIN)
 +                    break;
 +                goto pollerr;
 +            }
 
 -    while ((ret = waitpid(childpid, &exitstatus, 0) == -1) && errno == EINTR);
 -    if (ret == -1) {
 +            buf = ((fds[i].fd == outfd) ? &outbuf : &errbuf);
 +            size = (*buf ? strlen(*buf) : 0);
 +            if (VIR_REALLOC_N(*buf, size+got+1) < 0) {
 +                ReportError(conn, VIR_ERR_NO_MEMORY, "%s", "realloc buf");
 +                goto error;
 +            }
 +            memmove(*buf+size, data, got);
 +            (*buf)[size+got] = '\0';
 +        }
 +        continue;
 +
 +    pollerr:
 +        ReportError(conn, VIR_ERR_INTERNAL_ERROR,
 +                    _("poll error: %s"), strerror(errno));
 +        goto error;
 +    }


I think it'd be nice to move the I/O processing loop out of the
virRun() function and into a separate helper function along the
lines of

   virPipeReadUntilEOF(int outfd, int errfd, char **outbuf, char **errbuf)
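[Editor's note] The core append step such a helper would keep from the loop above can be isolated. A sketch; buf_append is hypothetical, standing in for the VIR_REALLOC_N bookkeeping:

```c
#include <stdlib.h>
#include <string.h>

/* The append step from the quoted loop, isolated: grow *buf to hold
 * `got` more bytes of `data` and keep it NUL-terminated. Returns 0
 * on success, -1 on allocation failure. A helper like the suggested
 * virPipeReadUntilEOF() would call this once per successful read()
 * on either descriptor. */
static int buf_append(char **buf, const char *data, size_t got)
{
    size_t size = *buf ? strlen(*buf) : 0;
    char *tmp = realloc(*buf, size + got + 1);
    if (tmp == NULL)
        return -1;
    memcpy(tmp + size, data, got);
    tmp[size + got] = '\0';
    *buf = tmp;
    return 0;
}
```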

Daniel



Re: [libvirt] PATCH: Pass -uuid and -domid to QEMU if available

2008-11-03 Thread Daniel Veillard
On Mon, Nov 03, 2008 at 12:52:40PM +0000, Daniel P. Berrange wrote:
 Latest QEMU code now allows setting of the UUID in its SMBIOS data tables
 via the -uuid command line arg. This patch makes use of that feature by
 passing in the libvirt UUID for the VM. While doing this I also added 
 support for the Xenner-specific -domid flag.

  Seems some people were waiting for this :-)
   http://blog.loftninjas.org/?p=261

 @@ -901,10 +903,10 @@ static int qemudStartVMDaemon(virConnect
      ret = virExec(conn, argv, progenv, keepfd, &vm->pid,
                    vm->stdin_fd, &vm->stdout_fd, &vm->stderr_fd,
                    VIR_EXEC_NONBLOCK);
 -    if (ret == 0) {
 -        vm->def->id = driver->nextvmid++;
 +    if (ret == 0)
          vm->state = migrateFrom ? VIR_DOMAIN_PAUSED : VIR_DOMAIN_RUNNING;
 -    }
 +    else
 +        vm->def->id = -1;

  Okay, I had a bit of trouble with that part of the patch, but I assume
that since the id can come from the config file, it's already set at
that point and we update it to -1 only if the exec failed.

  Assuming I understood, +1 :-)

Daniel

-- 
Daniel Veillard  | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
[EMAIL PROTECTED]  | Rpmfind RPM search engine http://rpmfind.net/
http://veillard.com/ | virtualization library  http://libvirt.org/



Re: [libvirt] [PATCH]: Allow arbitrary paths to virStorageVolLookupByPath

2008-11-03 Thread Daniel P. Berrange
On Mon, Nov 03, 2008 at 12:38:49PM +0100, Chris Lalancette wrote:
 Daniel P. Berrange wrote:
 
 Oops, of course.  I've fixed this up and committed the result; the final patch
 is attached.
 
 Thanks for the review,
 
 -- 
 Chris Lalancette

 Index: src/storage_backend.c
 ===
 RCS file: /data/cvs/libvirt/src/storage_backend.c,v
 retrieving revision 1.24
 diff -u -r1.24 storage_backend.c
 --- src/storage_backend.c 28 Oct 2008 17:48:06 -  1.24
 +++ src/storage_backend.c 3 Nov 2008 11:32:15 -
 @@ -357,16 +357,17 @@
  char *
  virStorageBackendStablePath(virConnectPtr conn,
                              virStoragePoolObjPtr pool,
 -                            char *devpath)
 +                            const char *devpath)
  {
      DIR *dh;
      struct dirent *dent;
 +    char *stablepath;
 
      /* Short circuit if pool has no target, or if its /dev */
      if (pool->def->target.path == NULL ||
          STREQ(pool->def->target.path, "/dev") ||
          STREQ(pool->def->target.path, "/dev/"))
 -        return devpath;
 +        goto ret_strdup;
 
      /* The pool is pointing somewhere like /dev/disk/by-path
       * or /dev/disk/by-id, so we need to check all symlinks in
 @@ -382,7 +383,6 @@
      }
 
      while ((dent = readdir(dh)) != NULL) {
 -        char *stablepath;
          if (dent->d_name[0] == '.')
              continue;
 
 @@ -407,10 +407,17 @@
 
      closedir(dh);
 
 + ret_strdup:
      /* Couldn't find any matching stable link so give back
       * the original non-stable dev path
       */
 -    return devpath;
 +
 +    stablepath = strdup(devpath);
 +
 +    if (stablepath == NULL)
 +        virStorageReportError(conn, VIR_ERR_NO_MEMORY, "%s", _("dup path"));

Don't bother with passing a message with any VIR_ERR_NO_MEMORY
errors - just use NULL. The message is totally ignored for this
error code, and 'dup path' is useless info for the end user anyway

Daniel



[libvirt] New Libvirt Implementation - OpenNebula

2008-11-03 Thread Ruben S. Montero
Dear all,
You may find of interest a new implementation of the libvirt 
virtualization API. This new implementation adds support for OpenNebula, a 
distributed VM manager. Implementing libvirt on top of a 
distributed VM manager like OpenNebula provides an abstraction of a whole 
cluster of resources (each one with its hypervisor). In this way, you can use 
any libvirt tool (e.g. virsh, virt-manager) and XML domain descriptions at a 
distributed level.

For example, you may create a new domain with 'virsh create'; OpenNebula 
will then look for a suitable resource, transfer the VM images, and boot your VM 
using any of the supported hypervisors. The distributed management is 
completely transparent to the libvirt application. That is, a whole cluster 
can be managed like any other libvirt node.

The current implementation is targeted for libvirt 0.4.4, and includes a patch 
to the libvirt source tree (mainly to modify the autotools files), and a 
libvirt driver.

More information and download instructions can be found at:

* http://trac.opennebula.org/wiki/LibvirtOpenNebula
* http://www.opennebula.org


Cheers

Ruben
-- 
+---+
 Dr. Ruben Santiago Montero
 Associate Professor
 Distributed System Architecture Group (http://dsa-research.org)

 URL:http://dsa-research.org/doku.php?id=people:ruben
 Weblog: http://blog.dsa-research.org/?author=7
 
 GridWay, http://www.gridway.org
 OpenNEbula, http://www.opennebula.org
+---+



Re: [libvirt] New Libvirt Implementation - OpenNebula

2008-11-03 Thread Daniel Veillard
On Mon, Nov 03, 2008 at 12:23:47PM +0100, Ruben S. Montero wrote:
 Dear all,
 You may find of interest a new implementation of the libvirt 
 virtualization API. This new implementation adds support to OpenNebula, a 
 distributed VM manager system. The implementation of libvirt on top of a 
 distributed VM manager, like OpenNebula, provides an abstraction of a whole 
 cluster of resources (each one with its hypervisor). In this way,  you can 
 use 
 any libvirt tool (e.g. virsh, virt-manager) and XML domain descriptions at a 
 distributed level. 
 
 For example, you may create a new domain with 'virsh create', then OpenNebula 
 will look for a suitable resource, transfer the VM images and boot your VM 
 using any of the supported hypervisors. The distributed management is 
 completely transparent to the libvirt application. That is, a whole cluster 
 can be managed like any other libvirt node.
 
 The current implementation is targeted for libvirt 0.4.4, and includes a 
 patch 
 to the libvirt source tree (mainly to modify the autotools files), and a 
 libvirt driver.
 
 More information and download instructions can be found at:
 
 * http://trac.opennebula.org/wiki/LibvirtOpenNebula
 * http://www.opennebula.org

  Interesting, but this raises a couple of questions:
- isn't OpenNebula in some way also an abstraction layer for the
  hypervisors, so that in a sense a libvirt driver for OpenNebula
  is a bit 'redundant'? Maybe I didn't understand the
  principles behind OpenNebula :-) (sorry, this is the first time
  I've heard about it).
- what is the future of that patch? Basically libvirt internals
  change extremely fast, so unless a driver gets included as part
  of libvirt's own source code, there are a lot of maintenance and
  usability problems resulting from the split. Do you intend to
  submit it for inclusion, or is this more a trial to gauge interest?
  Submitting the driver for inclusion means the code will have to be
  reviewed, released under the LGPL, and a volunteer should be available
  for future maintenance and integration issues.

  thanks !

Daniel




Re: [libvirt] [PATCH 2/3]: Log argv passed to virExec and virRun

2008-11-03 Thread Daniel P. Berrange
On Thu, Oct 30, 2008 at 02:06:20PM -0400, Cole Robinson wrote:
 The attached patch logs the argv's passed to
 the virExec and virRun functions. There's a bit of
 trickery here: since virRun is just a wrapper for
 virExec, we don't want the argv string to be logged
 twice.
 
 I addressed this by renaming virExec to __virExec,
 and keeping the original function name to simply
 debug the argv and then hand off control. This
 means anytime virExec is explicitly called, the
 argv will be logged, but if functions wish to
 bypass that they can just call __virExec (which is
 what virRun does.)

I'm a little confused about why we can't just put the logging
call directly in the existing virExec() function. Since the
first thing virRun() does is to call virExec(), this would
seem to be sufficient without need of a wrapper.

Daniel



Re: [libvirt] [PATCH] set cache=off on -disk line in qemu driver if <shareable/> (but not <readonly/>) is set.

2008-11-03 Thread Daniel Veillard
On Sat, Nov 01, 2008 at 06:20:27PM -0500, Charles Duffy wrote:
 As cache=off is necessary for clustering filesystems such as GFS (and  
 such is the point of <shareable/>, yes?), I believe this is correct 
 behavior.

 Comments?

  Yes, that sounds right: if shared over the network we should not cache
locally in the OS,

applied and committed,

  thanks !

Daniel




Re: [libvirt] Does libvirt support Solaris Zones?

2008-11-03 Thread Stuart Jansen
On Mon, 2008-11-03 at 11:39 +0000, Daniel P. Berrange wrote:
 I'm not aware of any support for Solaris Zones, or xVM - that's
 VirtualBox based, I guess?

As usual, Sun marketing has decided to confuse people by overloading the
meaning of xVM. It originally meant Sun's Xen-based virtualization
solution. After purchasing VirtualBox, they decided to call it xVM
VirtualBox and the Xen-based product xVM Server.

http://en.wikipedia.org/wiki/Sun_xVM




Re: [libvirt] [PATCH 2/3]: Log argv passed to virExec and virRun

2008-11-03 Thread Cole Robinson
Daniel P. Berrange wrote:
 On Thu, Oct 30, 2008 at 02:06:20PM -0400, Cole Robinson wrote:
   
 The attached patch logs the argv's passed to
 the virExec and virRun functions. There's a bit of
 trickery here: since virRun is just a wrapper for
 virExec, we don't want the argv string to be logged
 twice.

 I addressed this by renaming virExec to __virExec,
 and keeping the original function name to simply
 debug the argv and then hand off control. This
 means anytime virExec is explicitly called, the
 argv will be logged, but if functions wish to
 bypass that they can just call __virExec (which is
 what virRun does.)
 

 I'm a little confused about why we can't just put the logging
 call directly in the existing virExec() function. Since the
 first thing virRun() does is to call virExec(), this would
 seem to be sufficient without need of a wrapper.

 Daniel
   

Two small benefits of the way this patch does it:

- We can tell by the debug output whether the argv
  is coming from virRun or from virExec called
  explicitly.

- We want the argv string available in virRun for
  error reporting. The patch allows us to avoid
  converting the argv to string twice.

I can rework the patch if you'd like, the above
points aren't deal breakers.
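[Editor's note] A minimal sketch of the wrapper pattern being debated; all names are illustrative stand-ins (virExecSketch for virExec, virExecRawSketch for __virExec), not the libvirt functions:

```c
#include <stdio.h>

/* Counts "exec" work in place of the real fork/exec machinery. */
static int exec_calls;

/* Stand-in for __virExec: does the work without any logging, for
 * callers like virRun that build the argv string themselves. */
static int virExecRawSketch(char *const argv[])
{
    (void)argv;
    exec_calls++;
    return 0;
}

/* Stand-in for virExec: log the argv once, then delegate, so an
 * explicit virExec call is always logged but never double-logged. */
static int virExecSketch(char *const argv[])
{
    for (int i = 0; argv[i] != NULL; i++)
        fprintf(stderr, "%s%s", i ? " " : "", argv[i]);
    fputc('\n', stderr);
    return virExecRawSketch(argv);
}
```

This mirrors Cole's design; Daniel's alternative would put the fprintf loop inside the raw function itself.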

Thanks,
Cole



Re: [libvirt] Does libvirt support Solaris Zones?

2008-11-03 Thread Daniel P. Berrange
On Mon, Nov 03, 2008 at 11:31:16AM +0000, jovialGuy _ wrote:
 Hi!
 I have a quick question: does the latest libvirt release support Solaris Zones?
 I know it supports LDoms, and xVM support is being integrated into xVM
 Server. But which release to date supports Solaris Zones?

Sun has a fork of a fairly old libvirt release (0.3.something IIRC) in
which they've added LDoms support. The code is not present in the
official releases of libvirt yet, and last time I looked it didn't
really comply with the XML format correctly, and still needed quite
a lot of work. Hopefully they'll find the time to re-submit
the driver for inclusion in the main libvirt codebase in the future.

There has been work in official releases to support Xen dom0 on
OpenSolaris, but I think there are still some outstanding patches in the
OpenSolaris repositories that aren't in our official releases.

I'm not aware of any support for Solaris Zones, or xVM - that's VirtualBox
based, I guess?

Daniel



[libvirt] Does libvirt support Solaris Zones?

2008-11-03 Thread jovialGuy _
Hi!
I have a quick question: does the latest libvirt release support Solaris Zones?
I know it supports LDoms, and xVM support is being integrated into xVM
Server. But which release to date supports Solaris Zones?

Regards,
Jovial


[libvirt] [RFC] making (newly public) EventImpl interface more consistent

2008-11-03 Thread David Lively
Hi Folks -
  Since virEventRegisterImpl is now public (in libvirt.h), a nagging
concern of mine has become more urgent.  Essentially this callback gives
clients a way of registering their own handle (fd) watcher and timer
functionality for use by libvirt.
  What bugs me is the inconsistency between the handle-watcher and timer
interfaces: the timer add function returns a timer id, which is then
used to identify the timer to the update and remove functions.  But
the handle-watcher add / update / remove functions identify the watcher
by the handle (fd).  The semantics of registering the same handle twice
aren't specified (what happens when we pass that same fd in a subsequent
update or remove?).  Even worse, this doesn't allow one to manage
multiple watches on the same handle reasonably.
  So why not make the handle add function return a watch
id (analogous to the timer id returned by the timer add fn)?  And
then use this watch id to specify the handle-watcher in the update and
remove functions.  This conveniently handles multiple watches on the
same handle, and also makes the handle-watching interface more
consistent with the timer interface (which is registered at the same
time).  We'd pass both the watch id and the handle (fd) into the
watch-handler callback.
  I'd like to implement and submit this (along with fixups to the event
test code) if there are no objections.
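[Editor's note] A minimal sketch of the proposed shape, under the stated assumptions; all names are hypothetical, and the real interface would also carry the callback and opaque data:

```c
#define MAX_WATCHES 16

/* The add function returns a watch id (like the timer id), so two
 * watches on the same fd are distinct, and update/remove identify
 * the watch by id rather than by fd. The callback would receive
 * both the watch id and the fd. */
struct watch { int used, fd, events; };
static struct watch watches[MAX_WATCHES];
static int next_watch;

static int watch_add(int fd, int events)
{
    if (next_watch >= MAX_WATCHES)
        return -1;
    int w = next_watch++;
    watches[w] = (struct watch){ 1, fd, events };
    return w;                 /* identify the watch by id, not fd */
}

static int watch_update(int w, int events)
{
    if (w < 0 || w >= next_watch || !watches[w].used)
        return -1;
    watches[w].events = events;
    return 0;
}

static int watch_remove(int w)
{
    if (w < 0 || w >= next_watch || !watches[w].used)
        return -1;
    watches[w].used = 0;
    return 0;
}
```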

Thanks,
Dave

P.S. I'm currently working on Java bindings to the new event code ...





Re: [libvirt] [PATCH] set cache=off on -disk line in qemu driver if <shareable/> (but not <readonly/>) is set.

2008-11-03 Thread Dor Laor

Daniel Veillard wrote:

On Sat, Nov 01, 2008 at 06:20:27PM -0500, Charles Duffy wrote:
  
As cache=off is necessary for clustering filesystems such as GFS (and  
such is the point of <shareable/>, yes?), I believe this is correct 
behavior.


Comments?



  Yes that sounds right, if shared over network we should not cache
locally in the OS,

applied and commited,

  thanks !

Daniel

cache=off should be the default case (or the similar O_DSYNC case).
cache=on should only be used for temporary usage and not really 
production data.

It's just unsafe to use caching if your data is important to you.
It's a hot issue on the qemu mailing list.

Cheers,
Dor


Re: [libvirt] [PATCH] set cache=off on -disk line in qemu driver if <shareable/> (but not <readonly/>) is set.

2008-11-03 Thread Daniel Veillard
On Mon, Nov 03, 2008 at 06:37:45PM +0200, Dor Laor wrote:
 Daniel Veillard wrote:
 On Sat, Nov 01, 2008 at 06:20:27PM -0500, Charles Duffy wrote:
   
 As cache=off is necessary for clustering filesystems such as GFS (and 
  such is the point of <shareable/>, yes?), I believe this is correct  
 behavior.

 Comments?
 

   Yes that sounds right, if shared over network we should not cache
 locally in the OS,

 applied and commited,

   thanks !

 Daniel
 cache=off should be the default case (or the similar O_DSYNC case).
 cache=on should only be used for temporary usage and not really  
 production data.
 It's just unsafe to use caching if your data is important to you.
 It's a hot issue on the qemu mailing list.

  My POV at the moment is that whatever safe default is needed should
be set in QEmu, and in libvirt we should allow overriding the default.
I'm still waiting for the battle to end in QEmu; unfortunately I don't
really see a consensus, but I lost track a couple of weeks ago :-)
The patch looked okay because the fs is labelled as shared in the XML
config, so really that's user-provided information that a default
check in QEmu can't guess, hence overriding the default QEmu options
makes sense IMHO.

Daniel




Re: [libvirt] [PATCH] set cache=off on -disk line in qemu driver if <shareable/> (but not <readonly/>) is set.

2008-11-03 Thread Daniel P. Berrange
On Mon, Nov 03, 2008 at 06:37:45PM +0200, Dor Laor wrote:
 Daniel Veillard wrote:
 On Sat, Nov 01, 2008 at 06:20:27PM -0500, Charles Duffy wrote:
   
 As cache=off is necessary for clustering filesystems such as GFS (and  
 such is the point of shareable, yes?), I believe this is correct 
 behavior.
 
 Comments?
 
 
   Yes that sounds right, if shared over network we should not cache
 locally in the OS,
 
  applied and committed,
 
   thanks !
 
 Daniel
 cache=off should be the default case (or the similar O_DSYNC case).
 cache=on should only be used for transient data and not real
 production data.
 It's just unsafe to use caching if your data is important to you.
 It's a hot issue on the qemu mailing list.

I was under the impression it's already been solved on the qemu mailing
list. AFAIK, they changed the default option to ensure data safety upon
host crash, so we have no need to set any cache= option by default.

Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



Re: [libvirt] New Libvirt Implementation - OpenNebula

2008-11-03 Thread Daniel Veillard
On Mon, Nov 03, 2008 at 05:26:34PM +0100, Ruben S. Montero wrote:
 Hi Daniel
 On Monday 03 November 2008 16:43:32 Daniel Veillard wrote:
 
 
   Interesting, but this raises a couple of questions:
  - isn't OpenNebula in some way also an abstraction layer for the
hypervisors, so in a sense a libvirt driver for OpenNebula
is a bit 'redundant'? Maybe I didn't understand the
principles behind OpenNebula well :-) (sorry, this is the first time
I've heard of it).
 
 Yes, you are right, OpenNebula provides an abstraction layer for A SET of 
 distributed resources (like Platform VM Orchestrator or VMware DRS). In this 
 way, OpenNebula leverages the functionality provided by the underlying VM 
 hypervisors to provide centralized management (allocation and re-allocation 
 of VMs, workload balancing) of a pool of physical resources. 
 
 The libvirt API is just another interface to the OpenNebula system. The 
 beauty 
 is that you can manage a whole cluster of hypervisors using the libvirt 
 standard, i.e. in the same way you interact with a single machine. 

  After further reading, yes, I understand: it's the reverse approach
from oVirt, where we use libvirt to build the distributed management.
One interesting point is that your driver would allow access to EC2
using the libvirt APIs...

 For example, oVirt uses libvirt to interact with the physical nodes. With 
 OpenNebula+libvirt, one of the nodes managed with oVirt could be a whole 
 cluster. In this case you could use the great interface from oVirt to manage 
 several clusters. And you could abstract those applications from the details 
 of managing the cluster (for example, is there NFS in it?, group/user 
 policies...)

  This is a bit against the Node principle of libvirt, and could result
in some fun in the hardware discovery mode, but in general the approach
might work. Still, we are looking at bits on the node to provide the
capabilities of the hypervisor, which may break in your case, and
migration is defined as an operation between a domain in a given node
and a connection to another node, so migration within the OpenNebula
cluster won't be expressible in a simple way with the normal libvirt
API. Apart from that, things should work conceptually, I think.

 Finally, and may be adding more confusion, OpenNebula also uses libvirt 
 underneath to interface with some of the hypervisors of the physical nodes 
 (e.g. KVM). 

  Ouch :-) okay !

  - what is the future of that patch? Basically libvirt internals
change extremely fast, so unless a driver gets included as part
of libvirt's own source code, there are a lot of maintenance and
usability problems resulting from the split. Do you intend to
submit it for inclusion, or is this more a trial to gauge interest?
Submitting the driver for inclusion means the code will have to be
reviewed, released under the LGPL, and a volunteer should be available
for future maintenance and integration issues.
 
 
 Yes we are highly interested in contributing the driver. We have no problems 
 with the requirements and we can commit resources to maintain and integrate 
 the driver. Please let me know how we should proceed...

  Well well well ...
Basically the contribution should be provided as a (set of) patch(es)
against libvirt CVS head. Preferably the code should follow the existing
coding guidelines of libvirt, reuse the existing infrastructure for
errors, memory allocations, etc. If 'make check syntax-check' completes
cleanly with your code applied, that's a good first start :-)
In general, inclusion takes a few iterations of review before being
pushed, and splitting patches into smaller chunks helps the review
process greatly.
I haven't yet taken the time to look at the patch online, so I have no
idea a priori of the work needed. Drivers are usually clean and
separate; the problem is having them in the code base to minimize
maintenance.

Daniel

-- 
Daniel Veillard  | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
[EMAIL PROTECTED]  | Rpmfind RPM search engine http://rpmfind.net/
http://veillard.com/ | virtualization library  http://libvirt.org/



Re: [libvirt] Does libvirt support Solaris Zones?

2008-11-03 Thread Ryan Scott

Daniel P. Berrange wrote:

On Mon, Nov 03, 2008 at 11:31:16AM +, jovialGuy _ wrote:

Hi!
I have a quick question, does libvirt latest release supports Solaris Zones?
I know it supports LDoms and xVM support is being integrated towards xVM
Server. But which release to date supports Solaris Zones?


Sun has a fork of a fairly old libvirt release (0.3.something IIRC) in
which they've added LDoms support. The code is not present in the 
official releases of libvirt yet, and last time I looked it didn't
really comply with the XML format correctly, and still needed quite
a lot of work done to it. Hopefully they'll find the time to re-submit 
the driver for inclusion in the main libvirt codebase in the future.


Unfortunately, it's a case of too much to do and not enough time.  The 
LDoms port is currently on hold.




There has been work in official releases to support Xen dom0 on
OpenSolaris, but I think there are still some outstanding patches in the
OpenSolaris repositories that aren't in our official releases.


We're temporarily stuck on 0.4.0 for the time being, which makes 
forward-porting patches difficult.  I hope to update our internal 
gate to 0.4.6 within a month, which will allow me to send out some patches.




I'm not aware of any support for Solaris Zones, or xVM - that's
VirtualBox-based, I guess?


I would like to port libvirt to Zones, but it looks unlikely that I'll 
have the time to do so.


-Ryan



Daniel




Re: [libvirt] [PATCH]: Allow arbitrary paths to virStorageVolLookupByPath

2008-11-03 Thread Chris Lalancette
Daniel P. Berrange wrote:
 diff -u -r1.24 storage_backend.c
 --- src/storage_backend.c	28 Oct 2008 17:48:06 -0000	1.24
 +++ src/storage_backend.c	31 Oct 2008 11:56:33 -0000
 @@ -357,7 +357,7 @@
  char *
  virStorageBackendStablePath(virConnectPtr conn,
                              virStoragePoolObjPtr pool,
 -                            char *devpath)
 +                            const char *devpath)
  {
      DIR *dh;
      struct dirent *dent;
 @@ -366,7 +366,7 @@
      if (pool->def->target.path == NULL ||
          STREQ(pool->def->target.path, "/dev") ||
          STREQ(pool->def->target.path, "/dev/"))
 -        return devpath;
 +        return strdup(devpath);
 
 Need to call virStorageReportError here on OOM.
 
  
      /* The pool is pointing somewhere like /dev/disk/by-path
       * or /dev/disk/by-id, so we need to check all symlinks in
 @@ -410,7 +410,7 @@
      /* Couldn't find any matching stable link so give back
       * the original non-stable dev path
       */
 -    return devpath;
 +    return strdup(devpath);
 
 And here.
 
 Since virStorageBackendStablePath()'s API contract says that it is
 responsible for setting the errors upon failure.

Oops, of course.  I've fixed this up and committed the result; the final patch
is attached.

Thanks for the review,

-- 
Chris Lalancette
Index: src/storage_backend.c
===================================================================
RCS file: /data/cvs/libvirt/src/storage_backend.c,v
retrieving revision 1.24
diff -u -r1.24 storage_backend.c
--- src/storage_backend.c	28 Oct 2008 17:48:06 -0000	1.24
+++ src/storage_backend.c	3 Nov 2008 11:32:15 -0000
@@ -357,16 +357,17 @@
 char *
 virStorageBackendStablePath(virConnectPtr conn,
                             virStoragePoolObjPtr pool,
-                            char *devpath)
+                            const char *devpath)
 {
     DIR *dh;
     struct dirent *dent;
+    char *stablepath;
 
     /* Short circuit if pool has no target, or if its /dev */
     if (pool->def->target.path == NULL ||
         STREQ(pool->def->target.path, "/dev") ||
         STREQ(pool->def->target.path, "/dev/"))
-        return devpath;
+        goto ret_strdup;
 
     /* The pool is pointing somewhere like /dev/disk/by-path
      * or /dev/disk/by-id, so we need to check all symlinks in
@@ -382,7 +383,6 @@
     }
 
     while ((dent = readdir(dh)) != NULL) {
-        char *stablepath;
         if (dent->d_name[0] == '.')
             continue;
 
@@ -407,10 +407,17 @@
 
     closedir(dh);
 
+ ret_strdup:
     /* Couldn't find any matching stable link so give back
      * the original non-stable dev path
      */
-    return devpath;
+
+    stablepath = strdup(devpath);
+
+    if (stablepath == NULL)
+        virStorageReportError(conn, VIR_ERR_NO_MEMORY, "%s", _("dup path"));
+
+    return stablepath;
 }
 
 
Index: src/storage_backend.h
===================================================================
RCS file: /data/cvs/libvirt/src/storage_backend.h,v
retrieving revision 1.9
diff -u -r1.9 storage_backend.h
--- src/storage_backend.h	23 Oct 2008 11:32:22 -0000	1.9
+++ src/storage_backend.h	3 Nov 2008 11:32:15 -0000
@@ -50,6 +50,7 @@
     VIR_STORAGE_BACKEND_POOL_SOURCE_DIR     = (1<<2),
     VIR_STORAGE_BACKEND_POOL_SOURCE_ADAPTER = (1<<3),
     VIR_STORAGE_BACKEND_POOL_SOURCE_NAME    = (1<<4),
+    VIR_STORAGE_BACKEND_POOL_STABLE_PATH    = (1<<5),
 };
 
 enum partTableType {
@@ -138,7 +139,7 @@
 
 char *virStorageBackendStablePath(virConnectPtr conn,
                                   virStoragePoolObjPtr pool,
-                                  char *devpath);
+                                  const char *devpath);
 
 typedef int (*virStorageBackendListVolRegexFunc)(virConnectPtr conn,
                                                  virStoragePoolObjPtr pool,
Index: src/storage_backend_disk.c
===================================================================
RCS file: /data/cvs/libvirt/src/storage_backend_disk.c,v
retrieving revision 1.16
diff -u -r1.16 storage_backend_disk.c
--- src/storage_backend_disk.c	23 Oct 2008 11:32:22 -0000	1.16
+++ src/storage_backend_disk.c	3 Nov 2008 11:32:15 -0000
@@ -109,8 +109,7 @@
                                                   devpath)) == NULL)
             return -1;
 
-        if (devpath != vol->target.path)
-            VIR_FREE(devpath);
+        VIR_FREE(devpath);
     }
 
     if (vol->key == NULL) {
@@ -447,7 +446,8 @@
     .deleteVol = virStorageBackendDiskDeleteVol,
 
     .poolOptions = {
-        .flags = (VIR_STORAGE_BACKEND_POOL_SOURCE_DEVICE),
+        .flags = (VIR_STORAGE_BACKEND_POOL_SOURCE_DEVICE|
+                  VIR_STORAGE_BACKEND_POOL_STABLE_PATH),
         .defaultFormat = VIR_STORAGE_POOL_DISK_UNKNOWN,
         .formatFromString = virStorageBackendPartTableTypeFromString,
         .formatToString = virStorageBackendPartTableTypeToString,
Index: src/storage_backend_iscsi.c
===================================================================
RCS file: 

Re: [libvirt] New Libvirt Implementation - OpenNebula

2008-11-03 Thread Ruben S. Montero
On Monday 03 November 2008 17:59:33 Daniel Veillard wrote:

   This is a bit against the Node principle of libvirt, and could result
 in some fun in the hardware discovery mode, but in general the approach
 might work. Still we are looking at bits on the node to provide
 capabilities of the hypervisor, which may break in your case, and
 migration is defined as an operation between a domain in a given node
 and a connection to another node, so the migration within the OpenNebula
 cluster won't be expressible in a simple way with the normal libvirt
 API. Apart from that, things should work conceptually, I think.

You are totally right, this is putting the standard to the limit ;). There are 
some function calls that cannot be implemented right away or, as you said, 
whose semantics are slightly different. Maybe there is room to extend the API 
in the future; right now there is no standard way to interface with a 
distributed VM manager.

 Basically the contribution should be provided as a (set of) patch(es)
 against libvirt CVS head. Preferably the code should follow the existing
 coding guidelines of libvirt, reuse the existing infrastructure for
 errors, memory allocations, etc. If 'make check syntax-check' completes
 cleanly with your code applied, that's a good first start :-)
 In general, inclusion takes a few iterations of review before being
 pushed, and splitting patches into smaller chunks helps the review
 process greatly.
 I haven't yet taken the time to look at the patch online, so I have no
 idea a priori of the work needed. Drivers are usually clean and
 separate; the problem is having them in the code base to minimize
 maintenance.


Ok. It sounds fine. We will update our implementation to CVS head (right now 
the patch is targeted at 0.4.4), update licenses to LGPL, and we will check 
that 'make check syntax-check' works. Also, we'll try to split the patch into 
self-contained changes, so they are easy to review. I'll let you know when we 
are done...

Cheers

Ruben
-- 
+---+
 Dr. Ruben Santiago Montero
 Associate Professor
 Distributed System Architecture Group (http://dsa-research.org)

 URL:http://dsa-research.org/doku.php?id=people:ruben
 Weblog: http://blog.dsa-research.org/?author=7
 
 GridWay, http://www.gridway.org
 OpenNEbula, http://www.opennebula.org
+---+



Re: [libvirt] [PATCH 3/3]: Read cmd stdout + stderr in virRun

2008-11-03 Thread Cole Robinson
Daniel P. Berrange wrote:
 On Thu, Oct 30, 2008 at 02:06:35PM -0400, Cole Robinson wrote:
   
 The attached patch is my second cut at reading 
 stdout and stderr of the command virRun kicks
 off. There is no hard limit to the amount of
 data we read now, and we use a poll loop to
 avoid any possible full buffer issues.

 If stdout or stderr had any content, we DEBUG
 it, and if the command appears to fail we
 return stderr in the error message. So now,
 trying to stop a logical pool with active
 volumes will return:

 $ sudo virsh pool-destroy vgdata
 libvir: error : internal error '/sbin/vgchange -an vgdata' exited with 
 non-zero status 5 and signal 0:   Can't deactivate volume group vgdata 
 with 2 open logical volume(s)
 error: Failed to destroy pool vgdata
 

<snip>

 I think it'd be nice to move the I/O processing loop out of the
 virRun() function and into a separate helper function along the
 lines of 

virPipeReadUntilEOF(int outfd, int errfd, char **outbuf, char **errbuf)

 Daniel
   


Okay, updated patch attached. Also addresses the point Jim
raised about poll.h

Thanks,
Cole
diff --git a/src/util.c b/src/util.c
index 691c85f..c66ee70 100644
--- a/src/util.c
+++ b/src/util.c
@@ -31,6 +31,7 @@
 #include <unistd.h>
 #include <fcntl.h>
 #include <errno.h>
+#include <poll.h>
 #include <sys/types.h>
 #include <sys/stat.h>
 #if HAVE_SYS_WAIT_H
@@ -414,6 +415,86 @@ virExec(virConnectPtr conn,
                      flags);
 }
 
+static int
+virPipeReadUntilEOF(virConnectPtr conn, int outfd, int errfd,
+                    char **outbuf, char **errbuf) {
+
+    struct pollfd fds[2];
+    int i;
+    int finished[2];
+
+    fds[0].fd = outfd;
+    fds[0].events = POLLIN;
+    finished[0] = 0;
+    fds[1].fd = errfd;
+    fds[1].events = POLLIN;
+    finished[1] = 0;
+
+    while (!(finished[0] && finished[1])) {
+
+        if (poll(fds, ARRAY_CARDINALITY(fds), -1) < 0) {
+            if (errno == EAGAIN)
+                continue;
+            goto pollerr;
+        }
+
+        for (i = 0; i < ARRAY_CARDINALITY(fds); ++i) {
+            char data[1024], **buf;
+            int got, size;
+
+            if (!(fds[i].revents))
+                continue;
+            else if (fds[i].revents & POLLHUP)
+                finished[i] = 1;
+
+            if (!(fds[i].revents & POLLIN)) {
+                if (fds[i].revents & POLLHUP)
+                    continue;
+
+                ReportError(conn, VIR_ERR_INTERNAL_ERROR,
+                            "%s", _("Unknown poll response."));
+                goto error;
+            }
+
+            got = read(fds[i].fd, data, sizeof(data));
+
+            if (got == 0) {
+                finished[i] = 1;
+                continue;
+            }
+            if (got < 0) {
+                if (errno == EINTR)
+                    continue;
+                if (errno == EAGAIN)
+                    break;
+                goto pollerr;
+            }
+
+            buf = ((fds[i].fd == outfd) ? outbuf : errbuf);
+            size = (*buf ? strlen(*buf) : 0);
+            if (VIR_REALLOC_N(*buf, size+got+1) < 0) {
+                ReportError(conn, VIR_ERR_NO_MEMORY, NULL);
+                goto error;
+            }
+            memmove(*buf+size, data, got);
+            (*buf)[size+got] = '\0';
+        }
+        continue;
+
+    pollerr:
+        ReportError(conn, VIR_ERR_INTERNAL_ERROR,
+                    _("poll error: %s"), strerror(errno));
+        goto error;
+    }
+
+    return 0;
+
+error:
+    VIR_FREE(*outbuf);
+    VIR_FREE(*errbuf);
+    return -1;
+}
+
 /**
  * @conn connection to report errors against
  * @argv NULL terminated argv to run
@@ -433,43 +514,66 @@ int
 virRun(virConnectPtr conn,
        const char *const*argv,
        int *status) {
-    int childpid, exitstatus, ret;
-    char *argv_str;
+    int childpid, exitstatus, execret, waitret;
+    int ret = -1;
+    int errfd = -1, outfd = -1;
+    char *outbuf = NULL;
+    char *errbuf = NULL;
+    char *argv_str = NULL;
 
     if ((argv_str = virArgvToString(argv)) == NULL) {
         ReportError(conn, VIR_ERR_NO_MEMORY, _("command debug string"));
-        return -1;
+        goto error;
     }
     DEBUG0(argv_str);
-    VIR_FREE(argv_str);
 
-    if ((ret = __virExec(conn, argv, NULL, NULL,
-                         &childpid, -1, NULL, NULL, VIR_EXEC_NONE)) < 0)
-        return ret;
+    if ((execret = __virExec(conn, argv, NULL, NULL,
+                             &childpid, -1, &outfd, &errfd,
+                             VIR_EXEC_NONE)) < 0) {
+        ret = execret;
+        goto error;
+    }
+
+    if (virPipeReadUntilEOF(conn, outfd, errfd, &outbuf, &errbuf) < 0)
+        goto error;
+
+    if (outbuf)
+        DEBUG("Command stdout: %s", outbuf);
+    if (errbuf)
+        DEBUG("Command stderr: %s", errbuf);
 
-    while ((ret = waitpid(childpid, &exitstatus, 0) == -1) && errno == EINTR);
-    if (ret == -1) {
+    while ((waitret = waitpid(childpid, &exitstatus, 0) == -1) &&
+           errno == EINTR);
+

Re: [libvirt] New Libvirt Implementation - OpenNebula

2008-11-03 Thread Daniel P. Berrange
On Mon, Nov 03, 2008 at 08:32:54PM +0100, Ruben S. Montero wrote:
 On Monday 03 November 2008 17:59:33 Daniel Veillard wrote:
 
This is a bit against the Node principle of libvirt, and could result
  in some fun in the hardware discovery mode, but in general the approach
  might work. Still we are looking at bits on the node to provide
  capabilities of the hypervisor, which may break in your case, and
  migration is defined as an operation between a domain in a given node
  and a connection to another node, so the migration within the OpenNebula
  cluster won't be expressible in a simple way with the normal libvirt
  API. Apart from that, things should work conceptually, I think.
 
 You are totally right, this is putting the standard to the limit ;). There 
 are some function calls that cannot be implemented right away or, as you 
 said, whose semantics are slightly different. Maybe there is room to extend 
 the API in the future; right now there is no standard way to interface with 
 a distributed VM manager.

This is a really interesting problem to figure out. We might like to
extend the node capabilities XML to provide information about the
cluster as a whole - we currently have a <guest> element describing
what guest virt types are supported by a HV connection, and a <host>
element describing a little about the host running the HV. It might
make sense to say that the host info is optional and in its place
provide some kind of 'cluster' / 'host group' information. I won't
try to suggest what now - we'll likely learn about what would be
useful through real-world use of your initial driver functionality.
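For context, the capabilities XML being discussed looks roughly like this today (heavily abridged, with illustrative values; `virsh capabilities` shows the real output):

```xml
<capabilities>
  <host>
    <cpu>
      <arch>x86_64</arch>
    </cpu>
  </host>
  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <domain type='kvm'/>
    </arch>
  </guest>
</capabilities>
```

A cluster-aware driver such as OpenNebula's might replace the host block with some future cluster/host-group description, which is the kind of extension being suggested here.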


  Basically the contribution should be provided as a (set of) patch(es)
  against libvirt CVS head. Preferably the code should follow the existing
  coding guidelines of libvirt, reuse the existing infrastructure for
  errors, memory allocations, etc. If 'make check syntax-check' completes
  cleanly with your code applied, that's a good first start :-)
  In general, inclusion takes a few iterations of review before being
  pushed, and splitting patches into smaller chunks helps the review
  process greatly.
  I haven't yet taken the time to look at the patch online, so I have no
  idea a priori of the work needed. Drivers are usually clean and
  separate; the problem is having them in the code base to minimize
  maintenance.
 
 
 Ok. It sounds fine. We will update our implementation to CVS head (right now 
 the patch is targeted at 0.4.4), update licenses to LGPL, and we will check 
 that 'make check syntax-check' works. Also, we'll try to split the patch into 
 self-contained changes, so they are easy to review. I'll let you know when we 
 are done...

When you update to work with the latest CVS, I'd strongly recommend you make
use of the brand new XML handling APIs we have in domain_conf.h. We have
switched all drivers over to use these shared internal APIs for parsing
the domain XML schema, so it would let you delete 90% of your one_conf.c
file.

Regards,
Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



Re: [libvirt] New Libvirt Implementation - OpenNebula

2008-11-03 Thread Stefan de Konink

Daniel P. Berrange wrote:

When you update to work with latest CVS, I'd strongly recommend you make
use of the  brand new XML handling APIs we have in domain_conf.h. We have
switched all drivers over to use these shared internal APIs for parsing
the domain XML schema, so it would let you delete 90% of your one_conf.c
file.


It is great(!) to see that someone actually started to care about this; 
but honestly, requests for these things were bluntly rejected before.


See: "Comfortable lookup functions interface/block stats"

Could someone elaborate why, after 5 months, I can now tag this project as 
'suddenbreakoutofcommonsense'?



Stefan
