Re: [libvirt] some unsorted questions
On Thu, Jul 17, 2008 at 01:29:45AM +0200, Stefan de Konink wrote:
> Now personally I think it is smart to check if a domain is already defined, or in use. If this is not the case, libvirtd and the client get this message: "libvir: Xen Daemon error : GET operation failed:". That doesn't seem to fit the scenario at all :) Why does lookupByName behave so badly?
>
> Secondly, I am a bit distracted by the domid concept. These ids are not available before a domain is launched. I think it would be interesting to allow signed values. In this way the 'defined' but not active domains would get a negative value and a running domain a positive value (Dom0 gets 0). This would have far fewer implications than using a UUID consistently throughout the codebase (not speaking about the extra overhead).

The dichotomy is very simple; it comes from the following when running with Xen:

- if the domain is running, we can do a hypercall to check information about it using just the id, in a few microseconds
- if the domain is not running, we must do an HTTP request (hence the GET error) to xend to get any information about it, which is also far more costly (at least in the Xen case)

See the discussion with Dan yesterday. The presence of an ID usually means the domain is running and the hypervisor knows about it; if it is not running you have to query a database or some storage to learn about it, and the ID has no meaning. That's a very different situation in practice, and you need to use an external identifier, name or UUID, for it. The dichotomy between internal identifiers and permanent external identifiers is present everywhere in computing; really, that's nothing new - think DNS for example.

Now for a negative ID, that just doesn't help: it won't be any faster than the name or UUID lookup, since the hypervisor doesn't know about it, and it would be libvirt having to maintain that ID, independently of the hypervisor.
Say if we assign -3 to a domain, we would have to build some storage mechanism to preserve that identifier mapping, and we could not prevent the hypervisor from using +3 for another running domain at some point. You really gain nothing, except complexity and confusion. That just doesn't work IMHO.

The "GET operation failed" error used to get in the way in the past; that was fixed in one of the previous versions, but not knowing what you're using (version or API entry point) there is no diagnostic to be done. Get a debugger, put a breakpoint at __virRaiseError and see where it is coming from based on the backtrace.

Daniel
--
Red Hat Virtualization group http://redhat.com/virtualization/
Daniel Veillard | virtualization library http://libvirt.org/
[EMAIL PROTECTED] | libxml GNOME XML XSLT toolkit http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine http://rpmfind.net/
--
Libvir-list mailing list
Libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] some unsorted questions
On Thu, Jul 17, 2008 at 01:29:45AM +0200, Stefan de Konink wrote:
> Secondly, I am a bit distracted by the domid concept. These ids are not available before a domain is launched. I think it would be interesting to allow signed values. In this way the 'defined' but not active domains would get a negative value and a running domain a positive value (Dom0 gets 0). This would have far fewer implications than using a UUID consistently throughout the codebase (not speaking about the extra overhead).

There are 3 identifiers for domains:

- ID   - unique amongst all running domains on a hypervisor
- Name - unique amongst all domains on a hypervisor
- UUID - unique amongst all domains in a datacenter

So if you want to track inactive domains, you have a choice of name or UUID. The recommendation is always that applications use UUID to track domains internally. Name should mostly be used when interacting with a user, not for internal application tracking.

Daniel
--
|: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://ovirt.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|
Re: [libvirt] [PATCH] repeat lookup by name in LookupByID
Daniel Veillard writes:
> On Wed, Jul 16, 2008 at 08:10:21PM +0100, Daniel P. Berrange wrote: Yes, the documentation is wrong - all inactive VMs have an ID of -1, and thus lookup-by-ID is nonsensical for inactive VMs. If any application did make use of this change which falls back to lookup-by-name, then it would only ever work with OpenVZ and not any of the other libvirt drivers, which isn't useful behaviour. [...] Then the virLookupById description must be updated; I'm not against it, but we need to be coherent. Indeed, the docs need to be clarified. Okay, what about:
>
>  * Try to find a domain based on the hypervisor ID number
>  * Note that this won't work for inactive domains which have an ID of -1,
>  * in that case a lookup based on the Name or UUID needs to be done instead.

Ok. In that case we may disable lookup-by-id in the undefine subcommand.

> and then revert that specific part of the patch, as attached. Also I would do a 'make rebuild' in the doc directory and push the doc update.
>
> Daniel

Index: virsh.c
===================================================================
RCS file: /data/cvs/libvirt/src/virsh.c,v
retrieving revision 1.155
diff -u -p -r1.155 virsh.c
--- virsh.c	29 May 2008 14:56:12 -0000	1.155
+++ virsh.c	17 Jul 2008 09:04:17 -0000
@@ -978,7 +978,8 @@ cmdUndefine(vshControl * ctl, vshCmd * c
     if (!vshConnectionUsability(ctl, ctl->conn, TRUE))
         return FALSE;

-    if (!(dom = vshCommandOptDomain(ctl, cmd, domain, name)))
+    if (!(dom = vshCommandOptDomainBy(ctl, cmd, domain, name,
+                                      VSH_BYNAME|VSH_BYUUID)))
         return FALSE;

     if (virDomainUndefine(dom) == 0) {
Re: [libvirt] [PATCH] repeat lookup by name in LookupByID
On Thu, Jul 17, 2008 at 01:20:59PM +0400, Evgeniy Sokolov wrote:
> Daniel Veillard writes:
>> On Wed, Jul 16, 2008 at 08:10:21PM +0100, Daniel P. Berrange wrote: Yes, the documentation is wrong - all inactive VMs have an ID of -1, and thus lookup-by-ID is nonsensical for inactive VMs. If any application did make use of this change which falls back to lookup-by-name, then it would only ever work with OpenVZ and not any of the other libvirt drivers, which isn't useful behaviour. [...] Then the virLookupById description must be updated; I'm not against it, but we need to be coherent. Indeed, the docs need to be clarified. Okay, what about:
>>
>>  * Try to find a domain based on the hypervisor ID number
>>  * Note that this won't work for inactive domains which have an ID of -1,
>>  * in that case a lookup based on the Name or UUID needs to be done instead.
>
> Ok. In that case we may disable lookup-by-id in the undefine subcommand.

Ah, so that's why you were seeing the error. Yes, this makes sense, because a VM has to be shut off before undefine is allowed.

> Index: virsh.c
> ===================================================================
> RCS file: /data/cvs/libvirt/src/virsh.c,v
> retrieving revision 1.155
> diff -u -p -r1.155 virsh.c
> --- virsh.c	29 May 2008 14:56:12 -0000	1.155
> +++ virsh.c	17 Jul 2008 09:04:17 -0000
> @@ -978,7 +978,8 @@ cmdUndefine(vshControl * ctl, vshCmd * c
>      if (!vshConnectionUsability(ctl, ctl->conn, TRUE))
>          return FALSE;
>
> -    if (!(dom = vshCommandOptDomain(ctl, cmd, domain, name)))
> +    if (!(dom = vshCommandOptDomainBy(ctl, cmd, domain, name,
> +                                      VSH_BYNAME|VSH_BYUUID)))
>          return FALSE;
>
>      if (virDomainUndefine(dom) == 0) {

ACK

Daniel
Re: [libvirt] some unsorted questions
On Thu, Jul 17, 2008 at 01:29:45AM +0200, Stefan de Konink wrote:
> If this gets implemented I would suggest a call that fetches all domains from a running system, and not only the defined or only the active ones.

This is a good idea regardless. The current APIs require an application to do:

  num = list number of domains
  for 1 to num
      lookup domain by id

In the case of the Xen drivers, this requires O(n) calls to XenD, which are rather expensive. XenD does actually have the ability to return data about all domains in a single request. So if we had an API for fetching all domains at once, it'd only require O(1) expensive XenD calls. I'd imagine something like this:

  int virConnectListAllDomains(virConnectPtr conn,
                               virDomainPtr **domains,
                               int stateflags);

The 'stateflags' parameter would be a bit-field where each bit corresponded to one of the virDomainState enumeration values. The 'domains' list would be allocated by libvirt and filled in with all the domain objects, with the total number of domains as the return value.
So as an example, listing all paused domains:

  virDomainPtr *domains;
  int ndomains;

  ndomains = virConnectListAllDomains(conn, &domains, (1 << VIR_DOMAIN_PAUSED));

We probably want to define constants for the latter set of flags:

  #define VIR_DOMAIN_LIST_NOSTATE  (1 << VIR_DOMAIN_NOSTATE)
  #define VIR_DOMAIN_LIST_RUNNING  (1 << VIR_DOMAIN_RUNNING)
  #define VIR_DOMAIN_LIST_BLOCKED  (1 << VIR_DOMAIN_BLOCKED)
  #define VIR_DOMAIN_LIST_PAUSED   (1 << VIR_DOMAIN_PAUSED)
  #define VIR_DOMAIN_LIST_SHUTDOWN (1 << VIR_DOMAIN_SHUTDOWN)
  #define VIR_DOMAIN_LIST_SHUTOFF  (1 << VIR_DOMAIN_SHUTOFF)
  #define VIR_DOMAIN_LIST_CRASHED  (1 << VIR_DOMAIN_CRASHED)

And some convenience combos:

  #define VIR_DOMAIN_LIST_ACTIVE   (VIR_DOMAIN_LIST_NOSTATE | VIR_DOMAIN_LIST_RUNNING | \
                                    VIR_DOMAIN_LIST_BLOCKED | VIR_DOMAIN_LIST_PAUSED | \
                                    VIR_DOMAIN_LIST_SHUTDOWN | VIR_DOMAIN_LIST_CRASHED)
  #define VIR_DOMAIN_LIST_INACTIVE (VIR_DOMAIN_LIST_SHUTOFF)
  #define VIR_DOMAIN_LIST_ALL      (~0)

The same style of API can be added for listing virNetwork and virStoragePool objects.

Daniel
[libvirt] [PATCH] remove unnecessary V = NULL; stmts after VIR_FREE(V)
Doing a review (in progress), I spotted one of these, so went in search of others. They're harmless, so this is more a heads up than anything else. I am happy to defer application until the patch queue has been reduced.

From d97af7667699529c216835d806d1f0c6f698a70d Mon Sep 17 00:00:00 2001
From: Jim Meyering [EMAIL PROTECTED]
Date: Thu, 17 Jul 2008 12:05:44 +0200
Subject: [PATCH] remove unnecessary V = NULL; stmts after VIR_FREE(V)

* src/domain_conf.c (virDomainChrDefParseXML)
(virDomainNetDefParseXML): Likewise.
* src/iptables.c (iptRuleFree): Likewise.
* src/storage_backend.c (virStorageBackendRunProgRegex): Likewise.
* src/test.c (testOpenFromFile): Likewise.
* src/xm_internal.c (xenXMAttachInterface): Likewise.
* src/xml.c (virDomainParseXMLOSDescHVM): Likewise.
* src/xmlrpc.c (xmlRpcCallRaw): Likewise.
---
 src/domain_conf.c     |    5 +----
 src/iptables.c        |    2 --
 src/storage_backend.c |    4 +---
 src/test.c            |    8 ++------
 src/xm_internal.c     |    1 -
 src/xml.c             |    1 -
 src/xmlrpc.c          |    1 -
 7 files changed, 4 insertions(+), 18 deletions(-)

diff --git a/src/domain_conf.c b/src/domain_conf.c
index 82c0ee6..6340c2a 100644
--- a/src/domain_conf.c
+++ b/src/domain_conf.c
@@ -696,7 +696,6 @@ virDomainNetDefParseXML(virConnectPtr conn,
             if (STRPREFIX((const char*)ifname, "vnet")) {
                 /* An auto-generated target name, blank it out */
                 VIR_FREE(ifname);
-                ifname = NULL;
             }
         } else if ((script == NULL) &&
                    (def->type == VIR_DOMAIN_NET_TYPE_ETHERNET) &&
@@ -954,10 +953,8 @@ virDomainChrDefParseXML(virConnectPtr conn,
                     bindService = virXMLPropString(cur, "service");
                 }

-                if (def->type == VIR_DOMAIN_CHR_TYPE_UDP) {
+                if (def->type == VIR_DOMAIN_CHR_TYPE_UDP)
                     VIR_FREE(mode);
-                    mode = NULL;
-                }
             }
         } else if (xmlStrEqual(cur->name, BAD_CAST "protocol")) {
             if (protocol == NULL)
diff --git a/src/iptables.c b/src/iptables.c
index e7613e2..3e3a1a2 100644
--- a/src/iptables.c
+++ b/src/iptables.c
@@ -266,14 +266,12 @@ static void
 iptRuleFree(iptRule *rule)
 {
     VIR_FREE(rule->rule);
-    rule->rule = NULL;

     if (rule->argv) {
         int i = 0;

         while (rule->argv[i])
             VIR_FREE(rule->argv[i++]);

         VIR_FREE(rule->argv);
-        rule->argv = NULL;
     }
 }
diff --git a/src/storage_backend.c b/src/storage_backend.c
index 3e4e39c..a164a08 100644
--- a/src/storage_backend.c
+++ b/src/storage_backend.c
@@ -444,10 +444,8 @@ virStorageBackendRunProgRegex(virConnectPtr conn,
                 goto cleanup;

             /* Release matches & restart to matching the first regex */
-            for (j = 0 ; j < totgroups ; j++) {
+            for (j = 0 ; j < totgroups ; j++)
                 VIR_FREE(groups[j]);
-                groups[j] = NULL;
-            }
             maxReg = 0;
             ngroup = 0;
         }
diff --git a/src/test.c b/src/test.c
index d0bb003..b7b9df0 100644
--- a/src/test.c
+++ b/src/test.c
@@ -461,10 +461,8 @@ static int testOpenFromFile(virConnectPtr conn,
         dom->def->id = privconn->nextDomID++;
         dom->persistent = 1;
     }
-    if (domains != NULL) {
+    if (domains != NULL)
         VIR_FREE(domains);
-        domains = NULL;
-    }

     ret = virXPathNodeSet("/node/network", ctxt, &networks);
     if (ret < 0) {
@@ -498,10 +496,8 @@ static int testOpenFromFile(virConnectPtr conn,
         net->persistent = 1;
     }

-    if (networks != NULL) {
+    if (networks != NULL)
         VIR_FREE(networks);
-        networks = NULL;
-    }

     xmlXPathFreeContext(ctxt);
     xmlFreeDoc(xml);
diff --git a/src/xm_internal.c b/src/xm_internal.c
index 3b264a8..60f32fe 100644
--- a/src/xm_internal.c
+++ b/src/xm_internal.c
@@ -2925,7 +2925,6 @@ xenXMAttachInterface(virDomainPtr domain, xmlXPathContextPtr ctxt, int hvm,
             if (virMacAddrCompare (dommac, (const char *) mac) == 0) {
                 if (autoassign) {
                     VIR_FREE(mac);
-                    mac = NULL;
                     if (!(mac = (xmlChar *)xenXMAutoAssignMac()))
                         goto cleanup;
                     /* initialize the list */
diff --git a/src/xml.c b/src/xml.c
index d5730ed..477d466 100644
--- a/src/xml.c
+++ b/src/xml.c
@@ -1200,7 +1200,6 @@ virDomainParseXMLOSDescHVM(virConnectPtr conn, xmlNodePtr node,
             xmlFree(bus);
         }
         VIR_FREE(nodes);
-        nodes = NULL;
     }

     cur = virXPathNode("/domain/devices/parallel[1]", ctxt);
diff --git a/src/xmlrpc.c b/src/xmlrpc.c
index d627607..cbca389 100644
--- a/src/xmlrpc.c
+++ b/src/xmlrpc.c
@@ -443,7 +443,6 @@ static char *xmlRpcCallRaw(const
[libvirt] Re: [PATCH] remove unnecessary V = NULL; stmts after VIR_FREE(V)
On Thu, Jul 17, 2008 at 12:11:13PM +0200, Jim Meyering wrote:
> Doing a review (in progress), I spotted one of these, so went in search of others. They're harmless, so this is more a heads up than anything else. I am happy to defer application until the patch queue has been reduced.
>
> From d97af7667699529c216835d806d1f0c6f698a70d Mon Sep 17 00:00:00 2001
> From: Jim Meyering [EMAIL PROTECTED]
> Date: Thu, 17 Jul 2008 12:05:44 +0200
> Subject: [PATCH] remove unnecessary V = NULL; stmts after VIR_FREE(V)
>
> * src/domain_conf.c (virDomainChrDefParseXML)
> (virDomainNetDefParseXML): Likewise.
> * src/iptables.c (iptRuleFree): Likewise.
> * src/storage_backend.c (virStorageBackendRunProgRegex): Likewise.
> * src/test.c (testOpenFromFile): Likewise.
> * src/xm_internal.c (xenXMAttachInterface): Likewise.
> * src/xml.c (virDomainParseXMLOSDescHVM): Likewise.
> * src/xmlrpc.c (xmlRpcCallRaw): Likewise.

ACK for all except the two that touch xm_internal.c and xml.c, since they'll cause really painful conflicts with my Xen driver refactoring, and chances are I've removed them anyway.

Regards,
Daniel
[libvirt] [PATCH] cpu usage in OpenVZ
OpenVZ calculates statistics and allows getting them. Added a function for getting the cpu usage of a container.

Index: src/openvz_driver.c
===================================================================
RCS file: /data/cvs/libvirt/src/openvz_driver.c,v
retrieving revision 1.31
diff -u -p -r1.31 openvz_driver.c
--- src/openvz_driver.c	16 Jul 2008 20:42:38 -0000	1.31
+++ src/openvz_driver.c	17 Jul 2008 10:50:15 -0000
@@ -94,6 +94,8 @@ static int openvzDomainUndefine(virDomai
 static int convCmdbufExec(char cmdbuf[], char *cmdExec[]);
 static void cmdExecFree(char *cmdExec[]);

+static int openvzGetProcessInfo(unsigned long long *cpuTime, int vpsid);
+
 struct openvz_driver ovz_driver;

 static int convCmdbufExec(char cmdbuf[], char *cmdExec[])
@@ -279,6 +281,15 @@ static int openvzDomainGetInfo(virDomain

     info->state = vm->status;

+    if (!openvzIsActiveVM(vm)) {
+        info->cpuTime = 0;
+    } else {
+        if (openvzGetProcessInfo(&(info->cpuTime), dom->id) < 0) {
+            openvzError(dom->conn, VIR_ERR_OPERATION_FAILED, ("cannot read cputime for domain"));
+            return -1;
+        }
+    }
+
     /* TODO These need to be calculated differently for OpenVZ */
     //info->cpuTime =
     //info->maxMem = vm->def->maxmem;
@@ -689,6 +700,48 @@ static int openvzListDefinedDomains(virC
     return got;
 }

+static int openvzGetProcessInfo(unsigned long long *cpuTime, int vpsid) {
+    int fd;
+    char line[PATH_MAX];
+    unsigned long long usertime, systime, nicetime;
+    int readvps = 0, ret;
+
+    /* read statistics from /proc/vz/vestat.
+       sample:
+       Version: 2.2
+          VEID      user     nice   system   uptime             idle   other
+            33        78        0     1330 59454597  142650441835148   other
+    */
+
+    fd = open("/proc/vz/vestat", O_RDONLY);
+    if (fd == -1)
+        return -1;
+
+    while (1) {
+        ret = openvz_readline(fd, line, sizeof(line));
+        if (ret <= 0)
+            break;
+
+        if (sscanf(line, "%d %llu %llu %llu", &readvps, &usertime, &nicetime, &systime) != 4)
+            continue;
+
+        if (readvps == vpsid)
+            break;
+    }
+
+    close(fd);
+    if (ret < 0)
+        return -1;
+
+    if (readvps != vpsid) /* not found */
+        return -1;
+
+    /* convert jiffies to nanoseconds */
+    *cpuTime = 1000ull * 1000ull * 1000ull * (usertime + nicetime + systime) / (unsigned long long)sysconf(_SC_CLK_TCK);
+
+    return 0;
+}
+
 static int openvzNumDefinedDomains(virConnectPtr conn ATTRIBUTE_UNUSED) {
     return ovz_driver.num_inactive;
 }
Re: [libvirt] [PATCH] cpu usage in OpenVZ
On Thu, Jul 17, 2008 at 03:04:46PM +0400, Evgeniy Sokolov wrote:
> OpenVZ calculates statistics and allows getting them. Added a function for getting the cpu usage of a container.

Modulo some minor comments, ACK for this patch.

> +    if (!openvzIsActiveVM(vm)) {
> +        info->cpuTime = 0;
> +    } else {
> +        if (openvzGetProcessInfo(&(info->cpuTime), dom->id) < 0) {
> +            openvzError(dom->conn, VIR_ERR_OPERATION_FAILED, ("cannot read cputime for domain"));

Need to have a leading '_' in front of the string to mark it for translation.

> +            return -1;
> +        }
> +    }
> +
>      /* TODO These need to be calculated differently for OpenVZ */
>      //info->cpuTime =
>      //info->maxMem = vm->def->maxmem;
> @@ -689,6 +700,48 @@ static int openvzListDefinedDomains(virC
>      return got;
>  }
>
> +static int openvzGetProcessInfo(unsigned long long *cpuTime, int vpsid) {
> +    int fd;
> +    char line[PATH_MAX];

Best to use something else as the size here - we're not reading a path. We should try to eliminate uses of PATH_MAX in libvirt, since POSIX allows for it to be undefined, or stupidly huge. I reckon 1024 would do the job for line length in this case.

> +    if (readvps != vpsid) /* not found */
> +        return -1;
> +
> +    /* convert jiffies to nanoseconds */
> +    *cpuTime = 1000ull * 1000ull * 1000ull * (usertime + nicetime + systime) / (unsigned long long)sysconf(_SC_CLK_TCK);

Can we break this expression across multiple lines to avoid going so far over 80 chars?

Daniel
Re: [libvirt] [PATCH] repeat lookup by name in LookupByID
On Thu, Jul 17, 2008 at 10:30:19AM +0100, Daniel P. Berrange wrote:
> On Thu, Jul 17, 2008 at 01:20:59PM +0400, Evgeniy Sokolov wrote:
> [...]
>> okay, what about:
>>  * Try to find a domain based on the hypervisor ID number
>>  * Note that this won't work for inactive domains which have an ID of -1,
>>  * in that case a lookup based on the Name or UUID needs to be done instead.
>>
>> Ok. In that case we may disable lookup-by-id in the undefine subcommand.
>
> Ah, so that's why you were seeing the error. Yes, this makes sense, because a VM has to be shut off before undefine is allowed.

Okay, understood now :-) Applied and committed!

Daniel
Re: [libvirt] [PATCH] Fix pool-create for netfs format 'auto'
On Wed, Jul 16, 2008 at 01:30:33PM -0400, Cole Robinson wrote:
> Trying to pool-create a netfs pool with the format type 'auto' (as in, to autodetect the format) runs the command:
>
>   mount -t auto munged-source-path
>
> '-t auto' seems to do its job for regular file systems, but actually fails with nfs or cifs (I assume anything that requires an external mount program). Strangely though, the command:
>
>   mount munged-source-path
>
> will do the right thing. The attached patch fixes the generated command to work in the above case, fully removing the '-t type' piece if 'auto' is specified for a netfs pool. I tested the intended case, as well as regular fs pools with format=auto, and netfs with format=nfs, and all seemed to work fine.

ACK. I actually thought we'd fixed this already, since Chris came across it a while back.

Daniel
Re: [libvirt] some unsorted questions
Daniel P. Berrange wrote:
> In the case of the Xen drivers, this requires O(n) calls to XenD which are rather expensive. XenD does actually have the ability to return data about all domains in a single request. So if we had an API for fetching all domains at once it'd only require O(1) expensive XenD calls. I'd imagine something like this:
>
>   int virConnectListAllDomains(virConnectPtr conn,
>                                virDomainPtr **domains,
>                                int stateflags);
>
> The 'stateflags' parameter would be a bit-field where each bit corresponded to one of the virDomainState enumeration values. The 'domains' list would be allocated by libvirt and filled in with all the domain objects, with the total number of domains as the return value.

Yes; initially I was looking for something like this. I think this is a great idea. Now I wonder if it could be faster (with the lookup mechanism) to check if a domain exists before connecting to it.

Stefan
Re: [libvirt] some unsorted questions
Daniel P. Berrange wrote:
> On Thu, Jul 17, 2008 at 01:29:45AM +0200, Stefan de Konink wrote:
>> If this gets implemented I would suggest a call that fetches all domains from a running system, and not only the defined or only the active ones.
> This is a good idea regardless. The current APIs require an application to do:
>
>   num = list number of domains
>   for 1 to num
>       lookup domain by id
>
> In the case of the Xen drivers, this requires O(n) calls to XenD which are rather expensive. XenD does actually have the ability to return data about all domains in a single request. So if we had an API for fetching all domains at once it'd only require O(1) expensive XenD calls. I'd imagine something like this:
>
>   int virConnectListAllDomains(virConnectPtr conn,
>                                virDomainPtr **domains,
>                                int stateflags);

I've thought about something similar as well. It would also be useful to have a command like ListUUIDs with similar state flags, giving apps a lighter-weight means of polling for domain state changes (ex. virt-manager).

- Cole
Re: [libvirt] some unsorted questions
On Thu, Jul 17, 2008 at 10:22:05AM -0400, Cole Robinson wrote:
> [...]
> I've thought about something similar as well. It would also be useful to have a command like ListUUIDs with similar state flags, giving apps a lighter-weight means of polling for domain state changes (ex. virt-manager).

A plain ListUUIDs wouldn't be any more efficient - we'd still have to hit either Xenstore or XenD to get the listing, and once you're doing that you may as well return the name and ID too in a virDomainPtr object.

Daniel
Re: [libvirt] [PATCH] Fix pool-create for netfs format 'auto'
Daniel P. Berrange wrote:
> ACK. I actually thought we'd fixed this already, since Chris came across it a while back.

I thought so too, but then I think I remember that I didn't actually change the code, just configured around it. In any case, this seems to be a good change to me too, so ACK.

Chris Lalancette
[libvirt] RFC? finding potential storage pool resources
Hi -

I'm looking into using (which I think means extending) libvirt to enumerate potential storage pool resources, in particular:

* existing physical disk device names (for creating disk pools)
* existing logical volume group names (for creating logical pools)

Note that List{Defined,Active}StoragePools doesn't do the trick. Suppose this is a new host and I'm trying to start defining the storage pools (and I want to be able to use existing volume groups, for example). I don't see how to do that within the current libvirt framework. If I'm missing something, please let me know (and ignore the rest of this message ...).

This could be done by adding some new calls like:

  int virConnectListPhysDisks(virConnectPtr conn, char ** const name, int maxnames)
  int virConnectListLogicalVolGroups(virConnectPtr conn, char ** const name, int maxnames)

... plus a pair of NumOf functions ...

But these are each storage-driver specific. For example, if I'm not using the logical storage driver, I have no need (or means) of listing volume groups. So maybe it's cleaner to fold these two functions into one, parameterized by storage driver type:

  int virConnectListStorageSources(virConnectPtr conn, const char *type, char ** const name, int maxnames)

... plus a NumOf function ...

where type is one of the supported storage pool types. So, if type is "disk", ListStorageSources acts like ListPhysDisks, and if type is "logical", ListStorageSources acts like ListLogicalVolumeGroups (and we return empty lists or some sort of "unsupported" error for any other types ... we can't list all possible network servers, for instance).

What do you all think?

Thanks,
Dave
Re: [libvirt] RFC? finding potential storage pool resources
On Thu, Jul 17, 2008 at 05:28:01PM -0400, David Lively wrote:
> Hi - I'm looking into using (which I think means extending) libvirt to enumerate potential storage pool resources, in particular:
> * existing physical disk device names (for creating disk pools)
> * existing logical volume group names (for creating logical pools)
> Note that List{Defined,Active}StoragePools doesn't do the trick. Suppose this is a new host and I'm trying to start defining the storage pools (and I want to be able to use existing volume groups, for example). I don't see how to do that within the current libvirt framework. If I'm missing something, please let me know (and ignore the rest of this message ...).

You're not missing anything - this is a TODO item. When I wrote the original storage APIs, I had a prototype:

  http://www.redhat.com/archives/libvir-list/2008-February/msg00107.html
  http://www.redhat.com/archives/libvir-list/2008-February/msg00108.html

  int virConnectDiscoverStoragePools(virConnectPtr conn,
                                     const char *hostname,
                                     const char *type,
                                     unsigned int flags,
                                     char ***xmlDesc);

which was intended to probe for available storage of the requested type (e.g. LVM vs disks vs iSCSI targets, etc.) and return a list of XML documents describing each discovered object. This could be fed into the virStoragePoolDefineXML API.

I didn't include this in the end, because I wasn't happy with the API contract. For example, it only allows a hostname to be specified as metadata, but it may be desirable to include a port number as well for network based storage.

> This could be done by adding some new calls like:
>   int virConnectListPhysDisks(virConnectPtr conn, char ** const name, int maxnames)
>   int virConnectListLogicalVolGroups(virConnectPtr conn, char ** const name, int maxnames)
> ... plus a pair of NumOf functions ...
> But these are each storage-driver specific. For example, if I'm not using the logical storage driver, I have no need (or means) of listing volume groups.
> So maybe it's cleaner to fold these two functions into one, parameterized by storage driver type:
>   int virConnectListStorageSources(virConnectPtr conn, const char *type, char ** const name, int maxnames)
> ... plus a NumOf function ...
> where type is one of the supported storage pool types.

Yes, I definitely want the discovery API to be able to handle disks, LVM, iSCSI, FibreChannel, NFS - basically everything in one. That said, in the case of physical disks, we may well end up with a parallel way to discover disk device names, via generic hardware device enumeration APIs:

  http://www.redhat.com/archives/libvir-list/2008-April/msg5.html

> So, if type is "disk", ListStorageSources acts like ListPhysDisks, and if type is "logical", ListStorageSources acts like ListLogicalVolumeGroups (and we return empty lists or some sort of "unsupported" error for any other types ... can't list all possible network servers, for instance).

For network sources I anticipated that you'd provide a hostname when triggering discovery. For NFS, this is sufficient to let you query all exported volumes. For iSCSI this lets you query available target names.

Daniel
Re: [libvirt] [PATCH]: Add MIGRATE_LIVE definition to ruby-libvirt bindings
On Mon, 2008-07-07 at 17:28 +0200, Chris Lalancette wrote:
> Attached is a trivial patch to add the MIGRATE_LIVE flag into the ruby-libvirt bindings.
> Signed-off-by: Chris Lalancette [EMAIL PROTECTED]

ACK .. Committed.

(1) Do you need a new ruby-libvirt release for this?
(2) How far back has 'enum virDomainMigrateFlags' been around? IOW, do I need to worry about compilation breaking on old libvirt releases?

David