Re: [libvirt] [PATCH v3] leaseshelper: improvements to support all events

2014-08-29 Thread Nehal J Wani
On Wed, Aug 20, 2014 at 4:23 PM, Roman Bogorodskiy
 wrote:
>   Nehal J Wani wrote:
>
>> This patch enables the helper program to detect event(s) triggered when there
>> is a change in lease length or expiry and client-id. This transfers complete
>> control of leases database to libvirt and obsoletes use of the lease database
>> file (.leases). That file will not be created, read, or 
>> written.
>> This is achieved by adding the option --leasefile-ro to dnsmasq and passing a
>> custom env var to leaseshelper, which helps us map events related to leases
>> with their corresponding network bridges, no matter what the event is.
>>
>> Also, this requires the addition of a new non-lease entry in our custom lease
>> database: "server-duid". It is required to identify a DHCPv6 server.
>>
>> Now that dnsmasq doesn't maintain its own leases database, it relies on our
>> helper program to tell it about previous leases and server duid. Thus, this
>> patch makes our leases program honor an extra action: "init", in which it 
>> sends
>> the known info in a particular format to dnsmasq by printing it to stdout.
>>
>> ---
>>
>>  This is compatible with libvirt 1.2.6 as only additions have been
>>  introduced, and the old leases file (*.status) will still be supported.
>>
>>  v3: * Add server-duid as an entry in the lease object for every ipv6 lease.
>>  * Remove unnecessary variables and double copies.
>>  * Take value from DNSMASQ_OLD_HOSTNAME if hostname is not known.
>>
>>  v2: http://www.redhat.com/archives/libvir-list/2014-July/msg01109.html
>>
>>  v1: https://www.redhat.com/archives/libvir-list/2014-July/msg00568.html
>>
>
> JFI: I did some basic testing and it's working fine for me so far.
>
>> diff --git a/src/network/leaseshelper.c b/src/network/leaseshelper.c
>> index c8543a2..e984cbb 100644
>> --- a/src/network/leaseshelper.c
>> +++ b/src/network/leaseshelper.c
>
>
>> @@ -176,6 +188,10 @@ main(int argc, char **argv)
>>  if (argc == 5)
>>  hostname = argv[4];
>>
>> +/* In case hostname is still unkown, use the last known one */
>
> s/unkown/unknown/
>
>> +if (!hostname)
>> +hostname = virGetEnvAllowSUID("DNSMASQ_OLD_HOSTNAME");
>> +
>
> Roman Bogorodskiy

Ping!

-- 
Nehal J Wani

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
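
The "init" action described in the commit message above can be sketched
roughly as follows. This is a purely illustrative outline, not the real
leaseshelper.c; the helper functions named in the comments are hypothetical.

#include <stdio.h>
#include <string.h>

/* Sketch only: dnsmasq invokes its lease script with an action argument
 * ("add", "old", "del" and, when started with --leasefile-ro, "init").
 * On "init" the helper is expected to dump every lease it knows about
 * (plus the DHCPv6 server-duid) to stdout so dnsmasq can rebuild its
 * in-memory lease state; the other actions update libvirt's custom
 * .status lease database instead. */
int main(int argc, char **argv)
{
    const char *action = argc > 1 ? argv[1] : "";

    if (strcmp(action, "init") == 0) {
        /* print_known_leases_and_duid();  -- hypothetical helper that
         * writes the stored leases to stdout in dnsmasq's lease format */
        return 0;
    }

    if (strcmp(action, "add") == 0 ||
        strcmp(action, "old") == 0 ||
        strcmp(action, "del") == 0) {
        /* update_custom_lease_db(action, argv);  -- hypothetical helper */
        return 0;
    }

    fprintf(stderr, "unknown action '%s'\n", action);
    return 1;
}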


Re: [libvirt] Cosmetic bug on libvirt.org

2014-08-29 Thread Roman Bogorodskiy
  Christophe Fergeau wrote:

> Hey,
> 
> I just noticed that clicking on "FAQ" in the left sidebar on libvirt.org
> main page highlights the "Wiki" cell instead of highlighting "FAQ".
> I have no idea how the website works, so I'm just reporting it here.

The FAQ is not part of the generated documentation but just a Wiki page.
So on one hand it's logical, as when viewing the FAQ the user is on the Wiki,
but on the other hand it does look a little confusing indeed.

Roman Bogorodskiy


--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] [PATCH 1.2.8] storage: zfs: fix double listing of new volumes

2014-08-29 Thread Roman Bogorodskiy
  John Ferlan wrote:

> 
> 
> On 08/27/2014 05:02 AM, Roman Bogorodskiy wrote:
> > Currently, after calling commands to create a new volumes,
> > virStorageBackendZFSCreateVol calls virStorageBackendZFSFindVols that
> > calls virStorageBackendZFSParseVol.
> > 
> > virStorageBackendZFSParseVol checks if a volume already exists by
> > trying to get it using virStorageVolDefFindByName.
> > 
> > For a just-created volume it returns NULL, so the volume is reported as
> > new and appended to pool->volumes. This causes the volume to be listed
> > twice, as storageVolCreateXML appends this new volume to the list as
> > well.
> > 
> > Fix that by passing a new volume definition to
> > virStorageBackendZFSParseVol so it could determine if it needs to add
> > this volume to the list.
> > ---
> >  src/storage/storage_backend_zfs.c | 45 
> > ++-
> >  1 file changed, 26 insertions(+), 19 deletions(-)
> > 
> 
> ACK
> 
> Although it seems the "main" reason the create backend called the
> FindVols was to ascertain if the volume was successfully created via the
> virCommandRun call.
> 
> I believe this is safe for 1.2.8

Pushed, thanks!

Yeah, this scheme looks a little awkward (that's not an excuse for me
introducing a bug :-) ), but on the other hand, currently there's no
function to retrieve information about a specific volume, and it probably
doesn't make sense to introduce one.

Roman Bogorodskiy


--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
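
The fix described in this thread boils down to letting the parser know which
volume the caller just created. A self-contained sketch of that pattern,
using simplified stand-in types rather than libvirt's real storage driver API:

#include <stddef.h>
#include <string.h>

typedef struct vol { char name[64]; struct vol *next; } vol;
typedef struct pool { vol *volumes; } pool;

static vol *find_by_name(pool *p, const char *name)
{
    for (vol *v = p->volumes; v; v = v->next)
        if (strcmp(v->name, name) == 0)
            return v;
    return NULL;
}

/* parse_vol is called both right after creating "newvol" and during a plain
 * pool refresh (newvol == NULL).  Append only when the volume is genuinely
 * unknown AND is not the one the caller just created -- in the real code
 * storageVolCreateXML appends that one itself, which is what previously
 * caused the double listing. */
static void parse_vol(pool *p, vol *newvol, vol *parsed)
{
    if (find_by_name(p, parsed->name))
        return;                         /* already tracked: nothing to add */

    if (newvol && strcmp(parsed->name, newvol->name) == 0)
        return;                         /* just-created volume: caller adds it */

    parsed->next = p->volumes;          /* truly new volume found on refresh */
    p->volumes = parsed;
}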

Re: [libvirt] [Qemu-devel] IO accounting overhaul

2014-08-29 Thread Benoît Canet
The Friday 29 Aug 2014 à 17:04:46 (+0100), Stefan Hajnoczi wrote :
> On Thu, Aug 28, 2014 at 04:38:09PM +0200, Benoît Canet wrote:
> > I collected some items of a cloud provider wishlist regarding I/O accounting.
> > 
> > In a cloud, I/O accounting can have 3 purposes: billing, helping the customers
> > and doing metrology to help the cloud provider seek hidden costs.
> > 
> > I'll cover the two former topics in this mail because they are the most
> > important ones business-wise.
> > 
> > 1) preferred place to collect billing IO accounting data:
> > 
> > For billing purposes the collected data must be as close as possible to what
> > the customer would see by using iostats in his VM.
> > 
> > The first conclusion we can draw is that the choice of collecting IO
> > accounting data used for billing in the block device models is right.
> 
> I agree.  When statistics are collected at lower layers it becomes hard
> for the end user to understand numbers that include hidden costs for
> image formats, network protocols, etc.
> 
> > 2) what to do with occurrences of rare events:
> > -
> > 
> > Another point is that QEMU developers agree that they don't know which
> > policy to apply to some I/O accounting events.
> > Must QEMU discard invalid write I/Os or account them as done?
> > Must QEMU count a failed read I/O as done?
> > 
> > When discussing this with a cloud provider the following appeared: these
> > decisions are really specific to each cloud provider and QEMU should not
> > implement them.
> > The right thing to do is to add accounting counters to collect these events.
> > 
> > Moreover these rare events are precious troubleshooting data so it's an 
> > additional
> > reason not to toss them.
> 
> Sounds good, network interface statistics also include error counters.
> 
> > 3) list of block I/O accounting metrics wished for billing and helping the
> > customers
> > ---
> > 
> > Basic I/O accounting data will end up making the customers' bills.
> > Extra I/O accounting information would be a precious help for the cloud
> > provider to implement a monitoring panel like Amazon CloudWatch.
> 
> One thing to be aware of is that counters inside QEMU cannot be trusted.
> If a malicious guest can overwrite memory in QEMU then the counters can
> be manipulated.
> 
> For most purposes this should be okay.  Just be aware that evil guests
> could manipulate their counters if a security hole is found in QEMU.
> 
> > Here is the list of counters and statistics I would like to help implement
> > in QEMU.
> > 
> > This is the most important part of the mail and the one I would like the 
> > community
> > review the most.
> > 
> > Once this list is settled I would proceed to implement the required 
> > infrastructure
> > in QEMU before using it in the device models.
> > 
> > /* volume of data transfered by the IOs */
> > read_bytes
> > write_bytes
> > 
> > /* operation count */
> > read_ios
> > write_ios
> > flush_ios
> > 
> > /* how many invalid IOs the guest submit */
> > invalid_read_ios
> > invalid_write_ios
> > invalid_flush_ios
> > 
> > /* how many io error happened */
> > read_ios_error
> > write_ios_error
> > flush_ios_error
> > 
> > /* account the time passed doing IOs */
> > total_read_time
> > total_write_time
> > total_flush_time
> > 
> > /* since when the volume is iddle */
> > qvolume_iddleness_time
> 
> ?

s/qv/v/

It's the time the volume spent being idle.
Amazon reports it in its tools.

> 
> > 
> > /* the following would compute latencies for slices of 1 second then toss the
> >  * result and start a new slice. A weighted summation of the instant latencies
> >  * could help to implement this.
> >  */
> > 1s_read_average_latency
> > 1s_write_average_latency
> > 1s_flush_average_latency
> > 
> > /* the former three numbers could be used to further compute a 1 minute 
> > slice value */
> > 1m_read_average_latency
> > 1m_write_average_latency
> > 1m_flush_average_latency
> > 
> > /* the former three numbers could be used to further compute a 1 hours 
> > slice value */
> > 1h_read_average_latency
> > 1h_write_average_latency
> > 1h_flush_average_latency
> > 
> > /* 1 second average number of requests in flight */
> > 1s_read_queue_depth
> > 1s_write_queue_depth
> > 
> > /* 1 minute average number of requests in flight */
> > 1m_read_queue_depth
> > 1m_write_queue_depth
> > 
> > /* 1 hours average number of requests in flight */
> > 1h_read_queue_depth
> > 1h_write_queue_depth
> 
> I think libvirt captures similar data.  At least virt-manager displays
> graphs with similar data (maybe for CPU, memory, or network instead of
> disk).
> 
> > 4) Making this happen
> > -
> > 
> > Outscale want to make these IO stat happen and gave me the go to do whatever
> > grunt is required to do

Re: [libvirt] [PATCH] Fix connection to already running session libvirtd

2014-08-29 Thread Eric Blake
On 08/29/2014 06:17 AM, Christophe Fergeau wrote:
> Hey,
> 
> On Fri, Aug 29, 2014 at 11:08:53AM +0200, Martin Kletzander wrote:
>> Although my git was a bit confused by the diff included in the commit
>> message.  I'd suggest just saying that most of the commit is a
>> whitespace change; people can see that using '-w' themselves.  That
>> toggle should even work with format-patch, but I'm not sure that
>> applies cleanly all the time.
> 
> Ah, I did not know git format-patch has such a switch. This produces a
> readable patch, but it's indeed not possible to apply it cleanly. I
> could have sent the readable version in reply to the full version
> though.
> 
> I've removed the diff from the commit log, but on second thought I
> forgot to remove one 'always true' test so I'll send a v2.

Sometimes, when a patch is that invasive, I'll do it in two parts - the
change with wrong indentation, followed by another patch that is
indentation-only.  Much easier to review.

Also, are you using the patience algorithm with git? (git config
diff.algorithm patience)  It makes indentation patches easier to review,
by not trying to interleave lone } and blank lines that correspond to
different code from pre- and post-patch.

-- 
Eric Blake   eblake redhat com+1-919-301-3266
Libvirt virtualization library http://libvirt.org



--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] [Qemu-devel] IO accounting overhaul

2014-08-29 Thread Stefan Hajnoczi
On Thu, Aug 28, 2014 at 04:38:09PM +0200, Benoît Canet wrote:
> I collected some items of a cloud provider wishlist regarding I/O accounting.
> 
> In a cloud, I/O accounting can have 3 purposes: billing, helping the customers
> and doing metrology to help the cloud provider seek hidden costs.
> 
> I'll cover the two former topics in this mail because they are the most
> important ones business-wise.
> 
> 1) preferred place to collect billing IO accounting data:
> 
> For billing purposes the collected data must be as close as possible to what
> the customer would see by using iostats in his VM.
> 
> The first conclusion we can draw is that the choice of collecting IO accounting
> data used for billing in the block device models is right.

I agree.  When statistics are collected at lower layers it becomes hard
for the end user to understand numbers that include hidden costs for
image formats, network protocols, etc.

> 2) what to do with occurrences of rare events:
> -
> 
> Another point is that QEMU developers agree that they don't know which policy
> to apply to some I/O accounting events.
> Must QEMU discard invalid write I/Os or account them as done?
> Must QEMU count a failed read I/O as done?
> 
> When discussing this with a cloud provider the following appeared: these
> decisions are really specific to each cloud provider and QEMU should not
> implement them.
> The right thing to do is to add accounting counters to collect these events.
> 
> Moreover these rare events are precious troubleshooting data so it's an 
> additional
> reason not to toss them.

Sounds good, network interface statistics also include error counters.

> 3) list of block I/O accounting metrics wished for billing and helping the
> customers
> ---
> 
> Basic I/O accounting data will end up making the customers' bills.
> Extra I/O accounting information would be a precious help for the cloud
> provider to implement a monitoring panel like Amazon CloudWatch.

One thing to be aware of is that counters inside QEMU cannot be trusted.
If a malicious guest can overwrite memory in QEMU then the counters can
be manipulated.

For most purposes this should be okay.  Just be aware that evil guests
could manipulate their counters if a security hole is found in QEMU.

> Here is the list of counters and statistics I would like to help implement in
> QEMU.
> 
> This is the most important part of the mail and the one I would like the 
> community
> review the most.
> 
> Once this list is settled I would proceed to implement the required 
> infrastructure
> in QEMU before using it in the device models.
> 
> /* volume of data transfered by the IOs */
> read_bytes
> write_bytes
> 
> /* operation count */
> read_ios
> write_ios
> flush_ios
> 
> /* how many invalid IOs the guest submit */
> invalid_read_ios
> invalid_write_ios
> invalid_flush_ios
> 
> /* how many io error happened */
> read_ios_error
> write_ios_error
> flush_ios_error
> 
> /* account the time passed doing IOs */
> total_read_time
> total_write_time
> total_flush_time
> 
> /* since when the volume is iddle */
> qvolume_iddleness_time

?

> 
> /* the following would compute latencies for slices of 1 second then toss the
>  * result and start a new slice. A weighted summation of the instant latencies
>  * could help to implement this.
>  */
> 1s_read_average_latency
> 1s_write_average_latency
> 1s_flush_average_latency
> 
> /* the former three numbers could be used to further compute a 1 minute slice 
> value */
> 1m_read_average_latency
> 1m_write_average_latency
> 1m_flush_average_latency
> 
> /* the former three numbers could be used to further compute a 1 hours slice 
> value */
> 1h_read_average_latency
> 1h_write_average_latency
> 1h_flush_average_latency
> 
> /* 1 second average number of requests in flight */
> 1s_read_queue_depth
> 1s_write_queue_depth
> 
> /* 1 minute average number of requests in flight */
> 1m_read_queue_depth
> 1m_write_queue_depth
> 
> /* 1 hours average number of requests in flight */
> 1h_read_queue_depth
> 1h_write_queue_depth

I think libvirt captures similar data.  At least virt-manager displays
graphs with similar data (maybe for CPU, memory, or network instead of
disk).
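
As a purely illustrative aside, the per-device counters proposed above could
be grouped along these lines; the field names follow the mail, and the struct
itself is not part of any existing QEMU or libvirt API:

#include <stdint.h>

typedef struct BlockIOAccounting {
    uint64_t read_bytes, write_bytes;                  /* data transferred */
    uint64_t read_ios, write_ios, flush_ios;           /* operation counts */
    uint64_t invalid_read_ios, invalid_write_ios,
             invalid_flush_ios;                        /* invalid guest requests */
    uint64_t read_ios_error, write_ios_error,
             flush_ios_error;                          /* failed requests */
    uint64_t total_read_time, total_write_time,
             total_flush_time;                         /* time spent doing I/O, ns */
    int64_t  idle_since_ns;                            /* volume idleness marker */
} BlockIOAccounting;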

> 4) Making this happen
> -
> 
> Outscale wants to make these IO stats happen and gave me the go-ahead to do
> whatever grunt work is required to do so.
> That said, we could collaborate on some parts of the work.

Seems like a nice improvement to the query-blockstats available today.

CCing libvirt for management stack ideas.

Stefan
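
One way to read the 1-second latency slices proposed in the list above:
accumulate per-request latency inside the current one-second window, publish
the average when the window rolls over, then reset. A minimal sketch under
that assumption, not QEMU code:

#include <stdint.h>

typedef struct LatencySlice {
    int64_t  window_start_ns;   /* start of the current 1s slice */
    uint64_t total_latency_ns;  /* sum of request latencies in the slice */
    uint64_t nr_requests;       /* requests completed in the slice */
    double   last_avg_ns;       /* value published for the previous slice */
} LatencySlice;

/* Called once per completed request with its measured latency; rolls the
 * window over when more than one second has elapsed.  The 1m and 1h values
 * could then be derived from the published 1s averages. */
static void latency_slice_account(LatencySlice *s, int64_t now_ns,
                                  uint64_t latency_ns)
{
    if (now_ns - s->window_start_ns >= 1000000000LL) {
        s->last_avg_ns = s->nr_requests
            ? (double)s->total_latency_ns / s->nr_requests
            : 0.0;
        s->window_start_ns = now_ns;
        s->total_latency_ns = 0;
        s->nr_requests = 0;
    }
    s->total_latency_ns += latency_ns;
    s->nr_requests++;
}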


--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

[libvirt] Availability of release candidate 2 for libvirt-1.2.8

2014-08-29 Thread Daniel Veillard
  Also tagged, with signed tarball and rpms ;-) at the usual place:
ftp://libvirt.org/libvirt/

There were quite a lot of small changes since rc1, so please give it
some testing, also for portability to other systems/OSes.
If everything goes well I will push the final 1.2.8 on Tuesday morning.

  thanks!

Daniel

-- 
Daniel Veillard  | Open Source and Standards, Red Hat
veill...@redhat.com  | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
http://veillard.com/ | virtualization library  http://libvirt.org/

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH] conf: Check migration_host is valid or not during libvirt restarts

2014-08-29 Thread Jiri Denemark
On Fri, Aug 29, 2014 at 19:20:58 +0800, Chen Fan wrote:
> if the user specifies an invalid string as the migration hostname,
> like setting: migration_host = "XXX", libvirt should check
> it and return an error during libvirt restart.
> 
> Signed-off-by: Chen Fan 
> ---
>  src/qemu/qemu_conf.c | 40 
>  1 file changed, 40 insertions(+)
> 
> diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
> index e2ec54f..450ac5b 100644
> --- a/src/qemu/qemu_conf.c
> +++ b/src/qemu/qemu_conf.c
> @@ -33,6 +33,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  #include "virerror.h"
>  #include "qemu_conf.h"
> @@ -650,6 +651,45 @@ int virQEMUDriverConfigLoadFile(virQEMUDriverConfigPtr 
> cfg,
>  GET_VALUE_LONG("seccomp_sandbox", cfg->seccompSandbox);
>  
>  GET_VALUE_STR("migration_host", cfg->migrateHost);
> +if (cfg->migrateHost) {
> +struct addrinfo hints;
> +struct addrinfo *res;
> +
> +memset(&hints, 0, sizeof(hints));
> +hints.ai_flags = AI_ADDRCONFIG;
> +hints.ai_family = AF_UNSPEC;
> +
> +if (getaddrinfo(cfg->migrateHost, NULL, &hints, &res) != 0) {
> +virReportError(VIR_ERR_CONF_SYNTAX,
> +   _("migration_host: '%s' is not a valid hostname"),
> +   cfg->migrateHost);
> +goto cleanup;
> +}
> +
> +if (res == NULL) {
> +virReportError(VIR_ERR_CONF_SYNTAX,
> +   _("No IP address for host '%s' found"),
> +   cfg->migrateHost);
> +goto cleanup;
> +}
> +
> +freeaddrinfo(res);

I don't think this is a good idea. What if it's in fact valid and just
can't be resolved due to a temporary issue? It should only fail at the
time someone actually tries to migrate a domain.

> +
> +if (STRPREFIX(cfg->migrateHost, "localhost")) {
> +virReportError(VIR_ERR_CONF_SYNTAX, "%s",
> +   _("setting migration_host to 'localhost' is not 
> allowed"));
> +goto cleanup;
> +}
> +
> +if (STREQ(cfg->migrateHost, "127.0.0.1") ||
> +STREQ(cfg->migrateHost, "::1")) {
> +virReportError(VIR_ERR_CONF_SYNTAX, "%s",
> +   _("setting migration_host to '127.0.0.1' or '::1' 
> "
> + "is not allowed"));
> +goto cleanup;
> +}
> +}
> +

Checking for localhost could make sense.

>  GET_VALUE_STR("migration_address", cfg->migrationAddress);
>  
>  GET_VALUE_BOOL("log_timestamp", cfg->logTimestamp);

Jirka

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v2] Fix connection to already running session libvirtd

2014-08-29 Thread Christophe Fergeau
Since 1b807f92, connecting with virsh to an already running session
libvirtd fails with:
$ virsh list --all
error: failed to connect to the hypervisor
error: no valid connection
error: Failed to connect socket to
'/run/user/1000/libvirt/libvirt-sock': Transport endpoint is already
connected

This is caused by a logic error in virNetSocketNewConnectUnix: even if
the connection to the daemon socket succeeded, we still try to spawn the
daemon and then connect to it.
This commit changes the logic to not try to spawn libvirtd if we
successfully connected to its socket.

Most of this commit is whitespace changes; use -w to look at it.
---
Changes since v1:
- Removed now redundant test in the else branch


 src/rpc/virnetsocket.c | 102 +
 1 file changed, 52 insertions(+), 50 deletions(-)

diff --git a/src/rpc/virnetsocket.c b/src/rpc/virnetsocket.c
index 79258ef..9780e17 100644
--- a/src/rpc/virnetsocket.c
+++ b/src/rpc/virnetsocket.c
@@ -573,65 +573,67 @@ int virNetSocketNewConnectUNIX(const char *path,
 remoteAddr.data.un.sun_path[0] = '\0';
 
  retry:
-if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0 && !spawnDaemon) {
-virReportSystemError(errno, _("Failed to connect socket to '%s'"),
- path);
-goto error;
-} else if (spawnDaemon) {
-int status = 0;
-pid_t pid = 0;
-
-if ((passfd = socket(PF_UNIX, SOCK_STREAM, 0)) < 0) {
-virReportSystemError(errno, "%s", _("Failed to create socket"));
+if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0) {
+if (!spawnDaemon) {
+virReportSystemError(errno, _("Failed to connect socket to '%s'"),
+ path);
 goto error;
-}
+} else {
+int status = 0;
+pid_t pid = 0;
 
-/*
- * We have to fork() here, because umask() is set
- * per-process, chmod() is racy and fchmod() has undefined
- * behaviour on sockets according to POSIX, so it doesn't
- * work outside Linux.
- */
-if ((pid = virFork()) < 0)
-goto error;
+if ((passfd = socket(PF_UNIX, SOCK_STREAM, 0)) < 0) {
+virReportSystemError(errno, "%s", _("Failed to create 
socket"));
+goto error;
+}
 
-if (pid == 0) {
-umask(0077);
-if (bind(passfd, &remoteAddr.data.sa, remoteAddr.len) < 0)
-_exit(EXIT_FAILURE);
+/*
+ * We have to fork() here, because umask() is set
+ * per-process, chmod() is racy and fchmod() has undefined
+ * behaviour on sockets according to POSIX, so it doesn't
+ * work outside Linux.
+ */
+if ((pid = virFork()) < 0)
+goto error;
 
-_exit(EXIT_SUCCESS);
-}
+if (pid == 0) {
+umask(0077);
+if (bind(passfd, &remoteAddr.data.sa, remoteAddr.len) < 0)
+_exit(EXIT_FAILURE);
 
-if (virProcessWait(pid, &status, false) < 0)
-goto error;
+_exit(EXIT_SUCCESS);
+}
 
-if (status != EXIT_SUCCESS) {
-/*
- * OK, so the subprocces failed to bind() the socket.  This may 
mean
- * that another daemon was starting at the same time and succeeded
- * with its bind().  So we'll try connecting again, but this time
- * without spawning the daemon.
- */
-spawnDaemon = false;
-goto retry;
-}
+if (virProcessWait(pid, &status, false) < 0)
+goto error;
 
-if (listen(passfd, 0) < 0) {
-virReportSystemError(errno, "%s",
- _("Failed to listen on socket that's about "
-   "to be passed to the daemon"));
-goto error;
-}
+if (status != EXIT_SUCCESS) {
+/*
+ * OK, so the subprocces failed to bind() the socket.  This 
may mean
+ * that another daemon was starting at the same time and 
succeeded
+ * with its bind().  So we'll try connecting again, but this 
time
+ * without spawning the daemon.
+ */
+spawnDaemon = false;
+goto retry;
+}
 
-if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0) {
-virReportSystemError(errno, _("Failed to connect socket to '%s'"),
- path);
-goto error;
-}
+if (listen(passfd, 0) < 0) {
+virReportSystemError(errno, "%s",
+ _("Failed to listen on socket that's 
about "
+   "to be passed to the daemon

[libvirt] [PATCH v2] Fix connection to already running session libvirtd

2014-08-29 Thread Christophe Fergeau
Since 1b807f92, connecting with virsh to an already running session
libvirtd fails with:
$ virsh list --all
error: failed to connect to the hypervisor
error: no valid connection
error: Failed to connect socket to
'/run/user/1000/libvirt/libvirt-sock': Transport endpoint is already
connected

This is caused by a logic error in virNetSocketNewConnectUnix: even if
the connection to the daemon socket succeeded, we still try to spawn the
daemon and then connect to it.
This commit changes the logic to not try to spawn libvirtd if we
successfully connected to its socket.

Most of this commit is whitespace changes; use -w to look at it.
---
Same patch as the previous but with -w for easier review

Christophe


 src/rpc/virnetsocket.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/rpc/virnetsocket.c b/src/rpc/virnetsocket.c
index 79258ef..9780e17 100644
--- a/src/rpc/virnetsocket.c
+++ b/src/rpc/virnetsocket.c
@@ -573,11 +573,12 @@ int virNetSocketNewConnectUNIX(const char *path,
 remoteAddr.data.un.sun_path[0] = '\0';
 
  retry:
-if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0 && !spawnDaemon) {
+if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0) {
+if (!spawnDaemon) {
 virReportSystemError(errno, _("Failed to connect socket to '%s'"),
  path);
 goto error;
-} else if (spawnDaemon) {
+} else {
 int status = 0;
 pid_t pid = 0;
 
@@ -633,6 +634,7 @@ int virNetSocketNewConnectUNIX(const char *path,
 if (virNetSocketForkDaemon(binary, passfd) < 0)
 goto error;
 }
+}
 
 localAddr.len = sizeof(localAddr.data);
 if (getsockname(fd, &localAddr.data.sa, &localAddr.len) < 0) {
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH] Fix connection to already running session libvirtd

2014-08-29 Thread Christophe Fergeau
Hey,

On Fri, Aug 29, 2014 at 11:08:53AM +0200, Martin Kletzander wrote:
> Although my git was a bit confused by the diff included in the commit
> message.  I'd suggest just saying that most of the commit is a
> whitespace change; people can see that using '-w' themselves.  That
> toggle should even work with format-patch, but I'm not sure that
> applies cleanly all the time.

Ah, I did not know git format-patch has such a switch. This produces a
readable patch, but it's indeed not possible to apply it cleanly. I
could have sent the readable version in reply to the full version
though.

I've removed the diff from the commit log, but on second thought I
forgot to remove one 'always true' test so I'll send a v2.

Christophe


--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

[libvirt] [PATCH] conf: Check migration_host is valid or not during libvirt restarts

2014-08-29 Thread Chen Fan
if the user specifies an invalid string as the migration hostname,
like setting: migration_host = "XXX", libvirt should check
it and return an error during libvirt restart.

Signed-off-by: Chen Fan 
---
 src/qemu/qemu_conf.c | 40 
 1 file changed, 40 insertions(+)

diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index e2ec54f..450ac5b 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -33,6 +33,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "virerror.h"
 #include "qemu_conf.h"
@@ -650,6 +651,45 @@ int virQEMUDriverConfigLoadFile(virQEMUDriverConfigPtr cfg,
 GET_VALUE_LONG("seccomp_sandbox", cfg->seccompSandbox);
 
 GET_VALUE_STR("migration_host", cfg->migrateHost);
+if (cfg->migrateHost) {
+struct addrinfo hints;
+struct addrinfo *res;
+
+memset(&hints, 0, sizeof(hints));
+hints.ai_flags = AI_ADDRCONFIG;
+hints.ai_family = AF_UNSPEC;
+
+if (getaddrinfo(cfg->migrateHost, NULL, &hints, &res) != 0) {
+virReportError(VIR_ERR_CONF_SYNTAX,
+   _("migration_host: '%s' is not a valid hostname"),
+   cfg->migrateHost);
+goto cleanup;
+}
+
+if (res == NULL) {
+virReportError(VIR_ERR_CONF_SYNTAX,
+   _("No IP address for host '%s' found"),
+   cfg->migrateHost);
+goto cleanup;
+}
+
+freeaddrinfo(res);
+
+if (STRPREFIX(cfg->migrateHost, "localhost")) {
+virReportError(VIR_ERR_CONF_SYNTAX, "%s",
+   _("setting migration_host to 'localhost' is not 
allowed"));
+goto cleanup;
+}
+
+if (STREQ(cfg->migrateHost, "127.0.0.1") ||
+STREQ(cfg->migrateHost, "::1")) {
+virReportError(VIR_ERR_CONF_SYNTAX, "%s",
+   _("setting migration_host to '127.0.0.1' or '::1' "
+ "is not allowed"));
+goto cleanup;
+}
+}
+
 GET_VALUE_STR("migration_address", cfg->migrationAddress);
 
 GET_VALUE_BOOL("log_timestamp", cfg->logTimestamp);
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] Entering freeze for libvirt-1.2.8

2014-08-29 Thread Daniel Veillard
On Wed, Aug 27, 2014 at 08:45:29PM +0200, Richard Weinberger wrote:
> On Wed, Aug 27, 2014 at 9:18 AM, Daniel Veillard  wrote:
> >   So I tagged 1.2.8-rc1 in git and made tarball and signed rpms
> 
> Can you please sign the tarball too?

  Okay, I went the simplest route of creating an .asc for the tarball;
my key is on the MIT server:

user: "Daniel Veillard (Red Hat work email) "
1024-bit DSA key, ID DE95BC1F, created 2000-05-31

  I also added .asc files for the latest 1.2.x releases alongside the tarballs.

Daniel

-- 
Daniel Veillard  | Open Source and Standards, Red Hat
veill...@redhat.com  | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
http://veillard.com/ | virtualization library  http://libvirt.org/

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH] Fix connection to already running session libvirtd

2014-08-29 Thread Martin Kletzander

On Fri, Aug 29, 2014 at 10:37:21AM +0200, Christophe Fergeau wrote:

Since 1b807f92, connecting with virsh to an already running session
libvirtd fails with:
$ virsh list --all
error: failed to connect to the hypervisor
error: no valid connection
error: Failed to connect socket to
'/run/user/1000/libvirt/libvirt-sock': Transport endpoint is already
connected

This is caused by a logic error in virNetSocketNewConnectUnix: even if
the connection to the daemon socket succeeded, we still try to spawn the
daemon and then connect to it.
This commit changes the logic to not try to spawn libvirtd if we
successfully connected to its socket.



Thanks for trying that, that was a flaw in my condition-optimization
mechanism, I guess.

Although my git was a bit confused by the diff included in the commit
message.  I'd suggest just saying that most of the commit is a
whitespace change; people can see that using '-w' themselves.  That
toggle should even work with format-patch, but I'm not sure that
applies cleanly all the time.

ACK with the commit cleaned up and safe for 1.2.8.

Thank you,
Martin


--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

[libvirt] [PATCH] Fix connection to already running session libvirtd

2014-08-29 Thread Christophe Fergeau
Since 1b807f92, connecting with virsh to an already running session
libvirtd fails with:
$ virsh list --all
error: failed to connect to the hypervisor
error: no valid connection
error: Failed to connect socket to
'/run/user/1000/libvirt/libvirt-sock': Transport endpoint is already
connected

This is caused by a logic error in virNetSocketNewConnectUnix: even if
the connection to the daemon socket succeeded, we still try to spawn the
daemon and then connect to it.
This commit changes the logic to not try to spawn libvirtd if we
successfully connected to its socket.

With whitespace changes removed, this patch becomes just this:

diff --git a/src/rpc/virnetsocket.c b/src/rpc/virnetsocket.c
index f913365..79540b3 100644
--- a/src/rpc/virnetsocket.c
+++ b/src/rpc/virnetsocket.c
@@ -574,7 +574,8 @@ int virNetSocketNewConnectUNIX(const char *path,
 remoteAddr.data.un.sun_path[0] = '\0';

  retry:
-if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0 &&
 !spawnDaemon) {
+if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0) {
+if (!spawnDaemon) {
 virReportSystemError(errno, _("Failed to connect socket to
'%s'"),
  path);
 goto error;
@@ -634,6 +635,7 @@ int virNetSocketNewConnectUNIX(const char *path,
 if (virNetSocketForkDaemon(binary, passfd) < 0)
 goto error;
 }
+}

 localAddr.len = sizeof(localAddr.data);
 if (getsockname(fd, &localAddr.data.sa, &localAddr.len) < 0) {
---
 src/rpc/virnetsocket.c | 102 +
 1 file changed, 52 insertions(+), 50 deletions(-)

diff --git a/src/rpc/virnetsocket.c b/src/rpc/virnetsocket.c
index 79258ef..8fc5d80 100644
--- a/src/rpc/virnetsocket.c
+++ b/src/rpc/virnetsocket.c
@@ -573,65 +573,67 @@ int virNetSocketNewConnectUNIX(const char *path,
 remoteAddr.data.un.sun_path[0] = '\0';
 
  retry:
-if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0 && !spawnDaemon) {
-virReportSystemError(errno, _("Failed to connect socket to '%s'"),
- path);
-goto error;
-} else if (spawnDaemon) {
-int status = 0;
-pid_t pid = 0;
-
-if ((passfd = socket(PF_UNIX, SOCK_STREAM, 0)) < 0) {
-virReportSystemError(errno, "%s", _("Failed to create socket"));
+if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0) {
+if (!spawnDaemon) {
+virReportSystemError(errno, _("Failed to connect socket to '%s'"),
+ path);
 goto error;
-}
+} else if (spawnDaemon) {
+int status = 0;
+pid_t pid = 0;
 
-/*
- * We have to fork() here, because umask() is set
- * per-process, chmod() is racy and fchmod() has undefined
- * behaviour on sockets according to POSIX, so it doesn't
- * work outside Linux.
- */
-if ((pid = virFork()) < 0)
-goto error;
+if ((passfd = socket(PF_UNIX, SOCK_STREAM, 0)) < 0) {
+virReportSystemError(errno, "%s", _("Failed to create 
socket"));
+goto error;
+}
 
-if (pid == 0) {
-umask(0077);
-if (bind(passfd, &remoteAddr.data.sa, remoteAddr.len) < 0)
-_exit(EXIT_FAILURE);
+/*
+ * We have to fork() here, because umask() is set
+ * per-process, chmod() is racy and fchmod() has undefined
+ * behaviour on sockets according to POSIX, so it doesn't
+ * work outside Linux.
+ */
+if ((pid = virFork()) < 0)
+goto error;
 
-_exit(EXIT_SUCCESS);
-}
+if (pid == 0) {
+umask(0077);
+if (bind(passfd, &remoteAddr.data.sa, remoteAddr.len) < 0)
+_exit(EXIT_FAILURE);
 
-if (virProcessWait(pid, &status, false) < 0)
-goto error;
+_exit(EXIT_SUCCESS);
+}
 
-if (status != EXIT_SUCCESS) {
-/*
- * OK, so the subprocces failed to bind() the socket.  This may 
mean
- * that another daemon was starting at the same time and succeeded
- * with its bind().  So we'll try connecting again, but this time
- * without spawning the daemon.
- */
-spawnDaemon = false;
-goto retry;
-}
+if (virProcessWait(pid, &status, false) < 0)
+goto error;
 
-if (listen(passfd, 0) < 0) {
-virReportSystemError(errno, "%s",
- _("Failed to listen on socket that's about "
-   "to be passed to the daemon"));
-goto error;
-}
+if (status != EXIT_SUCCESS) {
+/*
+ * OK, so the subprocces failed t

[libvirt] [PATCH 1/2] qemu: implement block group for bulk stats.

2014-08-29 Thread Li Wei
This patch adds the block group for bulk stats.
The following typed parameters are used for each block stats entry:
block.count - number of block devices in this domain
block.0.name - name of the block device
block.0.rd_bytes - number of read bytes
block.0.rd_operations - number of read requests
block.0.rd_total_time - total time spent on cache reads in nanoseconds
block.0.wr_bytes - number of write bytes
block.0.wr_operations - number of write requests
block.0.wr_total_time - total time spent on cache writes in nanoseconds
block.0.flush_operations - total flush requests
block.0.flush_total_time - total time spent on cache flushing in
nanoseconds

Signed-off-by: Li Wei 
---
 include/libvirt/libvirt.h.in |   1 +
 src/libvirt.c|  13 
 src/qemu/qemu_driver.c   |  31 
 src/qemu/qemu_monitor.c  |  23 ++
 src/qemu/qemu_monitor.h  |   5 ++
 src/qemu/qemu_monitor_json.c | 170 +++
 src/qemu/qemu_monitor_json.h |   5 ++
 7 files changed, 248 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 9358314..36c4fec 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2511,6 +2511,7 @@ struct _virDomainStatsRecord {
 
 typedef enum {
 VIR_DOMAIN_STATS_STATE = (1 << 0), /* return domain state */
+VIR_DOMAIN_STATS_BLOCK = (1 << 1), /* return block stats */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/libvirt.c b/src/libvirt.c
index 5d8f01c..ca0d071 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -21632,6 +21632,19 @@ virConnectGetAllDomainStats(virConnectPtr conn,
  * "state.reason" - reason for entering given state, returned as int from
  *  virDomain*Reason enum corresponding to given state.
  *
+ * VIR_DOMAIN_STATS_BLOCK: Return block device stats. The typed parameter keys
+ * are in this format:
+ * "block.count" - number of block devices in this domain
+ * "block.0.name" - name of the block device
+ * "block.0.rd_bytes" - number of read bytes
+ * "block.0.rd_operations" - number of read requests
+ * "block.0.rd_total_time" -  total time spend on cache reads in nano-seconds
+ * "block.0.wr_bytes" - number of write bytes
+ * "block.0.wr_operations" - number of write requests
+ * "block.0.wr_total_time" - total time spend on cache write in nano-seconds
+ * "block.0.flush_operations" - total flush requests
+ * "block.0.flush_total_time" - total time spend on cache flushing in 
nano-seconds
+ *
  * Using 0 for @stats returns all stats groups supported by the given
  * hypervisor.
  *
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 239a300..ef4d3be 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17221,6 +17221,36 @@ qemuDomainGetStatsState(virDomainObjPtr dom,
 }
 
 
+static int
+qemuDomainGetStatsBlock(virDomainObjPtr dom,
+virDomainStatsRecordPtr record,
+int *maxparams,
+unsigned int flags)
+{
+int ret;
+qemuDomainObjPrivatePtr priv = dom->privateData;
+
+/* only valid for active domain, ignore inactive ones silently */
+if (!virDomainObjIsActive(dom))
+return 0;
+
+if (qemuDomainObjBeginJob(qemu_driver, dom, QEMU_JOB_QUERY) < 0)
+return -1;
+
+qemuDomainObjEnterMonitor(qemu_driver, dom);
+ret = qemuMonitorDomainGetStatsBlock(priv->mon,
+ record,
+ maxparams,
+ flags);
+qemuDomainObjExitMonitor(qemu_driver, dom);
+
+if (qemuDomainObjEndJob(qemu_driver, dom) < 0)
+return -1;
+
+return ret;
+}
+
+
 typedef int
 (*qemuDomainGetStatsFunc)(virDomainObjPtr dom,
   virDomainStatsRecordPtr record,
@@ -17234,6 +17264,7 @@ struct qemuDomainGetStatsWorker {
 
 static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE},
+{ qemuDomainGetStatsBlock, VIR_DOMAIN_STATS_BLOCK},
 { NULL, 0 }
 };
 
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 5b2952a..83d1dc3 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -4071,3 +4071,26 @@ qemuMonitorRTCResetReinjection(qemuMonitorPtr mon)
 
 return qemuMonitorJSONRTCResetReinjection(mon);
 }
+
+int
+qemuMonitorDomainGetStatsBlock(qemuMonitorPtr mon,
+   virDomainStatsRecordPtr record,
+   int *maxparams,
+   unsigned int flags)
+{
+VIR_DEBUG("mon=%p", mon);
+
+if (!mon) {
+virReportError(VIR_ERR_INVALID_ARG, "%s",
+   _("monitor must not be NULL"));
+return -1;
+}
+
+if (!mon->json) {
+virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+   _("JSON monitor is required"));
+retu

[libvirt] [PATCH 2/2] virsh: add block group for bulk stats.

2014-08-29 Thread Li Wei
Add "--block" option to "domstats" command for querying block stats.

Signed-off-by: Li Wei 
---
 tools/virsh-domain-monitor.c | 7 +++
 tools/virsh.pod  | 4 ++--
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/tools/virsh-domain-monitor.c b/tools/virsh-domain-monitor.c
index 055d8d2..67efd61 100644
--- a/tools/virsh-domain-monitor.c
+++ b/tools/virsh-domain-monitor.c
@@ -1972,6 +1972,10 @@ static const vshCmdOptDef opts_domstats[] = {
  .type = VSH_OT_BOOL,
  .help = N_("report domain state"),
 },
+{.name = "block",
+ .type = VSH_OT_BOOL,
+ .help = N_("report block device stats"),
+},
 {.name = "list-active",
  .type = VSH_OT_BOOL,
  .help = N_("list only active domains"),
@@ -2063,6 +2067,9 @@ cmdDomstats(vshControl *ctl, const vshCmd *cmd)
 if (vshCommandOptBool(cmd, "state"))
 stats |= VIR_DOMAIN_STATS_STATE;
 
+if (vshCommandOptBool(cmd, "block"))
+stats |= VIR_DOMAIN_STATS_BLOCK;
+
 if (vshCommandOptBool(cmd, "list-active"))
 flags |= VIR_CONNECT_GET_ALL_DOMAINS_STATS_ACTIVE;
 
diff --git a/tools/virsh.pod b/tools/virsh.pod
index ea9267e..7d57f6b 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -813,7 +813,7 @@ that require a block device name (such as I or
 I for disk snapshots) will accept either target
 or unique source names printed by this command.
 
-=item B [I<--raw>] [I<--enforce>] [I<--state>]
+=item B [I<--raw>] [I<--enforce>] [I<--state>] [I<--block>]
 [[I<--list-active>] [I<--list-inactive>] [I<--list-persistent>]
 [I<--list-transient>] [I<--list-running>] [I<--list-paused>]
 [I<--list-shutoff>] [I<--list-other>]] | [I ...]
@@ -831,7 +831,7 @@ behavior use the I<--raw> flag.
 
 The individual statistics groups are selectable via specific flags. By
 default all supported statistics groups are returned. Supported
-statistics groups flags are: I<--state>.
+statistics groups flags are: I<--state>, I<--block>.
 
 Selecting a specific statistics groups doesn't guarantee that the
 daemon supports the selected group of stats. Flag I<--enforce>
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
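
For context, a minimal client-side sketch of how the block group added by
this series would be consumed through the bulk stats API (1.2.8-era
signatures; error handling trimmed, compile with -lvirt):

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpenReadOnly(NULL);
    virDomainStatsRecordPtr *records = NULL;
    int nrecords, i, j;

    if (!conn)
        return 1;

    /* Passing 0 for @stats would return every supported group; here we ask
     * only for the block group this series introduces. */
    nrecords = virConnectGetAllDomainStats(conn, VIR_DOMAIN_STATS_BLOCK,
                                           &records, 0);

    for (i = 0; i < nrecords; i++) {
        printf("domain: %s\n", virDomainGetName(records[i]->dom));
        for (j = 0; j < records[i]->nparams; j++)
            printf("  %s\n", records[i]->params[j].field); /* e.g. block.0.rd_bytes */
    }

    if (nrecords >= 0)
        virDomainStatsRecordListFree(records);
    virConnectClose(conn);
    return 0;
}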


[libvirt] [PATCH 0/2] bulk api: implement block group

2014-08-29 Thread Li Wei
This patchset implements the block group for bulk stats; currently only
the JSON monitor is supported.

Li Wei (2):
  qemu: implement block group for bulk stats.
  virsh: add block group for bulk stats.

 include/libvirt/libvirt.h.in |   1 +
 src/libvirt.c|  13 
 src/qemu/qemu_driver.c   |  31 
 src/qemu/qemu_monitor.c  |  23 ++
 src/qemu/qemu_monitor.h  |   5 ++
 src/qemu/qemu_monitor_json.c | 170 +++
 src/qemu/qemu_monitor_json.h |   5 ++
 tools/virsh-domain-monitor.c |   7 ++
 tools/virsh.pod  |   4 +-
 9 files changed, 257 insertions(+), 2 deletions(-)

-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 11/11] qemu: bulk stats: implement blockinfo group

2014-08-29 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_BLOCK_INFO
group of statistics.

This is different from the VIR_DOMAIN_STATS_BLOCK
group because it represents information about the
block device itself.
Most notably, this group exports the allocation information,
which is used by monitoring applications to detect
space exhaustion on the block devices.

Signed-off-by: Francesco Romani 
---
 include/libvirt/libvirt.h.in |  1 +
 src/qemu/qemu_driver.c   | 44 
 2 files changed, 45 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 372e098..c0b695d 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2516,6 +2516,7 @@ typedef enum {
 VIR_DOMAIN_STATS_VCPU = (1 << 3), /* return domain virtual CPU info */
 VIR_DOMAIN_STATS_INTERFACE = (1 << 4), /* return domain interfaces info */
 VIR_DOMAIN_STATS_BLOCK = (1 << 5), /* return domain block info */
+VIR_DOMAIN_STATS_BLOCK_INFO = (1 << 6), /* return domain block layout */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 344b02e..564f1e1 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17604,6 +17604,49 @@ qemuDomainGetStatsBlock(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 
 #undef QEMU_ADD_BLOCK_PARAM
 
+#define QEMU_ADD_BLOCK_INFO_PARAM(RECORD, MAXPARAMS, BLOCK, NAME, VALUE) \
+do { \
+char param_name[NAME_MAX]; \
+snprintf(param_name, NAME_MAX, "block.%s.%s", BLOCK, NAME); \
+if (virTypedParamsAddULLong(&RECORD->params, \
+&RECORD->nparams, \
+MAXPARAMS, \
+param_name, \
+VALUE) < 0) \
+return -1; \
+} while (0)
+
+static int
+qemuDomainGetStatsBlockInfo(virConnectPtr conn,
+virDomainObjPtr dom,
+virDomainStatsRecordPtr record,
+int *maxparams,
+unsigned int privflags ATTRIBUTE_UNUSED)
+{
+virQEMUDriverPtr driver = conn->privateData;
+virDomainBlockInfo info;
+size_t i;
+
+for (i = 0; i < dom->def->ndisks; i++) {
+memset(&info, 0, sizeof(info));
+
+if (qemuDiskGetBlockInfo(driver, dom, dom->def->disks[i],
+ dom->def->disks[i]->dst, &info) < 0)
+continue;
+
+QEMU_ADD_BLOCK_INFO_PARAM(record, maxparams, dom->def->disks[i]->dst,
+ "capacity", info.capacity);
+QEMU_ADD_BLOCK_INFO_PARAM(record, maxparams, dom->def->disks[i]->dst,
+ "allocation", info.allocation);
+QEMU_ADD_BLOCK_INFO_PARAM(record, maxparams, dom->def->disks[i]->dst,
+ "physical", info.physical);
+}
+
+return 0;
+}
+
+#undef QEMU_ADD_BLOCK_PARAM
+
 
 typedef int
 (*qemuDomainGetStatsFunc)(virConnectPtr conn,
@@ -17624,6 +17667,7 @@ static struct qemuDomainGetStatsWorker 
qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsVcpu, VIR_DOMAIN_STATS_VCPU },
 { qemuDomainGetStatsInterface, VIR_DOMAIN_STATS_INTERFACE },
 { qemuDomainGetStatsBlock, VIR_DOMAIN_STATS_BLOCK },
+{ qemuDomainGetStatsBlockInfo, VIR_DOMAIN_STATS_BLOCK_INFO },
 { NULL, 0 }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 09/11] qemu: bulk stats: implement interface group

2014-08-29 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_INTERFACE
group of statistics.

Signed-off-by: Francesco Romani 
---
 include/libvirt/libvirt.h.in |  1 +
 src/qemu/qemu_driver.c   | 53 
 2 files changed, 54 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 86ef18b..8c15583 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2514,6 +2514,7 @@ typedef enum {
 VIR_DOMAIN_STATS_CPU_TOTAL = (1 << 1), /* return domain CPU info */
 VIR_DOMAIN_STATS_BALLOON = (1 << 2), /* return domain balloon info */
 VIR_DOMAIN_STATS_VCPU = (1 << 3), /* return domain virtual CPU info */
+VIR_DOMAIN_STATS_INTERFACE = (1 << 4), /* return domain interfaces info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 527a6b4..818fcbc 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17500,6 +17500,58 @@ qemuDomainGetStatsVcpu(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 }
 
 
+#define QEMU_ADD_NET_PARAM(RECORD, MAXPARAMS, IFNAME, NAME, VALUE) \
+do { \
+char param_name[NAME_MAX]; \
+snprintf(param_name, NAME_MAX, "net.%s.%s", IFNAME, NAME); \
+if (virTypedParamsAddLLong(&RECORD->params, \
+   &RECORD->nparams, \
+   MAXPARAMS, \
+   param_name, \
+   VALUE) < 0) \
+return -1; \
+} while (0)
+
+static int
+qemuDomainGetStatsInterface(virConnectPtr conn ATTRIBUTE_UNUSED,
+virDomainObjPtr dom,
+virDomainStatsRecordPtr record,
+int *maxparams,
+unsigned int privflags ATTRIBUTE_UNUSED)
+{
+size_t i;
+struct _virDomainInterfaceStats tmp;
+
+/* Check the path is one of the domain's network interfaces. */
+for (i = 0; i < dom->def->nnets; i++) {
+memset(&tmp, 0, sizeof(tmp));
+
+if (virNetInterfaceStats(dom->def->nets[i]->ifname, &tmp) < 0)
+continue;
+
+QEMU_ADD_NET_PARAM(record, maxparams, dom->def->nets[i]->ifname,
+   "rx.bytes", tmp.rx_bytes);
+QEMU_ADD_NET_PARAM(record, maxparams, dom->def->nets[i]->ifname,
+   "rx.pkts", tmp.rx_packets);
+QEMU_ADD_NET_PARAM(record, maxparams, dom->def->nets[i]->ifname,
+   "rx.errs", tmp.rx_errs);
+QEMU_ADD_NET_PARAM(record, maxparams, dom->def->nets[i]->ifname,
+   "rx.drop", tmp.rx_drop);
+QEMU_ADD_NET_PARAM(record, maxparams, dom->def->nets[i]->ifname,
+   "tx.bytes", tmp.tx_bytes);
+QEMU_ADD_NET_PARAM(record, maxparams, dom->def->nets[i]->ifname,
+   "tx.pkts", tmp.tx_packets);
+QEMU_ADD_NET_PARAM(record, maxparams, dom->def->nets[i]->ifname,
+   "tx.errs", tmp.tx_errs);
+QEMU_ADD_NET_PARAM(record, maxparams, dom->def->nets[i]->ifname,
+   "tx.drop", tmp.tx_drop);
+}
+
+return 0;
+}
+
+#undef QEMU_ADD_NET_PARAM
+
 typedef int
 (*qemuDomainGetStatsFunc)(virConnectPtr conn,
   virDomainObjPtr dom,
@@ -17517,6 +17569,7 @@ static struct qemuDomainGetStatsWorker 
qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsCpu, VIR_DOMAIN_STATS_CPU_TOTAL },
 { qemuDomainGetStatsBalloon, VIR_DOMAIN_STATS_BALLOON },
 { qemuDomainGetStatsVcpu, VIR_DOMAIN_STATS_VCPU },
+{ qemuDomainGetStatsInterface, VIR_DOMAIN_STATS_INTERFACE },
 { NULL, 0 }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 08/11] qemu: bulk stats: implement VCPU group

2014-08-29 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_VCPU
group of statistics.

Signed-off-by: Francesco Romani 
---
 include/libvirt/libvirt.h.in |  1 +
 src/qemu/qemu_driver.c   | 72 
 2 files changed, 73 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 78eb9b8..86ef18b 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2513,6 +2513,7 @@ typedef enum {
 VIR_DOMAIN_STATS_STATE = (1 << 0), /* return domain state */
 VIR_DOMAIN_STATS_CPU_TOTAL = (1 << 1), /* return domain CPU info */
 VIR_DOMAIN_STATS_BALLOON = (1 << 2), /* return domain balloon info */
+VIR_DOMAIN_STATS_VCPU = (1 << 3), /* return domain virtual CPU info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 9825f61..527a6b4 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17429,6 +17429,77 @@ qemuDomainGetStatsBalloon(virConnectPtr conn,
 return 0;
 }
 
+
+static int
+qemuDomainGetStatsVcpu(virConnectPtr conn ATTRIBUTE_UNUSED,
+   virDomainObjPtr dom,
+   virDomainStatsRecordPtr record,
+   int *maxparams,
+   unsigned int privflags ATTRIBUTE_UNUSED)
+{
+size_t i;
+int ret = -1;
+char param_name[NAME_MAX];
+virVcpuInfoPtr cpuinfo = NULL;
+
+if (virTypedParamsAddInt(&record->params,
+ &record->nparams,
+ maxparams,
+ "vcpu.current",
+ dom->def->vcpus) < 0)
+return -1;
+
+if (virTypedParamsAddInt(&record->params,
+ &record->nparams,
+ maxparams,
+ "vcpu.maximum",
+ dom->def->maxvcpus) < 0)
+return -1;
+
+if (VIR_ALLOC_N(cpuinfo, dom->def->vcpus) < 0)
+return -1;
+
+if ((ret = qemuDomainHelperGetVcpus(dom,
+cpuinfo,
+dom->def->vcpus,
+NULL,
+0)) < 0)
+goto cleanup;
+
+for (i = 0; i < dom->def->vcpus; i++) {
+snprintf(param_name, NAME_MAX, "vcpu.%u.state", cpuinfo[i].number);
+if (virTypedParamsAddInt(&record->params,
+ &record->nparams,
+ maxparams,
+ param_name,
+ cpuinfo[i].state) < 0)
+goto cleanup;
+
+snprintf(param_name, NAME_MAX, "vcpu.%u.time", cpuinfo[i].number);
+if (virTypedParamsAddULLong(&record->params,
+&record->nparams,
+maxparams,
+param_name,
+cpuinfo[i].cpuTime) < 0)
+goto cleanup;
+
+snprintf(param_name, NAME_MAX, "vcpu.%u.cpu", cpuinfo[i].number);
+if (virTypedParamsAddInt(&record->params,
+ &record->nparams,
+ maxparams,
+ param_name,
+ cpuinfo[i].cpu) < 0)
+goto cleanup;
+}
+
+ret = 0;
+
+ cleanup:
+VIR_FREE(cpuinfo);
+return ret;
+}
+
+
 typedef int
 (*qemuDomainGetStatsFunc)(virConnectPtr conn,
   virDomainObjPtr dom,
@@ -17445,6 +17516,7 @@ static struct qemuDomainGetStatsWorker 
qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE},
 { qemuDomainGetStatsCpu, VIR_DOMAIN_STATS_CPU_TOTAL },
 { qemuDomainGetStatsBalloon, VIR_DOMAIN_STATS_BALLOON },
+{ qemuDomainGetStatsVcpu, VIR_DOMAIN_STATS_VCPU },
 { NULL, 0 }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 07/11] qemu: bulk stats: implement balloon group

2014-08-29 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_BALLOON
group of statistics.

Signed-off-by: Francesco Romani 
---
 include/libvirt/libvirt.h.in |  1 +
 src/qemu/qemu_driver.c   | 32 
 2 files changed, 33 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 992b124..78eb9b8 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2512,6 +2512,7 @@ struct _virDomainStatsRecord {
 typedef enum {
 VIR_DOMAIN_STATS_STATE = (1 << 0), /* return domain state */
 VIR_DOMAIN_STATS_CPU_TOTAL = (1 << 1), /* return domain CPU info */
+VIR_DOMAIN_STATS_BALLOON = (1 << 2), /* return domain balloon info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 7ffd052..9825f61 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17397,6 +17397,37 @@ qemuDomainGetStatsCpu(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 return 0;
 }
 
+static int
+qemuDomainGetStatsBalloon(virConnectPtr conn,
+  virDomainObjPtr dom,
+  virDomainStatsRecordPtr record,
+  int *maxparams,
+  unsigned int privflags ATTRIBUTE_UNUSED)
+{
+virQEMUDriverPtr driver = conn->privateData;
+unsigned long cur_balloon = 0;
+int err = 0;
+
+err = qemuDomainGetBalloonMemory(driver, dom, &cur_balloon);
+if (err)
+return -1;
+
+if (virTypedParamsAddULLong(&record->params,
+&record->nparams,
+maxparams,
+"balloon.current",
+cur_balloon) < 0)
+return -1;
+
+if (virTypedParamsAddULLong(&record->params,
+&record->nparams,
+maxparams,
+"balloon.maximum",
+dom->def->mem.max_balloon) < 0)
+return -1;
+
+return 0;
+}
 
 typedef int
 (*qemuDomainGetStatsFunc)(virConnectPtr conn,
@@ -17413,6 +17444,7 @@ struct qemuDomainGetStatsWorker {
 static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE},
 { qemuDomainGetStatsCpu, VIR_DOMAIN_STATS_CPU_TOTAL },
+{ qemuDomainGetStatsBalloon, VIR_DOMAIN_STATS_BALLOON },
 { NULL, 0 }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 06/11] qemu: bulk stats: implement CPU stats group

2014-08-29 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_CPU_TOTAL
group of statistics.

Signed-off-by: Francesco Romani 
---
 include/libvirt/libvirt.h.in |  1 +
 src/qemu/qemu_driver.c   | 56 
 2 files changed, 57 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 9358314..992b124 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2511,6 +2511,7 @@ struct _virDomainStatsRecord {
 
 typedef enum {
 VIR_DOMAIN_STATS_STATE = (1 << 0), /* return domain state */
+VIR_DOMAIN_STATS_CPU_TOTAL = (1 << 1), /* return domain CPU info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index d4eda06..7ffd052 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -96,6 +96,7 @@
 #include "storage/storage_driver.h"
 #include "virhostdev.h"
 #include "domain_capabilities.h"
+#include "vircgroup.h"
 
 #define VIR_FROM_THIS VIR_FROM_QEMU
 
@@ -17343,6 +17344,60 @@ qemuDomainGetStatsState(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 return 0;
 }
 
+
+static int
+qemuDomainGetStatsCpu(virConnectPtr conn ATTRIBUTE_UNUSED,
+  virDomainObjPtr dom,
+  virDomainStatsRecordPtr record,
+  int *maxparams,
+  unsigned int privflags ATTRIBUTE_UNUSED)
+{
+qemuDomainObjPrivatePtr priv = dom->privateData;
+unsigned long long cpu_time = 0;
+unsigned long long user_time = 0;
+unsigned long long sys_time = 0;
+int ncpus = 0;
+
+ncpus = nodeGetCPUCount();
+
+if (virTypedParamsAddInt(&record->params,
+ &record->nparams,
+ maxparams,
+ "cpu.count",
+ ncpus) < 0)
+return -1;
+
+if (virCgroupGetCpuacctUsage(priv->cgroup, &cpu_time) < 0)
+return -1;
+
+if (virCgroupGetCpuacctStat(priv->cgroup, &user_time, &sys_time) < 0)
+return -1;
+
+if (virTypedParamsAddULLong(&record->params,
+&record->nparams,
+maxparams,
+"cpu.time",
+cpu_time) < 0)
+return -1;
+
+if (virTypedParamsAddULLong(&record->params,
+&record->nparams,
+maxparams,
+"cpu.user",
+user_time) < 0)
+return -1;
+
+if (virTypedParamsAddULLong(&record->params,
+&record->nparams,
+maxparams,
+"cpu.system",
+sys_time) < 0)
+return -1;
+
+return 0;
+}
+
+
 typedef int
 (*qemuDomainGetStatsFunc)(virConnectPtr conn,
   virDomainObjPtr dom,
@@ -17357,6 +17412,7 @@ struct qemuDomainGetStatsWorker {
 
 static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE},
+{ qemuDomainGetStatsCpu, VIR_DOMAIN_STATS_CPU_TOTAL },
 { NULL, 0 }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
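
As background for the virCgroupGetCpuacct* calls used above, a rough
illustration of the cgroup v1 files they ultimately read; the path layout
is only an example, and the real helpers resolve mount points and handle
errors properly:

#include <stdio.h>

/* cpuacct.usage holds total CPU time in nanoseconds; cpuacct.stat holds
 * "user <ticks>" and "system <ticks>" in USER_HZ units.
 * group_path would be something like
 * "/sys/fs/cgroup/cpuacct/machine/mydomain.libvirt-qemu" (layout varies). */
static int read_cpuacct_usage(const char *group_path,
                              unsigned long long *usage_ns)
{
    char path[256];
    FILE *fp;
    int ok;

    snprintf(path, sizeof(path), "%s/cpuacct.usage", group_path);
    if (!(fp = fopen(path, "r")))
        return -1;
    ok = (fscanf(fp, "%llu", usage_ns) == 1);
    fclose(fp);
    return ok ? 0 : -1;
}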


[libvirt] [PATCH 10/11] qemu: bulk stats: implement block group

2014-08-29 Thread Francesco Romani
This patch implements the VIR_DOMAIN_STATS_BLOCK
group of statistics.

Signed-off-by: Francesco Romani 
---
 include/libvirt/libvirt.h.in |  1 +
 src/qemu/qemu_driver.c   | 54 
 2 files changed, 55 insertions(+)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 8c15583..372e098 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2515,6 +2515,7 @@ typedef enum {
 VIR_DOMAIN_STATS_BALLOON = (1 << 2), /* return domain balloon info */
 VIR_DOMAIN_STATS_VCPU = (1 << 3), /* return domain virtual CPU info */
 VIR_DOMAIN_STATS_INTERFACE = (1 << 4), /* return domain interfaces info */
+VIR_DOMAIN_STATS_BLOCK = (1 << 5), /* return domain block info */
 } virDomainStatsTypes;
 
 typedef enum {
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 818fcbc..344b02e 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17552,6 +17552,59 @@ qemuDomainGetStatsInterface(virConnectPtr conn ATTRIBUTE_UNUSED,
 
 #undef QEMU_ADD_NET_PARAM
 
+#define QEMU_ADD_BLOCK_PARAM(RECORD, MAXPARAMS, BLOCK, NAME, VALUE) \
+do { \
+char param_name[NAME_MAX]; \
+snprintf(param_name, NAME_MAX, "block.%s.%s", BLOCK, NAME); \
+if (virTypedParamsAddLLong(&RECORD->params, \
+   &RECORD->nparams, \
+   MAXPARAMS, \
+   param_name, \
+   VALUE) < 0) \
+return -1; \
+} while (0)
+
+static int
+qemuDomainGetStatsBlock(virConnectPtr conn ATTRIBUTE_UNUSED,
+virDomainObjPtr dom,
+virDomainStatsRecordPtr record,
+int *maxparams,
+unsigned int privflags ATTRIBUTE_UNUSED)
+{
+virQEMUDriverPtr driver = conn->privateData;
+struct qemuBlockStats stats;
+size_t i;
+
+for (i = 0; i < dom->def->ndisks; i++) {
+memset(&stats, 0, sizeof(stats));
+
+if (qemuDiskGetBlockStats(driver, dom, dom->def->disks[i], &stats) < 0)
+continue;
+
+QEMU_ADD_BLOCK_PARAM(record, maxparams, dom->def->disks[i]->dst,
+ "rd.reqs", stats.rd_req);
+QEMU_ADD_BLOCK_PARAM(record, maxparams, dom->def->disks[i]->dst,
+ "rd.bytes", stats.rd_bytes);
+QEMU_ADD_BLOCK_PARAM(record, maxparams, dom->def->disks[i]->dst,
+ "rd.times", stats.rd_total_times);
+QEMU_ADD_BLOCK_PARAM(record, maxparams, dom->def->disks[i]->dst,
+ "wr.reqs", stats.wr_req);
+QEMU_ADD_BLOCK_PARAM(record, maxparams, dom->def->disks[i]->dst,
+ "wr.bytes", stats.wr_bytes);
+QEMU_ADD_BLOCK_PARAM(record, maxparams, dom->def->disks[i]->dst,
+ "wr.times", stats.wr_total_times);
+QEMU_ADD_BLOCK_PARAM(record, maxparams, dom->def->disks[i]->dst,
+ "fl.reqs", stats.flush_req);
+QEMU_ADD_BLOCK_PARAM(record, maxparams, dom->def->disks[i]->dst,
+ "fl.times", stats.flush_total_times);
+}
+
+return 0;
+}
+
+#undef QEMU_ADD_BLOCK_PARAM
+
+
 typedef int
 (*qemuDomainGetStatsFunc)(virConnectPtr conn,
   virDomainObjPtr dom,
@@ -17570,6 +17623,7 @@ static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
 { qemuDomainGetStatsBalloon, VIR_DOMAIN_STATS_BALLOON },
 { qemuDomainGetStatsVcpu, VIR_DOMAIN_STATS_VCPU },
 { qemuDomainGetStatsInterface, VIR_DOMAIN_STATS_INTERFACE },
+{ qemuDomainGetStatsBlock, VIR_DOMAIN_STATS_BLOCK },
 { NULL, 0 }
 };
 
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 03/11] qemu: add helper to get the block stats

2014-08-29 Thread Francesco Romani
Add a helper function to get the block stats of a disk.
This helper is meant to be used by the bulk stats API;
future patches may want to refactor qemuDomainGetBlock*
to make use of this function as well.
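
For reference, the intended call pattern from a stats worker inside
qemu_driver.c looks roughly like the fragment below (not self-contained:
driver, vm and i are the usual driver-internal objects, and the block
group patch later in the series is the real consumer):

    struct qemuBlockStats stats;

    memset(&stats, 0, sizeof(stats));
    /* queries the monitor under a QUERY job */
    if (qemuDiskGetBlockStats(driver, vm, vm->def->disks[i], &stats) < 0)
        continue;   /* skip disks we could not query */

    /* stats.rd_req, stats.wr_bytes, ... are now ready to be reported */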

Signed-off-by: Francesco Romani 
---
 src/qemu/qemu_driver.c | 59 ++
 1 file changed, 59 insertions(+)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 1842e60..e7dd5ed 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -136,6 +136,18 @@ VIR_LOG_INIT("qemu.qemu_driver");
 
 #define QEMU_NB_BANDWIDTH_PARAM 6
 
+struct qemuBlockStats {
+long long rd_req;
+long long rd_bytes;
+long long wr_req;
+long long wr_bytes;
+long long rd_total_times;
+long long wr_total_times;
+long long flush_req;
+long long flush_total_times;
+long long errs; /* meaningless for QEMU */
+};
+
 static void processWatchdogEvent(virQEMUDriverPtr driver,
  virDomainObjPtr vm,
  int action);
@@ -178,6 +190,12 @@ static int qemuDomainHelperGetVcpus(virDomainObjPtr vm,
 unsigned char *cpumaps,
 int maplen);
 
+static int qemuDiskGetBlockStats(virQEMUDriverPtr driver,
+ virDomainObjPtr vm,
+ virDomainDiskDefPtr disk,
+ struct qemuBlockStats *stats);
+
+
 virQEMUDriverPtr qemu_driver = NULL;
 
 
@@ -9672,6 +9690,47 @@ qemuDomainBlockStats(virDomainPtr dom,
 return ret;
 }
 
+
+
+static int
+qemuDiskGetBlockStats(virQEMUDriverPtr driver,
+  virDomainObjPtr vm,
+  virDomainDiskDefPtr disk,
+  struct qemuBlockStats *stats)
+{
+int ret = -1;
+qemuDomainObjPrivatePtr priv;
+
+if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
+goto cleanup;
+
+priv = vm->privateData;
+
+qemuDomainObjEnterMonitor(driver, vm);
+
+ret = qemuMonitorGetBlockStatsInfo(priv->mon,
+   disk->info.alias,
+   &stats->rd_req,
+   &stats->rd_bytes,
+   &stats->rd_total_times,
+   &stats->wr_req,
+   &stats->wr_bytes,
+   &stats->wr_total_times,
+   &stats->flush_req,
+   &stats->flush_total_times,
+   &stats->errs);
+
+qemuDomainObjExitMonitor(driver, vm);
+
+if (!qemuDomainObjEndJob(driver, vm))
+vm = NULL;
+
+ cleanup:
+if (vm)
+virObjectUnlock(vm);
+return ret;
+}
+
 static int
 qemuDomainBlockStatsFlags(virDomainPtr dom,
   const char *path,
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 05/11] qemu: bulk stats: pass connection to workers

2014-08-29 Thread Francesco Romani
Future patches which will implement more
bulk stats groups for QEMU will need to access
the connection object, so enrich the worker
prototype.

Signed-off-by: Francesco Romani 
---
 src/qemu/qemu_driver.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index ee0a576..d4eda06 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17320,7 +17320,8 @@ qemuConnectGetDomainCapabilities(virConnectPtr conn,
 
 
 static int
-qemuDomainGetStatsState(virDomainObjPtr dom,
+qemuDomainGetStatsState(virConnectPtr conn ATTRIBUTE_UNUSED,
+virDomainObjPtr dom,
 virDomainStatsRecordPtr record,
 int *maxparams,
 unsigned int privflags ATTRIBUTE_UNUSED)
@@ -17342,9 +17343,9 @@ qemuDomainGetStatsState(virDomainObjPtr dom,
 return 0;
 }
 
-
 typedef int
-(*qemuDomainGetStatsFunc)(virDomainObjPtr dom,
+(*qemuDomainGetStatsFunc)(virConnectPtr conn,
+  virDomainObjPtr dom,
   virDomainStatsRecordPtr record,
   int *maxparams,
   unsigned int flags);
@@ -17405,7 +17406,7 @@ qemuDomainGetStats(virConnectPtr conn,
 
 for (i = 0; qemuDomainGetStatsWorkers[i].func; i++) {
 if (stats & qemuDomainGetStatsWorkers[i].stats) {
-if (qemuDomainGetStatsWorkers[i].func(dom, tmp, &maxparams,
+if (qemuDomainGetStatsWorkers[i].func(conn, dom, tmp, &maxparams,
   flags) < 0)
 goto cleanup;
 }
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 04/11] qemu: extract helper to get block info

2014-08-29 Thread Francesco Romani
Extract qemuDiskGetBlockInfo helper.
This way, the very same code will be used both by the existing
qemuDomainGetBlockInfo API and by the new bulk stats API.

Signed-off-by: Francesco Romani 
---
 src/qemu/qemu_driver.c | 54 ++
 1 file changed, 37 insertions(+), 17 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index e7dd5ed..ee0a576 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -195,6 +195,12 @@ static int qemuDiskGetBlockStats(virQEMUDriverPtr driver,
  virDomainDiskDefPtr disk,
  struct qemuBlockStats *stats);
 
+static int qemuDiskGetBlockInfo(virQEMUDriverPtr driver,
+virDomainObjPtr vm,
+virDomainDiskDefPtr disk,
+const char *path,
+virDomainBlockInfoPtr info);
+
 
 virQEMUDriverPtr qemu_driver = NULL;
 
@@ -10451,29 +10457,16 @@ qemuDomainGetBlockInfo(virDomainPtr dom,
virDomainBlockInfoPtr info,
unsigned int flags)
 {
-virQEMUDriverPtr driver = dom->conn->privateData;
-virDomainObjPtr vm;
-int ret = -1;
-int fd = -1;
-off_t end;
-virStorageSourcePtr meta = NULL;
-virDomainDiskDefPtr disk = NULL;
-struct stat sb;
 int idx;
-int format;
-int activeFail = false;
-virQEMUDriverConfigPtr cfg = NULL;
-char *alias = NULL;
-char *buf = NULL;
-ssize_t len;
+int ret = -1;
+virQEMUDriverPtr driver = dom->conn->privateData;
+virDomainObjPtr vm = NULL;
 
 virCheckFlags(0, -1);
 
 if (!(vm = qemuDomObjFromDomain(dom)))
 return -1;
 
-cfg = virQEMUDriverGetConfig(driver);
-
 if (virDomainGetBlockInfoEnsureACL(dom->conn, vm->def) < 0)
 goto cleanup;
 
@@ -10489,7 +10482,34 @@ qemuDomainGetBlockInfo(virDomainPtr dom,
 goto cleanup;
 }
 
-disk = vm->def->disks[idx];
+ret = qemuDiskGetBlockInfo(driver, vm, vm->def->disks[idx], path, info);
+
+ cleanup:
+virObjectUnlock(vm);
+return ret;
+}
+
+
+static int
+qemuDiskGetBlockInfo(virQEMUDriverPtr driver,
+ virDomainObjPtr vm,
+ virDomainDiskDefPtr disk,
+ const char *path,
+ virDomainBlockInfoPtr info)
+{
+int ret = -1;
+int fd = -1;
+off_t end;
+virQEMUDriverConfigPtr cfg = NULL;
+virStorageSourcePtr meta = NULL;
+struct stat sb;
+int format;
+int activeFail = false;
+char *alias = NULL;
+char *buf = NULL;
+ssize_t len;
+
+cfg = virQEMUDriverGetConfig(driver);
 
 if (virStorageSourceIsLocalStorage(disk->src)) {
 if (!disk->src->path) {
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 01/11] qemu: extract helper to get the current balloon

2014-08-29 Thread Francesco Romani
Refactor the code to extract a helper method to get the current
balloon settings.

Signed-off-by: Francesco Romani 
---
 src/qemu/qemu_driver.c | 98 ++
 1 file changed, 60 insertions(+), 38 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 239a300..bbd16ed 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -168,6 +168,9 @@ static int qemuOpenFileAs(uid_t fallback_uid, gid_t fallback_gid,
   const char *path, int oflags,
   bool *needUnlink, bool *bypassSecurityDriver);
 
+static int qemuDomainGetBalloonMemory(virQEMUDriverPtr driver,
+  virDomainObjPtr vm,
+  unsigned long *memory);
 
 virQEMUDriverPtr qemu_driver = NULL;
 
@@ -2519,6 +2522,60 @@ static int qemuDomainSendKey(virDomainPtr domain,
 return ret;
 }
 
+static int qemuDomainGetBalloonMemory(virQEMUDriverPtr driver,
+  virDomainObjPtr vm,
+  unsigned long *memory)
+{
+int ret = -1;
+int err = 0;
+qemuDomainObjPrivatePtr priv = vm->privateData;
+
+if ((vm->def->memballoon != NULL) &&
+(vm->def->memballoon->model == VIR_DOMAIN_MEMBALLOON_MODEL_NONE)) {
+*memory = vm->def->mem.max_balloon;
+} else if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BALLOON_EVENT)) {
+*memory = vm->def->mem.cur_balloon;
+} else if (qemuDomainJobAllowed(priv, QEMU_JOB_QUERY)) {
+unsigned long long balloon;
+
+if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
+goto cleanup;
+if (!virDomainObjIsActive(vm))
+err = 0;
+else {
+qemuDomainObjEnterMonitor(driver, vm);
+err = qemuMonitorGetBalloonInfo(priv->mon, &balloon);
+qemuDomainObjExitMonitor(driver, vm);
+}
+if (!qemuDomainObjEndJob(driver, vm)) {
+vm = NULL;
+goto cleanup;
+}
+
+if (err < 0) {
+/* We couldn't get current memory allocation but that's not
+ * a show stopper; we wouldn't get it if there was a job
+ * active either
+ */
+*memory = vm->def->mem.cur_balloon;
+} else if (err == 0) {
+/* Balloon not supported, so maxmem is always the allocation */
+*memory = vm->def->mem.max_balloon;
+} else {
+*memory = balloon;
+}
+} else {
+*memory = vm->def->mem.cur_balloon;
+}
+
+ret = 0;
+
+ cleanup:
+if (vm)
+virObjectUnlock(vm);
+return ret;
+}
+
 static int qemuDomainGetInfo(virDomainPtr dom,
  virDomainInfoPtr info)
 {
@@ -2526,7 +2583,6 @@ static int qemuDomainGetInfo(virDomainPtr dom,
 virDomainObjPtr vm;
 int ret = -1;
 int err;
-unsigned long long balloon;
 
 if (!(vm = qemuDomObjFromDomain(dom)))
 goto cleanup;
@@ -2549,43 +2605,9 @@ static int qemuDomainGetInfo(virDomainPtr dom,
 info->maxMem = vm->def->mem.max_balloon;
 
 if (virDomainObjIsActive(vm)) {
-qemuDomainObjPrivatePtr priv = vm->privateData;
-
-if ((vm->def->memballoon != NULL) &&
-(vm->def->memballoon->model == VIR_DOMAIN_MEMBALLOON_MODEL_NONE)) {
-info->memory = vm->def->mem.max_balloon;
-} else if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BALLOON_EVENT)) {
-info->memory = vm->def->mem.cur_balloon;
-} else if (qemuDomainJobAllowed(priv, QEMU_JOB_QUERY)) {
-if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
-goto cleanup;
-if (!virDomainObjIsActive(vm))
-err = 0;
-else {
-qemuDomainObjEnterMonitor(driver, vm);
-err = qemuMonitorGetBalloonInfo(priv->mon, &balloon);
-qemuDomainObjExitMonitor(driver, vm);
-}
-if (!qemuDomainObjEndJob(driver, vm)) {
-vm = NULL;
-goto cleanup;
-}
-
-if (err < 0) {
-/* We couldn't get current memory allocation but that's not
- * a show stopper; we wouldn't get it if there was a job
- * active either
- */
-info->memory = vm->def->mem.cur_balloon;
-} else if (err == 0) {
-/* Balloon not supported, so maxmem is always the allocation */
-info->memory = vm->def->mem.max_balloon;
-} else {
-info->memory = balloon;
-}
-} else {
-info->memory = vm->def->mem.cur_balloon;
-}
+err = qemuDomainGetBalloonMemory(driver, vm, &info->memory);
+if (err)
+return err;
 } else {
 info->memory = 0;
 }
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

[libvirt] [PATCH 00/11] bulk stats: QEMU implementation

2014-08-29 Thread Francesco Romani
This patchset enhances the QEMU support for the new bulk stats API.
What is added is the equivalent of these APIs:

virDomainBlockInfo
virDomainGetInfo - for balloon stats
virDomainGetCPUStats
virDomainBlockStatsFlags
virDomainInterfaceStats
virDomainGetVcpusFlags
virDomainGetVcpus

This subset of APIs is the one oVirt relies on.
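
For illustration, a minimal client-side sketch of the single bulk call
that replaces the per-domain APIs listed above, assuming the
virConnectGetAllDomainStats() and virDomainStatsRecordListFree() entry
points of the bulk stats API and eliding error reporting:

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpenReadOnly("qemu:///system");
        virDomainStatsRecordPtr *records = NULL;
        /* request a few of the groups implemented by this series */
        unsigned int stats = VIR_DOMAIN_STATS_STATE |
                             VIR_DOMAIN_STATS_CPU_TOTAL |
                             VIR_DOMAIN_STATS_BALLOON |
                             VIR_DOMAIN_STATS_BLOCK;
        int nrecords, i, j;

        if (!conn)
            return 1;

        /* one call gathers the requested stats for every domain */
        nrecords = virConnectGetAllDomainStats(conn, stats, &records, 0);

        for (i = 0; i < nrecords; i++) {
            printf("domain: %s\n", virDomainGetName(records[i]->dom));
            for (j = 0; j < records[i]->nparams; j++) {
                virTypedParameterPtr p = &records[i]->params[j];
                /* each group reports flat "group.key" names, e.g.
                 * cpu.time or block.vda.rd.bytes; only the unsigned
                 * long long ones are printed here */
                if (p->type == VIR_TYPED_PARAM_ULLONG)
                    printf("  %s = %llu\n", p->field, p->value.ul);
            }
        }

        if (nrecords > 0)
            virDomainStatsRecordListFree(records);
        virConnectClose(conn);
        return 0;
    }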

The patchset is organized as follows:
- the first 4 patches do refactoring to extract internal helper
  functions to be used by the old API and by the new bulk one.
  For block stats, a helper is actually added instead of extracted.
- since some groups require access to the QEMU monitor, one patch
  extends the internal interface to easily accommodate that
- finally, the last six patches implement the support for the
  bulk API.

Francesco Romani (11):
  qemu: extract helper to get the current balloon
  qemu: extract helper to gather vcpu data
  qemu: add helper to get the block stats
  qemu: extract helper to get block info
  qemu: bulk stats: pass connection to workers
  qemu: bulk stats: implement CPU stats group
  qemu: bulk stats: implement balloon group
  qemu: bulk stats: implement VCPU group
  qemu: bulk stats: implement interface group
  qemu: bulk stats: implement block group
  qemu: bulk stats: implement blockinfo group

 include/libvirt/libvirt.h.in |   6 +
 src/qemu/qemu_driver.c   | 558 ++-
 2 files changed, 502 insertions(+), 62 deletions(-)

-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 02/11] qemu: extract helper to gather vcpu data

2014-08-29 Thread Francesco Romani
Extract a helper to gather the vCPU information.

Signed-off-by: Francesco Romani 
---
 src/qemu/qemu_driver.c | 29 +
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index bbd16ed..1842e60 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -172,6 +172,12 @@ static int qemuDomainGetBalloonMemory(virQEMUDriverPtr driver,
   virDomainObjPtr vm,
   unsigned long *memory);
 
+static int qemuDomainHelperGetVcpus(virDomainObjPtr vm,
+virVcpuInfoPtr info,
+int maxinfo,
+unsigned char *cpumaps,
+int maplen);
+
 virQEMUDriverPtr qemu_driver = NULL;
 
 
@@ -4974,10 +4980,7 @@ qemuDomainGetVcpus(virDomainPtr dom,
int maplen)
 {
 virDomainObjPtr vm;
-size_t i;
-int v, maxcpu, hostcpus;
 int ret = -1;
-qemuDomainObjPrivatePtr priv;
 
 if (!(vm = qemuDomObjFromDomain(dom)))
 goto cleanup;
@@ -4992,7 +4995,25 @@ qemuDomainGetVcpus(virDomainPtr dom,
 goto cleanup;
 }
 
-priv = vm->privateData;
+ret = qemuDomainHelperGetVcpus(vm, info, maxinfo, cpumaps, maplen);
+
+ cleanup:
+if (vm)
+virObjectUnlock(vm);
+return ret;
+}
+
+static int
+qemuDomainHelperGetVcpus(virDomainObjPtr vm,
+ virVcpuInfoPtr info,
+ int maxinfo,
+ unsigned char *cpumaps,
+ int maplen)
+{
+int ret = -1;
+int v, maxcpu, hostcpus;
+size_t i;
+qemuDomainObjPrivatePtr priv = vm->privateData;
 
 if ((hostcpus = nodeGetCPUCount()) < 0)
 goto cleanup;
-- 
1.9.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [python PATCH 2/5] API: Skip 'virDomainStatsRecordListFree'

2014-08-29 Thread Pavel Hrdina
On 08/28/2014 06:32 PM, Peter Krempa wrote:
> The new API function doesn't make sense to be exported in python. The
> bindings will return native types instead of the struct array.
> ---
>  generator.py | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/generator.py b/generator.py
> index 9c497be..cfc016e 100755
> --- a/generator.py
> +++ b/generator.py
> @@ -571,6 +571,7 @@ skip_function = (
>  "virTypedParamsGetULLong",
> 
>  'virNetworkDHCPLeaseFree', # only useful in C, python code uses list
> +'virDomainStatsRecordListFree', # only useful in C, python uses dict
>  )
> 
>  lxc_skip_function = (
> 


You also have to add this hunk to make the sanitytest happy:

diff --git a/sanitytest.py b/sanitytest.py
index 4f4a648..10cf9f0 100644
--- a/sanitytest.py
+++ b/sanitytest.py
@@ -81,6 +81,9 @@ for cname in wantfunctions:
 if name[0:23] == "virNetworkDHCPLeaseFree":
 continue

+if name[0:28] == "virDomainStatsRecordListFree":
+continue
+
 # These aren't functions, they're callback signatures
 if name in ["virConnectAuthCallbackPtr", "virConnectCloseFunc",
 "virStreamSinkFunc", "virStreamSourceFunc",
"virStreamEventCallback",

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [python PATCH 1/5] generator: enum: Don't sort enums by names

2014-08-29 Thread Pavel Hrdina
On 08/28/2014 06:38 PM, Eric Blake wrote:
> On 08/28/2014 10:32 AM, Peter Krempa wrote:
>> Setting OCDs aside, sorting enums breaks if the definition contains
>> links to other enums defined in the libvirt header before. Let the
>> generator generate it in the natural order.
>> ---
>>  generator.py | 2 --
>>  1 file changed, 2 deletions(-)
> 
> ACK. But I _also_ think we need to fix libvirt to recursively resolve
> enums defined by reference to other names, so that the API xml file it
> generates is fully numeric rather than making clients chase down
> symbolic resolutions themselves.
> 

This doesn't fix the issue: the output is now generated in random
order, and sometimes the enum definitions come out in the wrong order,
so the references cannot be resolved. I agree that we should remove
those references.

NACK

Pavel

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list