Re: [libvirt] [PATCH 00/11] Generic data stream handling

2009-09-29 Thread Daniel P. Berrange
FYI,

the data streams patches are now committed

Daniel

On Fri, Sep 25, 2009 at 09:58:51AM +0200, Daniel Veillard wrote:
 On Mon, Aug 24, 2009 at 09:51:03PM +0100, Daniel P. Berrange wrote:
  The following series of patches introduce support for generic data
  streams in the libvirt API, the remote protocol, client & daemon.
  
  The public API takes the form of a new object virStreamPtr and
  methods to read/write/close it
  
  The remote protocol was the main hard bit. Since the protocol
  allows multiple concurrent API calls on a single connection,
  this needed  to also allow concurrent data streams. It is also
  not acceptable for a large data stream to block other traffic
  while it is transferring.
  
  Thus, we introduce a new protocol message type 'REMOTE_STREAM'
  to handle transfer for the stream data. A method involving a
  data streams starts off in the normal way with a REMOTE_CALL
  to the server, and a REMOTE_REPLY  response message. If this
  was successful, there now follows the data stream traffic.
  
  For outgoing streams (data from client to server), the client
  will send zero or more REMOTE_STREAM packets containing the
  data with status == REMOTE_CONTINUE. These are asynchronous
  and not acknowledged by the server. At any time the server
  may send an async message with a type of REMOTE_STREAM and
  status of REMOTE_ERROR. This indicates to the client that the
  transfer is aborting at server request. If the client wishes
  to abort, it can send the server a REMOTE_STREAM+REMOTE_ERROR
  message. If the client finishes its data transfer, it will
  send a final REMOTE_STREAM+REMOTE_OK message, and the server
  will respond with the same. This full roundtrip handshake
  ensures any async error messages are guaranteed to be flushed.
  
  For incoming data streams (data from server to client), the
  server sends zero or more REMOTE_STREAM packets containing the
  data with status == REMOTE_CONTINUE. These are asynchronous
  and not acknowledged by the client. At any time the client
  may send an async message with a type of REMOTE_STREAM and
  status of REMOTE_ERROR. This indicates to the server that the 
  transfer is aborting at client request. If the server wishes
  to abort, it can send the client a REMOTE_STREAM+REMOTE_ERROR
  message. When the server finishes its data transfer, it will
  send a final REMOTE_STREAM+REMOTE_CONTINUE message with a
  data length of zero (i.e. EOF). The client will then send a
  REMOTE_STREAM+REMOTE_OK packet and the server will respond
  with the same. This full roundtrip handshake ensures any async
  error messages are guaranteed to be flushed.
  
  This all ensures that multiple data streams can be active in
  parallel, and with a maximum data packet size of 256 KB, no
  single stream can cause too much latency on the connection for
  other API calls/streams.
 
   Okay, this is very similar in principle to HTTP pipelining,
 with IMHO the same benefits and the same potential drawbacks.
 A couple of things to check might be:
- the maximum amount of concurrent active streams allowed,
  for example suppose you want to migrate in emergency
  all the domains out of a failing machine, some level of
  serialization may be better than say attempting to migrate
  all 100 domains at the same time. 10 parallel streams might
  be better, but we need to make sure the API allows reporting
  such a condition.
- the maximum chunking size, but with 256k I think this is
  covered.
- synchronization internally between threads to avoid deadlocks
  or poor performance, which can be very hard to debug, so I
  guess an effort should be provided to explain how things are
  designed internally.
 
   But this sounds fine in general.
 
  The only thing it does not allow for is one API method to use
  two or more streams. These may be famous last words, but I
  don't think that use case will be necessary for any of our
  APIs...
 
   as long as the limitation is documented, especially in the parts
 of the code where the assumption is made, sounds fine.
 
  The last 5 patches with a subject of [DEMO] are *NOT* intended
  to be committed to the repository. They merely demonstrate the
  use of data streams for a couple of hypothetical file upload
  and download APIs. Actually they were mostly to allow me to
  test the streams code without messing around with the QEMU
  migration code.
  
  The immediate use case for this data stream code is Chris' QEMU
  migration patchset.
  
  The next use case is to allow serial console access to be tunnelled
  over libvirtd, eg to make  'virsh console GUEST' work remotely.
  This use case is why I included the support for non-blocking data
  streams and event loop integration (not required for Chris'
  migration use case)
 
   Okay, next to individual patches reviews,
 
 Daniel
 
 -- 
 Daniel Veillard  | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
 dan...@veillard.com  | 

Re: [libvirt] [PATCH 00/11] Generic data stream handling

2009-09-25 Thread Daniel Veillard
On Mon, Aug 24, 2009 at 09:51:03PM +0100, Daniel P. Berrange wrote:
 The following series of patches introduce support for generic data
 streams in the libvirt API, the remote protocol, client & daemon.
 
 The public API takes the form of a new object virStreamPtr and
 methods to read/write/close it
 
 The remote protocol was the main hard bit. Since the protocol
 allows multiple concurrent API calls on a single connection,
 this needed  to also allow concurrent data streams. It is also
 not acceptable for a large data stream to block other traffic
 while it is transferring.
 
 Thus, we introduce a new protocol message type 'REMOTE_STREAM'
 to handle transfer for the stream data. A method involving a
 data streams starts off in the normal way with a REMOTE_CALL
 to the server, and a REMOTE_REPLY  response message. If this
 was successful, there now follows the data stream traffic.
 
 For outgoing streams (data from client to server), the client
 will send zero or more REMOTE_STREAM packets containing the
 data with status == REMOTE_CONTINUE. These are asynchronous
 and not acknowledged by the server. At any time the server
 may send an async message with a type of REMOTE_STREAM and
 status of REMOTE_ERROR. This indicates to the client that the
 transfer is aborting at server request. If the client wishes
 to abort, it can send the server a REMOTE_STREAM+REMOTE_ERROR
 message. If the client finishes its data transfer, it will
 send a final REMOTE_STREAM+REMOTE_OK message, and the server
 will respond with the same. This full roundtrip handshake
 ensures any async error messages are guaranteed to be flushed.
 
 For incoming data streams (data from server to client), the
 server sends zero or more REMOTE_STREAM packets containing the
 data with status == REMOTE_CONTINUE. These are asynchronous
 and not acknowledged by the client. At any time the client
 may send an async message with a type of REMOTE_STREAM and
 status of REMOTE_ERROR. This indicates to the server that the 
 transfer is aborting at client request. If the server wishes
 to abort, it can send the client a REMOTE_STREAM+REMOTE_ERROR
 message. When the server finishes its data transfer, it will
 send a final REMOTE_STREAM+REMOTE_CONTINUE message with a
 data length of zero (i.e. EOF). The client will then send a
 REMOTE_STREAM+REMOTE_OK packet and the server will respond
 with the same. This full roundtrip handshake ensures any async
 error messages are guaranteed to be flushed.
 
 This all ensures that multiple data streams can be active in
 parallel, and with a maximum data packet size of 256 KB, no
 single stream can cause too much latency on the connection for
 other API calls/streams.

  Okay, this is very similar in principle to HTTP pipelining,
with IMHO the same benefits and the same potential drawbacks.
A couple of things to check might be:
   - the maximum amount of concurrent active streams allowed,
 for example suppose you want to migrate in emergency
 all the domains out of a failing machine, some level of
 serialization may be better than say attempting to migrate
 all 100 domains at the same time. 10 parallel streams might
 be better, but we need to make sure the API allows reporting
 such a condition.
   - the maximum chunking size, but with 256k I think this is
 covered.
   - synchronization internally between threads to avoid deadlocks
 or poor performance, which can be very hard to debug, so I
 guess an effort should be provided to explain how things are
 designed internally.

  But this sounds fine in general.

 The only thing it does not allow for is one API method to use
 two or more streams. These may be famous last words, but I
 don't think that use case will be necessary for any of our
 APIs...

  as long as the limitation is documented, especially in the parts
of the code where the assumption is made, sounds fine.

 The last 5 patches with a subject of [DEMO] are *NOT* intended
 to be committed to the repository. They merely demonstrate the
 use of data streams for a couple of hypothetical file upload
 and download APIs. Actually they were mostly to allow me to
 test the streams code without messing around with the QEMU
 migration code.
 
 The immediate use case for this data stream code is Chris' QEMU
 migration patchset.
 
 The next use case is to allow serial console access to be tunnelled
 over libvirtd, eg to make  'virsh console GUEST' work remotely.
 This use case is why I included the support for non-blocking data
 streams and event loop integration (not required for Chris'
 migration use case)

  Okay, next to individual patches reviews,

Daniel

-- 
Daniel Veillard  | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
dan...@veillard.com  | Rpmfind RPM search engine http://rpmfind.net/
http://veillard.com/ | virtualization library  http://libvirt.org/

--
Libvir-list mailing list
Libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH 00/11] Generic data stream handling

2009-09-25 Thread Daniel P. Berrange
On Fri, Sep 25, 2009 at 09:58:51AM +0200, Daniel Veillard wrote:
 On Mon, Aug 24, 2009 at 09:51:03PM +0100, Daniel P. Berrange wrote:
 
   Okay, this is very similar in principle to HTTP pipelining,
 with IMHO the same benefits and the same potential drawbacks.
 A couple of things to check might be:
- the maximum amount of concurrent active streams allowed,
  for example suppose you want to migrate in emergency
  all the domains out of a failing machine, some level of
  serialization may be better than say attempting to migrate
  all 100 domains at the same time. 10 parallel streams might
  be better, but we need to make sure the API allows reporting
  such a condition.

We could certainly add a tunable in /etc/libvirt/libvirtd.conf
that limits the number of streams that are allowed per client.

- the maximum chunking size, but with 256k I think this is
  covered.

Yes, the remote protocol itself limits each message to 256k
currently. I think this is a good enough size, since it avoids
the stream delaying RPC calls, and the encryption chunk size
is going to be smaller than this anyway, so you won't gain much
from larger chunks.

- synchronization internally between threads to avoid deadlocks
  or poor performance, which can be very hard to debug, so I
  guess an effort should be provided to explain how things are
  designed internally.

Each individual virStreamPtr object is directly associated with
a single API call, so in essence each virStreamPtr should only
really be used from a single thread. That said, the virStreamPtr
internal drivers should all lock the virStreamPtr object as
required to provide safety.

Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



Re: [libvirt] [PATCH 00/11] Generic data stream handling

2009-09-21 Thread Chris Lalancette
Daniel P. Berrange wrote:
 1)  Immediately after starting the stream, I get a virStreamRecv() callback 
 on
 the destination side.  The problem is that this is wrong for migration; 
 there's
 no data that I can read *from* the destination qemu process which makes any
 sense.  While I could implement the method and just throw away the data, that
 doesn't seem right to me.  This leads to...
 
 I realize this is due to the remoteAddClientStream() method in
 qemud/stream.c. It unconditionally sets 'stream->tx' to 1. I
 didn't notice the problem myself, since the test driver is using
 pipes which are unidirectional, but your UNIX domain socket is
 bi-directional.
 
 We could either add a flag to remoteAddClientStream() to indicate
 whether the stream should allow read or write or both. Or you
 might be able to call shutdown(sockfd, SHUT_RD) on your UNIX
 socket to indicate that it's only going to be used for writes,
 effectively making it unidirectional.

I tried the shutdown(sockfd, SHUT_RD) method, without success.  Then I commented
out the stream->tx = 1 line as a test, and the migration (mostly) worked.  At
least, it transferred the data to the other side, at which point trying to
virsh console on the destination side caused a libvirtd segfault again.  So
your idea is correct, although I think we still have a problem with the cleanup
of the stream.  I'm still debugging that.

-- 
Chris Lalancette



Re: [libvirt] [PATCH 00/11] Generic data stream handling

2009-09-17 Thread Chris Lalancette
Daniel P. Berrange wrote:
 I still see a safewrite() in your virStreamWrite() impl in the
 code currently pushed to gitorious.org, but perhaps you've changed
 that locally already. The other thing is that if the stream open

Yeah, sorry, I just never pushed it up to gitorious.  I'll make the changes
along with the virSetNonBlock() and push it up there, probably tomorrow.

 flags included VIR_STREAM_NONBLOCK, you must make sure you put your
 socket in non-blocking mode, e.g.
 
 if ((st->flags & VIR_STREAM_NONBLOCK) &&
     virSetNonBlock(create ? fds[1] : fds[0]) < 0) {
     virReportSystemError(st->conn, errno, "%s",
                          _("cannot make stream non-blocking"));
     goto error;
 }
 
 in your stream open method. That shouldn't have caused a crash though - it
 would merely make libvirtd non-responsive for a while if QEMU blocked
 the incoming migration socket.
 
 
 All in all though the code looks reasonable and I don't see any obvious
 problems. I'll have to try running it to see if any crash appears

Thanks.

-- 
Chris Lalancette



Re: [libvirt] [PATCH 00/11] Generic data stream handling

2009-09-17 Thread Daniel P. Berrange
On Tue, Sep 15, 2009 at 02:35:02PM +0200, Chris Lalancette wrote:
 Daniel P. Berrange wrote:
  The immediate use case for this data stream code is Chris' QEMU
  migration patchset.
  
  The next use case is to allow serial console access to be tunnelled
  over libvirtd, eg to make  'virsh console GUEST' work remotely.
  This use case is why I included the support for non-blocking data
  streams and event loop integration (not required for Chris'
  migration use case)
  
  Anyway, assuming Chris confirms that I've not broken his code, then
  patches 1-6 are targeted for this next release.
 
 I'm sorry for the very long delay in getting back to this.  I've been playing
 around with my tunnelled migration patches on top of this code, and I just 
 can't
 seem to make the new nonblocking stuff work properly.  I'm getting a couple of
 behaviors that are highly undesirable:
 
 1)  Immediately after starting the stream, I get a virStreamRecv() callback on
 the destination side.  The problem is that this is wrong for migration; 
 there's
 no data that I can read *from* the destination qemu process which makes any
 sense.  While I could implement the method and just throw away the data, that
 doesn't seem right to me.  This leads to...

I realize this is due to the remoteAddClientStream() method in
qemud/stream.c. It unconditionally sets 'stream->tx' to 1. I
didn't notice the problem myself, since the test driver is using
pipes which are unidirectional, but your UNIX domain socket is
bi-directional.

We could either add a flag to remoteAddClientStream() to indicate
whether the stream should allow read or write or both. Or you
might be able to call shutdown(sockfd, SHUT_RD) on your UNIX
socket to indicate that it's only going to be used for writes,
effectively making it unidirectional.

A 3rd option is to define more flags for virStreamNew(), one for
READ, one for WRITE, and have the remote daemon pay attention to
these

 2)  A crash in libvirtd on the source side of the destination.  It doesn't
 happen every single time, but when it has happened I've traced it down to the
 fact that src/remote_internal.c:remoteDomainEventFired() can get called 
 *after*
 conn->privateData has been set to NULL, leading to a SEGV on NULL pointer
 dereference.  I can provide a core-dump for this if needed.

I don't have any explanation for this - it's a little weird and we ought
to try and figure it out if possible.

 3) (minor) The python bindings refuse to build with these patches in place.

Yeah I completely forgot to add rules for virSecret APIs there

Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



Re: [libvirt] [PATCH 00/11] Generic data stream handling

2009-09-17 Thread Daniel P. Berrange
In looking at your migration patches I realized we could tweak
things a little bit to allow the implementation of a new style
migration API which does not require the destination virConnectPtr
object. More importantly this could be used independently of
the tunnelled migration. So the patch that follows takes the
public API bits of your migration code and adds a new flag
VIR_MIGRATE_PEER2PEER, and virDomainMigrateToURI method. It
implements it for the Xen driver. 

For the QEMU driver you already have code which copes with the
combination of VIR_MIGRATE_PEER2PEER + VIR_MIGRATE_TUNNELLED.
We could easily adapt that to also cope with doing a migration
using VIR_MIGRATE_PEER2PEER on its own. ie, source libvirtd
opens connection to destination libvirtd, runs the existing
prepare method, then uses the QEMU monitor to do a plain TCP migration
and then invokes the existing finish method. 

This patch applies between the data streams code & your migration
code - it'll clash a little - it ought to cover everything you
have relating to the public API with exception of the new internal
method virDomainMigratePrepareTunnel

Regards,
Daniel



Re: [libvirt] [PATCH 00/11] Generic data stream handling

2009-09-16 Thread Chris Lalancette
Daniel P. Berrange wrote:
 On Tue, Sep 15, 2009 at 02:35:02PM +0200, Chris Lalancette wrote:
 I've uploaded the code that I'm trying out at the moment to:

 http://gitorious.org/~clalance/libvirt/clalance-staging/commits/tunnelled-migration

 Dan, can you take a look and make any suggestions about where I might be 
 going
 wrong?  
 
 I've not looked at your migration code yet, but there's a mistake in your
 change to the test driver.
 
 http://gitorious.org/~clalance/libvirt/clalance-staging/commit/e77dc1f1ba4e18b4fc6a70198c2f3b253609dc42
 
 The test driver is deliberately not using saferead/write because those
 helpers do not handle EAGAIN. If you get EAGAIN they'll return -1 and
 you are left with no idea how much data you've written, which is not
 helpful :-) At the very least this will cause the stream to terminate with
 an error message. If I got something wrong, perhaps it's causing a crash.

Ah, I see.  I've switched that back, and switched over my tunnelled
implementation as well, but it doesn't seem to have an effect on my problem.

-- 
Chris Lalancette



Re: [libvirt] [PATCH 00/11] Generic data stream handling

2009-09-16 Thread Daniel P. Berrange
On Wed, Sep 16, 2009 at 09:35:21AM +0200, Chris Lalancette wrote:
 Daniel P. Berrange wrote:
  On Tue, Sep 15, 2009 at 02:35:02PM +0200, Chris Lalancette wrote:
  I've uploaded the code that I'm trying out at the moment to:
 
  http://gitorious.org/~clalance/libvirt/clalance-staging/commits/tunnelled-migration
 
  Dan, can you take a look and make any suggestions about where I might be 
  going
  wrong?  
  
  I've not looked at your migration code yet, but there's a mistake in your
  change to the test driver.
  
  http://gitorious.org/~clalance/libvirt/clalance-staging/commit/e77dc1f1ba4e18b4fc6a70198c2f3b253609dc42
  
  The test driver is deliberately not using saferead/write because those
  helpers do not handle EAGAIN. If you get EAGAIN they'll return -1 and
  you are left with no idea how much data you've written, which is not
  helpful :-) At the very least this will cause the stream to terminate with
  an error message. If I got something wrong, perhaps it's causing a crash.
 
 Ah, I see.  I've switched that back, and switched over my tunnelled
 implementation as well, but it doesn't seem to have an effect on my problem.

I still see a safewrite() in your virStreamWrite() impl in the
code currently pushed to gitorious.org, but perhaps you've changed
that locally already. The other thing is that if the stream open
flags included VIR_STREAM_NONBLOCK, you must make sure you put your
socket in non-blocking mode, e.g.

if ((st->flags & VIR_STREAM_NONBLOCK) &&
    virSetNonBlock(create ? fds[1] : fds[0]) < 0) {
    virReportSystemError(st->conn, errno, "%s",
                         _("cannot make stream non-blocking"));
    goto error;
}

in your stream open method. That shouldn't have caused a crash though - it
would merely make libvirtd non-responsive for a while if QEMU blocked
the incoming migration socket.


All in all though the code looks reasonable and I don't see any obvious
problems. I'll have to try running it to see if any crash appears

Regards,
Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



Re: [libvirt] [PATCH 00/11] Generic data stream handling

2009-09-15 Thread Chris Lalancette
Daniel P. Berrange wrote:
 The immediate use case for this data stream code is Chris' QEMU
 migration patchset.
 
 The next use case is to allow serial console access to be tunnelled
 over libvirtd, eg to make  'virsh console GUEST' work remotely.
 This use case is why I included the support for non-blocking data
 streams and event loop integration (not required for Chris'
 migration use case)
 
 Anyway, assuming Chris confirms that I've not broken his code, then
 patches 1-6 are targeted for this next release.

I'm sorry for the very long delay in getting back to this.  I've been playing
around with my tunnelled migration patches on top of this code, and I just can't
seem to make the new nonblocking stuff work properly.  I'm getting a couple of
behaviors that are highly undesirable:

1)  Immediately after starting the stream, I get a virStreamRecv() callback on
the destination side.  The problem is that this is wrong for migration; there's
no data that I can read *from* the destination qemu process which makes any
sense.  While I could implement the method and just throw away the data, that
doesn't seem right to me.  This leads to...

2)  A crash in libvirtd on the source side of the destination.  It doesn't
happen every single time, but when it has happened I've traced it down to the
fact that src/remote_internal.c:remoteDomainEventFired() can get called *after*
conn->privateData has been set to NULL, leading to a SEGV on NULL pointer
dereference.  I can provide a core-dump for this if needed.

3) (minor) The python bindings refuse to build with these patches in place.
It's probably just a matter of fixing up the generator.py, but it needs to be 
done.

I've uploaded the code that I'm trying out at the moment to:

http://gitorious.org/~clalance/libvirt/clalance-staging/commits/tunnelled-migration

Dan, can you take a look and make any suggestions about where I might be going
wrong?  If you want to test out the tunnelled migration yourself, you'll need to
make sure you at least have the exec non-blocking patch (i.e. c/s
907500095851230a480b14bc852c4e49d32cb16d from the upstream qemu repo) in place.
 I have a qemu F-11 package with this patch in it available at:

http://people.redhat.com/clalance/qemu-exec-nonblock

-- 
Chris Lalancette



Re: [libvirt] [PATCH 00/11] Generic data stream handling

2009-09-15 Thread Daniel P. Berrange
On Tue, Sep 15, 2009 at 02:35:02PM +0200, Chris Lalancette wrote:
 
 I've uploaded the code that I'm trying out at the moment to:
 
 http://gitorious.org/~clalance/libvirt/clalance-staging/commits/tunnelled-migration
 
 Dan, can you take a look and make any suggestions about where I might be going
 wrong?  

I've not looked at your migration code yet, but there's a mistake in your
change to the test driver.

http://gitorious.org/~clalance/libvirt/clalance-staging/commit/e77dc1f1ba4e18b4fc6a70198c2f3b253609dc42

The test driver is deliberately not using saferead/write because those
helpers do not handle EAGAIN. If you get EAGAIN they'll return -1 and
you are left with no idea how much data you've written, which is not
helpful :-) At the very least this will cause the stream to terminate with
an error message. If I got something wrong, perhaps it's causing a crash.

Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|
