Re: [kvm-devel] I/O bandwidth control on KVM

2008-03-10 Thread Ryo Tsuruta
Hi Anthony,

 There's an aio_init in block-raw-posix.c that sets the thread count to 
 1.  If you #if 0 out that block, or increase the threads to something 
 higher (like 16), you should see multiple simultaneous requests.  Sorry 
 about that, I had that in a different patch in my tree.

Thank you for your suggestion. I tried commenting out the call to
aio_init(), so the AIO functions used the defaults of aio_threads=20 and
aio_num=64, but the results were the same as in my earlier tests: the
I/Os were split into equal proportions.

Then I changed the io_throttle value to 1 (the default is 4). io_throttle
is a dm-ioband configuration parameter: when the number of BIOs in flight
exceeds this value, dm-ioband starts to control the bandwidth (a rough
sketch of this rule follows the table below). With this setting the
bandwidth control worked much better, but I still sometimes couldn't get
the proper proportions. These results suggest that the number of
simultaneous I/O requests issued by the two KVM virtual machines was
less than 4.

 The number of issued I/Os for 60 seconds (io_throttle = 1)
 
+-----------------+--------------+--------------+---------+
| weight setting  |    sda11     |    sda12     |  total  |
|  sda11 : sda12  |   I/Os (%)   |   I/Os (%)   |  I/Os   |
|-----------------+--------------+--------------+---------|
| 80:20 #1        | 7264 (75.8%) | 2324 (24.2%) |   9588  |
| 80:20 #2        | 7147 (71.1%) | 2899 (28.9%) |  10046  |
|-----------------+--------------+--------------+---------|
| 50:50 #1        | 5162 (55.5%) | 4146 (44.5%) |   9308  |
| 50:50 #2        | 4666 (49.3%) | 4793 (50.7%) |   9459  |
|-----------------+--------------+--------------+---------|
| 20:80 #1        | 3574 (39.8%) | 5402 (60.2%) |   8976  |
| 20:80 #2        | 2038 (21.4%) | 7499 (78.6%) |   9537  |
|-----------------+--------------+--------------+---------|
| 10:90 #1        | 2027 (21.1%) | 7602 (78.9%) |   9629  |
| 10:90 #2        | 1556 (16.4%) | 7935 (83.6%) |   9491  |
+-----------------+--------------+--------------+---------+
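
For reference, here is a rough sketch of the io_throttle rule described
above. This is an illustration only, not the actual dm-ioband code, and
the structure and function names are made up:

/* Illustration: bandwidth control only engages once the number of BIOs
 * in flight exceeds io_throttle (default 4, set to 1 in the test above). */
struct ioband_group_sketch {
    int io_throttle;      /* threshold before throttling starts      */
    int bios_in_flight;   /* BIOs submitted but not yet completed    */
};

static int needs_throttling(const struct ioband_group_sketch *g)
{
    /* With only a few concurrent requests coming from the guests, this
     * condition rarely becomes true at the default threshold of 4. */
    return g->bios_in_flight > g->io_throttle;
}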
 

 The virtio block backend isn't quite optimal right now.  I have some 
 patches (that are currently suffering bitrot) that switch over to 
 linux-aio, which allows zero-copy and proper barrier support (so the 
 guest block device will use an ordered queue).  The QEMU aio 
 infrastructure makes it tough to integrate properly, though.

That's a good idea. I hope your work is going well, and I look forward
to the release of those patches.

Thanks,
Ryo Tsuruta



Re: [kvm-devel] I/O bandwidth control on KVM

2008-03-06 Thread Ryo Tsuruta
Hi Anthony.

 The attached patch implements AIO support for the virtio backend so if this 
 is the case, you should see the proper proportions.

First, thank you very much for making the patch.
I ran the same test program on KVM with the patch applied, but I wasn't
able to get good results.
The dm-ioband log showed that the I/Os didn't seem to be issued simultaneously.
It looked like each I/O was blocked until the previous one completed.

 The number of issued I/Os for 60 seconds
 
+----------------------------+-----------+-----------+
| device                     |   sda11   |   sda12   |
| weight setting             |    80%    |    20%    |
|-----------+----------------+-----------+-----------|
| KVM AIO   |  I/Os          |    4596   |    4728   |
|           | ratio to total |   49.3%   |   50.7%   |
|-----------+----------------+-----------+-----------|
| KVM       |  I/Os          |    5217   |    5623   |
|           | ratio to total |   48.1%   |   51.9%   |
+-----------+----------------+-----------+-----------+
 

Here is another test result, which is very interesting.
I/Os were issued from a KVM virtual machine and from the host machine
simultaneously. 


The number of issued I/Os for 60 seconds
 
+----------------+-------------------+-------------------+
| issue from     |  Virtual Machine  |    Host Machine   |
| device         |       sda11       |       sda12       |
| weight setting |        80%        |        20%        |
|----------------+-------------------+-------------------|
| I/Os           |        191        |       9466        |
| ratio to total |        2.0%       |       98.0%       |
+----------------+-------------------+-------------------+
 

Most of the I/Os that were processed were those issued by the host machine.
There may be another bottleneck somewhere as well.
Here is a block diagram representing the test.

+---------------------------+
|      Virtual Machine      |
|                           |
| Read/Write with O_DIRECT  |   +---------------------------+
|      process x 128        |   |       Host Machine        |
|             |             |   |                           |
|             V             |   | Read/Write with O_DIRECT  |
|        /dev/vda1          |   |      process x 128        |
+-------------|-------------+   +-------------|-------------+
+-------------V-------------------------------V-------------+
|   /dev/mapper/ioband1       |    /dev/mapper/ioband2      |
|        80% weight           |        20% weight           |
|                             |                             |
|      Control I/O bandwidth according to the weights       |
+-------------|-------------------------------|-------------+
+-------------V-------------+   +-------------V-------------+
|        /dev/sda11         |   |        /dev/sda12         |
+---------------------------+   +---------------------------+

Thanks,
Ryo Tsuruta



Re: [kvm-devel] I/O bandwidth control on KVM

2008-03-06 Thread Anthony Liguori
Ryo Tsuruta wrote:
 Hi Anthony.

   
 The attached patch implements AIO support for the virtio backend so if this 
 is the case, you should see the proper proportions.
 

 First, thank you very much for making the patch. 
 I ran the same test program on KVM with the patch applied, but I wasn't
 able to get good results.
 The dm-ioband log showed that the I/Os didn't seem to be issued simultaneously.
   

There's an aio_init in block-raw-posix.c that sets the thread count to 
1.  If you #if 0 out that block, or increase the threads to something 
higher (like 16), you should see multiple simultaneous requests.  Sorry 
about that, I had that in a different patch in my tree.
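
For reference, the change is roughly the following. This is only a
sketch against glibc's <aio.h> (the function name here is made up and
the exact code in block-raw-posix.c differs slightly):

#include <aio.h>

static void posix_aio_setup_sketch(void)
{
#if 0   /* original: a single AIO thread serializes all requests */
    struct aioinit ai = { .aio_threads = 1, .aio_num = 1 };
    aio_init(&ai);
#else   /* allow several requests in flight; skipping aio_init()
           entirely leaves glibc's defaults (20 threads, 64 requests) */
    struct aioinit ai = { .aio_threads = 16, .aio_num = 64 };
    aio_init(&ai);
#endif
}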

 It looked like each I/O was blocked until the previous one completed.

  The number of issued I/Os for 60 seconds
  
 +----------------------------+-----------+-----------+
 | device                     |   sda11   |   sda12   |
 | weight setting             |    80%    |    20%    |
 |-----------+----------------+-----------+-----------|
 | KVM AIO   |  I/Os          |    4596   |    4728   |
 |           | ratio to total |   49.3%   |   50.7%   |
 |-----------+----------------+-----------+-----------|
 | KVM       |  I/Os          |    5217   |    5623   |
 |           | ratio to total |   48.1%   |   51.9%   |
 +-----------+----------------+-----------+-----------+
  

 Here is another test result, which is very interesting.
 I/Os were issued from a KVM virtual machine and from the host machine
 simultaneously. 


 The number of issued I/Os for 60 seconds
  
 +----------------+-------------------+-------------------+
 | issue from     |  Virtual Machine  |    Host Machine   |
 | device         |       sda11       |       sda12       |
 | weight setting |        80%        |        20%        |
 |----------------+-------------------+-------------------|
 | I/Os           |        191        |       9466        |
 | ratio to total |        2.0%       |       98.0%       |
 +----------------+-------------------+-------------------+
  

 Most of the I/Os that were processed were those issued by the host machine.
 There may be another bottleneck somewhere as well.
   

The virtio block backend isn't quite optimal right now.  I have some 
patches (that are currently suffering bitrot) that switch over to 
linux-aio, which allows zero-copy and proper barrier support (so the 
guest block device will use an ordered queue).  The QEMU aio 
infrastructure makes it tough to integrate properly, though.
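
(For reference, the linux-aio interface in question is libaio's
io_submit() family. A minimal sketch, not taken from the patches
mentioned above, of queueing a read straight into a guest buffer:)

#include <stddef.h>
#include <libaio.h>

/* ctx comes from io_queue_init(); cb must stay valid until the
 * completion is reaped with io_getevents().  With an O_DIRECT fd the
 * kernel DMAs directly into guest_buf, i.e. zero-copy. */
static int submit_guest_read(io_context_t ctx, struct iocb *cb, int fd,
                             void *guest_buf, size_t len, long long offset)
{
    struct iocb *cbs[1] = { cb };

    io_prep_pread(cb, fd, guest_buf, len, offset);
    return io_submit(ctx, 1, cbs);   /* returns number of iocbs queued */
}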

Regards,

Anthony Liguori

 Here is a block diagram representing the test.

 +---------------------------+
 |      Virtual Machine      |
 |                           |
 | Read/Write with O_DIRECT  |   +---------------------------+
 |      process x 128        |   |       Host Machine        |
 |             |             |   |                           |
 |             V             |   | Read/Write with O_DIRECT  |
 |        /dev/vda1          |   |      process x 128        |
 +-------------|-------------+   +-------------|-------------+
 +-------------V-------------------------------V-------------+
 |   /dev/mapper/ioband1       |    /dev/mapper/ioband2      |
 |        80% weight           |        20% weight           |
 |                             |                             |
 |      Control I/O bandwidth according to the weights       |
 +-------------|-------------------------------|-------------+
 +-------------V-------------+   +-------------V-------------+
 |        /dev/sda11         |   |        /dev/sda12         |
 +---------------------------+   +---------------------------+

 Thanks,
 Ryo Tsuruta
   




Re: [kvm-devel] I/O bandwidth control on KVM

2008-03-05 Thread Ryo Tsuruta
Hi,

 If you are using virtio drivers in the guest (which I presume you are 
 given the reference to /dev/vda), try using the following -drive syntax:
 
 -drive file=/dev/mapper/ioband1,if=virtio,boot=on,cache=off
 
 This will force the use of O_DIRECT.  By default, QEMU does not open 
 with O_DIRECT so you'll see page cache effects.

I tried the test with the cache=off option; here is the result.

   The number of issued I/Os
 
+----------------------------+-----------+-----------+
| device                     |   sda11   |   sda12   |
| weight setting             |    80%    |    20%    |
|-----------+----------------+-----------+-----------|
| KVM       |  I/Os          |    5217   |    5623   |
| cache=off | ratio to total |   48.1%   |   51.9%   |
|-----------+----------------+-----------+-----------|
| KVM       |  I/Os          |    4397   |    2902   |
| cache=on  | ratio to total |   60.2%   |   39.8%   |
|-----------+----------------+-----------+-----------|
| local     |  I/Os          |    5447   |    1314   |
|           | ratio to total |   80.6%   |   19.4%   |
+-----------+----------------+-----------+-----------+
 

Looking at /sys/block/sda/sda[12]/stat, I could check for page cache
effects and confirm that O_DIRECT was taking effect. However, the
bandwidth control still didn't work.

I also ran another test. The difference from the previous test is that
the weights were assigned on a per-partition basis instead of a
per-cgroup basis.
It worked fine with Xen and with local processes, but unfortunately it
didn't work on KVM.

   The number of issued I/Os
   The weights were assigned on a per-partition basis
+----------------------------+-----------+-----------+
| device                     |   sda11   |   sda12   |
| weight setting             |    80%    |    20%    |
|-----------+----------------+-----------+-----------|
| KVM       |  I/Os          |    5905   |    5873   |
| cache=off | ratio to total |   50.1%   |   49.9%   |
|-----------+----------------+-----------+-----------|
| local     |  I/Os          |    6929   |    1629   |
|           | ratio to total |   81.0%   |   19.0%   |
|-----------+----------------+-----------+-----------|
| Xen       |  I/Os          |    8534   |    2360   |
|           | ratio to total |   78.3%   |   21.7%   |
+-----------+----------------+-----------+-----------+
 

I don't understand what is going on. I'd appreciate it if you could
give me further suggestions.

Thanks,
Ryo Tsuruta



Re: [kvm-devel] I/O bandwidth control on KVM

2008-03-05 Thread Anthony Liguori

Ryo Tsuruta wrote:

Hi,

  
If you are using virtio drivers in the guest (which I presume you are 
given the reference to /dev/vda), try using the following -drive syntax:


-drive file=/dev/mapper/ioband1,if=virtio,boot=on,cache=off

This will force the use of O_DIRECT.  By default, QEMU does not open 
with O_DIRECT so you'll see page cache effects.



 I tried the test with the cache=off option; here is the result.
  


Can you give the attached patch a try?  The virtio backend issues 
synchronous IO requests, blocking the guest from making progress until 
the IO completes.  It's possible that what you're seeing is the 
scheduler competing with your IO bandwidth limiting in order to ensure 
fairness, since IO completion is intimately tied to CPU consumption 
(because we're using blocking IO).


The attached patch implements AIO support for the virtio backend so if 
this is the case, you should see the proper proportions.


Regards,

Anthony Liguori
diff --git a/qemu/hw/virtio-blk.c b/qemu/hw/virtio-blk.c
index 301b5a1..3c56bed 100644
--- a/qemu/hw/virtio-blk.c
+++ b/qemu/hw/virtio-blk.c
@@ -71,59 +71,121 @@ typedef struct VirtIOBlock
     BlockDriverState *bs;
 } VirtIOBlock;
 
+typedef struct VBDMARequestState VBDMARequestState;
+
+typedef struct VBDMAState
+{
+    VirtQueueElement elem;
+    int count;
+    int is_write;
+    unsigned int wlen;
+    VirtQueue *vq;
+    VirtIODevice *vdev;
+    VBDMARequestState *requests;
+} VBDMAState;
+
+struct VBDMARequestState
+{
+    VBDMAState *dma;
+    BlockDriverAIOCB *aiocb;
+    VBDMARequestState *next;
+};
+
 static VirtIOBlock *to_virtio_blk(VirtIODevice *vdev)
 {
     return (VirtIOBlock *)vdev;
 }
 
+static void virtio_io_completion(void *opaque, int ret)
+{
+    VBDMARequestState *req = opaque, **ppreq;
+    VBDMAState *dma = req->dma;
+    struct virtio_blk_inhdr *in;
+
+    for (ppreq = &dma->requests; *ppreq; ppreq = &(*ppreq)->next) {
+        if (*ppreq == req) {
+            *ppreq = req->next;
+            break;
+        }
+    }
+
+    qemu_free(req);
+
+    if (dma->requests)
+        return;
+
+    in = (void *)dma->elem.in_sg[dma->elem.in_num - 1].iov_base;
+    dma->wlen += sizeof(*in);
+    if (ret == -EOPNOTSUPP)
+        in->status = VIRTIO_BLK_S_UNSUPP;
+    else
+        in->status = VIRTIO_BLK_S_OK;
+    virtqueue_push(dma->vq, &dma->elem, dma->wlen);
+    virtio_notify(dma->vdev, dma->vq);
+    qemu_free(dma);
+}
+
 static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
 {
     VirtIOBlock *s = to_virtio_blk(vdev);
-    VirtQueueElement elem;
+    VBDMAState *dma = qemu_mallocz(sizeof(VBDMAState));
     unsigned int count;
 
-    while ((count = virtqueue_pop(vq, &elem)) != 0) {
-        struct virtio_blk_inhdr *in;
+    while ((count = virtqueue_pop(vq, &dma->elem)) != 0) {
         struct virtio_blk_outhdr *out;
-        unsigned int wlen;
+        VBDMARequestState *req;
         off_t off;
         int i;
 
-        out = (void *)elem.out_sg[0].iov_base;
-        in = (void *)elem.in_sg[elem.in_num - 1].iov_base;
+        out = (void *)dma->elem.out_sg[0].iov_base;
         off = out->sector;
 
+        dma->vq = vq;
+        dma->vdev = vdev;
+
         if (out->type & VIRTIO_BLK_T_SCSI_CMD) {
-            wlen = sizeof(*in);
-            in->status = VIRTIO_BLK_S_UNSUPP;
+            req = qemu_mallocz(sizeof(VBDMARequestState));
+            req->dma = dma;
+            req->next = dma->requests;
+            dma->requests = req;
+            virtio_io_completion(req, -EOPNOTSUPP);
         } else if (out->type & VIRTIO_BLK_T_OUT) {
-            wlen = sizeof(*in);
-
-            for (i = 1; i < elem.out_num; i++) {
-                bdrv_write(s->bs, off,
-                           elem.out_sg[i].iov_base,
-                           elem.out_sg[i].iov_len / 512);
-                off += elem.out_sg[i].iov_len / 512;
+            dma->count = dma->elem.out_num - 1;
+            dma->is_write = 1;
+            for (i = 1; i < dma->elem.out_num; i++) {
+                req = qemu_mallocz(sizeof(VBDMARequestState));
+                req->dma = dma;
+                req->next = dma->requests;
+                dma->requests = req;
+
+                req->aiocb = bdrv_aio_write(s->bs, off,
+                                            dma->elem.out_sg[i].iov_base,
+                                            dma->elem.out_sg[i].iov_len / 512,
+                                            virtio_io_completion, req);
+                off += dma->elem.out_sg[i].iov_len / 512;
             }
-
-            in->status = VIRTIO_BLK_S_OK;
         } else {
-            wlen = sizeof(*in);
-
-            for (i = 0; i < elem.in_num - 1; i++) {
-                bdrv_read(s->bs, off,
-                          elem.in_sg[i].iov_base,
-                          elem.in_sg[i].iov_len / 512);
-                off += elem.in_sg[i].iov_len / 512;
-                wlen += elem.in_sg[i].iov_len;
+            dma->count = dma->elem.in_num - 1;
+            dma->is_write = 0;
+            for (i = 0; i < dma->elem.in_num - 1; i++) {
+                req = qemu_mallocz(sizeof(VBDMARequestState));
+                req->dma = dma;
+                req->next = dma->requests;
+                dma->requests = req;
+
+                req->aiocb = bdrv_aio_read(s->bs, off,
+                                           dma->elem.in_sg[i].iov_base,
+                                           dma->elem.in_sg[i].iov_len / 512,
+                                           virtio_io_completion, req);
+                off += dma->elem.in_sg[i].iov_len / 512;
+                dma->wlen += dma->elem.in_sg[i].iov_len;
             }
-
-            in->status = VIRTIO_BLK_S_OK;
         }
 
-        virtqueue_push(vq, &elem, wlen);
-        virtio_notify(vdev, vq);
+        dma = qemu_mallocz(sizeof(VBDMAState));
     }
+
+    qemu_free(dma);
 }
 
 static void 

Re: [kvm-devel] I/O bandwidth control on KVM

2008-03-03 Thread Ryo Tsuruta
Hi,

 If you are using virtio drivers in the guest (which I presume you are given 
 the reference to /dev/vda), try using the following -drive syntax:

 -drive file=/dev/mapper/ioband1,if=virtio,boot=on,cache=off

 This will force the use of O_DIRECT.  By default, QEMU does not open with 
 O_DIRECT so you'll see page cache effects.

Thank you for your suggestion.
I was using virtio drivers, as you presumed.
I had assumed that KVM would use O_DIRECT whenever applications on the
guest opened files with the O_DIRECT option.
I'll try the syntax you mentioned and report back.

Thanks,
Ryo Tsuruta



Re: [kvm-devel] I/O bandwidth control on KVM

2008-03-02 Thread Avi Kivity
Anthony Liguori wrote:
 Hi Ryo,

 Ryo Tsuruta wrote:
   
 Hello all,

 I've implemented a block device which throttles block I/O bandwidth, 
 which I call dm-ioband, and have been trying to throttle I/O bandwidth
 in a KVM environment. But unfortunately it doesn't work well: the number
 of issued I/Os does not follow the bandwidth setting.
 On the other hand, I got good results when accessing the local disk
 directly on the local machine.

 I'm not so familiar with KVM. Could anyone give me any advice?
 

 If you are using virtio drivers in the guest (which I presume you are 
 given the reference to /dev/vda), try using the following -drive syntax:

 -drive file=/dev/mapper/ioband1,if=virtio,boot=on,cache=off

 This will force the use of O_DIRECT.  By default, QEMU does not open 
 with O_DIRECT so you'll see page cache effects.

   

Good point.  But IIRC cache=off is not limited to virtio?


-- 
error compiling committee.c: too many arguments to function




Re: [kvm-devel] I/O bandwidth control on KVM

2008-03-02 Thread Anthony Liguori
Avi Kivity wrote:
 Anthony Liguori wrote:

 If you are using virtio drivers in the guest (which I presume you are 
 given the reference to /dev/vda), try using the following -drive syntax:

 -drive file=/dev/mapper/ioband1,if=virtio,boot=on,cache=off

 This will force the use of O_DIRECT.  By default, QEMU does not open 
 with O_DIRECT so you'll see page cache effects.

   

 Good point.  But IIRC cache=off is not limited to virtio?

Nope.  I just wanted to give the exact syntax to use and it looked like 
he was using virtio.

Regards,

Anthony Liguori




Re: [kvm-devel] I/O bandwidth control on KVM

2008-03-01 Thread Anthony Liguori
Hi Ryo,

Ryo Tsuruta wrote:
 Hello all,
 
 I've implemented a block device which throttles block I/O bandwidth, 
 which I call dm-ioband, and have been trying to throttle I/O bandwidth
 in a KVM environment. But unfortunately it doesn't work well: the number
 of issued I/Os does not follow the bandwidth setting.
 On the other hand, I got good results when accessing the local disk
 directly on the local machine.
 
 I'm not so familiar with KVM. Could anyone give me any advice?

If you are using virtio drivers in the guest (which I presume you are 
given the reference to /dev/vda), try using the following -drive syntax:

-drive file=/dev/mapper/ioband1,if=virtio,boot=on,cache=off

This will force the use of O_DIRECT.  By default, QEMU does not open 
with O_DIRECT so you'll see page cache effects.
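
(To make the effect concrete: cache=off means the backing file or device
is opened with O_DIRECT on the host, roughly as in the sketch below.
This is an illustration only, not the actual open path in
block-raw-posix.c:)

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>

/* Illustration: with O_DIRECT the guest's I/O bypasses the host page
 * cache and reaches the dm-ioband device with its original pattern. */
static int open_backing_dev(const char *path, int writable)
{
    int flags = (writable ? O_RDWR : O_RDONLY) | O_DIRECT;

    return open(path, flags);
}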

Regards,

Anthony Liguori



[kvm-devel] I/O bandwidth control on KVM

2008-02-29 Thread Ryo Tsuruta
Hello all,

I've implemented a block device which throttles block I/O bandwidth,
which I call dm-ioband, and have been trying to throttle I/O bandwidth
in a KVM environment. But unfortunately it doesn't work well: the number
of issued I/Os does not follow the bandwidth setting.
On the other hand, I got good results when accessing the local disk
directly on the local machine.

I'm not so familiar with KVM. Could anyone give me any advice?

For dm-ioband details, please see the website at
http://people.valinux.co.jp/~ryov/dm-ioband/

   The number of issued I/Os
+----------------------------+-----------+-----------+
| device                     |   sda11   |   sda12   |
| weight setting             |    80%    |    20%    |
|-----------+----------------+-----------+-----------|
| KVM       |  I/Os          |    4397   |    2902   |
|           | ratio to total |   60.2%   |   39.8%   |
|-----------+----------------+-----------+-----------|
| local     |  I/Os          |    5447   |    1314   |
|           | ratio to total |   80.6%   |   19.4%   |
+-----------+----------------+-----------+-----------+

The test environment and procedure are as follows:

  o Prepare two partitions, sda11 and sda12.
  o Create two bandwidth control devices, mapped to sda11 and sda12
    respectively.
  o Give weights of 80 and 20 to the two bandwidth control devices
    respectively.
  o Run two virtual machines, each virtual machine's disk mapped to one
    of the bandwidth control devices.
  o Run 128 processes issuing random 4KB read/write direct I/O on each
    virtual machine at the same time (a rough sketch of such a worker
    appears below).
  o Count the number of I/Os completed in 60 seconds.
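
Here is a rough sketch of one worker process (the actual test program is
not included here; error handling and the 60-second run length control
are simplified):

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define BLKSIZE 4096

/* One of the 128 workers: random 4KB direct reads/writes on the given
 * device until the parent kills it after 60 seconds. */
static void io_worker(const char *dev, long nr_blocks)
{
    void *buf;
    int fd = open(dev, O_RDWR | O_DIRECT);

    if (fd < 0 || posix_memalign(&buf, 4096, BLKSIZE) != 0)
        _exit(1);

    for (;;) {
        off_t off = (off_t)(random() % nr_blocks) * BLKSIZE;

        if (random() & 1)
            pread(fd, buf, BLKSIZE, off);
        else
            pwrite(fd, buf, BLKSIZE, off);
    }
}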

Access through KVM
  +---------------------------+   +---------------------------+
  |  Virtual Machine 1 (VM1)  |   |  Virtual Machine 2 (VM2)  |
  |     in cgroup ioband1     |   |     in cgroup ioband2     |
  |                           |   |                           |
  | Read/Write with O_DIRECT  |   | Read/Write with O_DIRECT  |
  |      process x 128        |   |      process x 128        |
  |             |             |   |             |             |
  |             V             |   |             V             |
  |        /dev/vda1          |   |        /dev/vda1          |
  +-------------|-------------+   +-------------|-------------+
  +-------------V-------------------------------V-------------+
  |   /dev/mapper/ioband1       |    /dev/mapper/ioband2      |
  |  80% for cgroup ioband1     |  20% for cgroup ioband2     |
  |                             |                             |
  |    Control I/O bandwidth according to the cgroup tasks    |
  +-------------|-------------------------------|-------------+
  +-------------V-------------+   +-------------V-------------+
  |        /dev/sda11         |   |        /dev/sda12         |
  +---------------------------+   +---------------------------+

  Direct access
  +---------------------------+   +---------------------------+
  |      cgroup ioband1       |   |      cgroup ioband2       |
  |                           |   |                           |
  | Read/Write with O_DIRECT  |   | Read/Write with O_DIRECT  |
  |      process x 128        |   |      process x 128        |
  |             |             |   |             |             |
  +-------------|-------------+   +-------------|-------------+
  +-------------V-------------------------------V-------------+
  |   /dev/mapper/ioband1       |    /dev/mapper/ioband2      |
  |  80% for cgroup ioband1     |  20% for cgroup ioband2     |
  |                             |                             |
  |    Control I/O bandwidth according to the cgroup tasks    |
  +-------------|-------------------------------|-------------+
  +-------------V-------------+   +-------------V-------------+
  |        /dev/sda11         |   |        /dev/sda12         |
  +---------------------------+   +---------------------------+

Thanks,
Ryo Tsuruta
