Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-07-05 Thread Asias He

On 07/04/2012 10:40 AM, Rusty Russell wrote:

On Tue, 03 Jul 2012 08:39:39 +0800, Asias He  wrote:

On 07/02/2012 02:41 PM, Rusty Russell wrote:

Sure, our guest merging might save us 100x as many exits as no merging.
But since we're not doing many requests, does it matter?


We can still have many requests with slow devices. The number of
requests depends on the workload in guest. E.g. 512 IO threads in guest
keeping doing IO.


You can have many requests outstanding.  But if the device is slow, the
rate of requests being serviced must be low.


Yes.


Am I misunderstanding something?  I thought if you could have a high
rate of requests, it's not a slow device.


Sure.

--
Asias




Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-07-04 Thread Paolo Bonzini
On 03/07/2012 16:28, Dor Laor wrote:
 Users using a spinning disk still get IO scheduling in the host though.
 What benefit is there in doing it in the guest as well?
>>>
>>> The io scheduler waits for requests to merge and thus batch IOs
>>> together. It's not important w.r.t spinning disks since the host can
>>> do it but it causes much less vmexits which is the key issue for VMs.
>>
>> Does it make sense to use the guest's I/O scheduler at all?
> 
> That's the reason we have a noop io scheduler.

But is performance really better with noop?  We had to revert usage of
QUEUE_FLAG_NONROT in the guests because it caused performance
degradation (commit f8b12e513b953aebf30f8ff7d2de9be7e024dbbe).

The bio-based path is really QUEUE_FLAG_NONROT++, so it should really be
a special case for people who know what they're doing.  (Better would be
to improve QEMU, there's definitely room for 20% improvement...).
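For reference, QUEUE_FLAG_NONROT is a per-queue hint a block driver can set so
that the guest treats the disk as non-rotational. A minimal sketch using the
~3.x block-layer helper (not code from this patch; it is the flag whose use in
virtio-blk the commit above reverted):

#include <linux/blkdev.h>

/* Sketch: mark a request queue non-rotational so the guest I/O stack
 * skips rotational-disk heuristics. */
static void virtblk_mark_nonrot(struct request_queue *q)
{
        queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
}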

Paolo


Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-07-03 Thread Rusty Russell
On Tue, 03 Jul 2012 08:39:39 +0800, Asias He  wrote:
> On 07/02/2012 02:41 PM, Rusty Russell wrote:
> > Sure, our guest merging might save us 100x as many exits as no merging.
> > But since we're not doing many requests, does it matter?
> 
> We can still have many requests with slow devices. The number of 
> requests depends on the workload in guest. E.g. 512 IO threads in guest 
> keeping doing IO.

You can have many requests outstanding.  But if the device is slow, the
rate of requests being serviced must be low.

Am I misunderstanding something?  I thought if you could have a high
rate of requests, it's not a slow device.

Cheers,
Rusty.


Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-07-03 Thread Dor Laor

On 07/03/2012 05:22 PM, Ronen Hod wrote:

On 06/18/2012 02:14 PM, Dor Laor wrote:

On 06/18/2012 01:05 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 16:03:23 +0800, Asias He  wrote:

On 06/18/2012 03:46 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 14:53:10 +0800, Asias He  wrote:

This patch introduces bio-based IO path for virtio-blk.


Why make it optional?


request-based IO path is useful for users who do not want to bypass the
IO scheduler in guest kernel, e.g. users using spinning disk. For users
using fast disk device, e.g. SSD device, they can use bio-based IO
path.


Users using a spinning disk still get IO scheduling in the host though.
What benefit is there in doing it in the guest as well?


The io scheduler waits for requests to merge and thus batch IOs
together. It's not important w.r.t spinning disks since the host can
do it but it causes much less vmexits which is the key issue for VMs.


Does it make sense to use the guest's I/O scheduler at all?


That's the reason we have a noop io scheduler.


- It is not aware of the physical (spinning) disk layout.
- It is not aware of all the host's disk pending requests.
It does have a good side-effect - batching of requests.

Ronen.





Cheers,
Rusty.


Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-07-03 Thread Ronen Hod

On 06/18/2012 02:14 PM, Dor Laor wrote:

On 06/18/2012 01:05 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 16:03:23 +0800, Asias He  wrote:

On 06/18/2012 03:46 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 14:53:10 +0800, Asias He  wrote:

This patch introduces bio-based IO path for virtio-blk.


Why make it optional?


request-based IO path is useful for users who do not want to bypass the
IO scheduler in guest kernel, e.g. users using spinning disk. For users
using fast disk device, e.g. SSD device, they can use bio-based IO path.


Users using a spinning disk still get IO scheduling in the host though.
What benefit is there in doing it in the guest as well?


The io scheduler waits for requests to merge and thus batch IOs together. It's 
not important w.r.t spinning disks since the host can do it but it causes much 
less vmexits which is the key issue for VMs.


Does it make sense to use the guest's I/O scheduler at all?
- It is not aware of the physical (spinning) disk layout.
- It is not aware of all the host's disk pending requests.
It does have a good side-effect - batching of requests.

Ronen.





Cheers,
Rusty.


Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-07-03 Thread Asias He

On 07/03/2012 09:31 PM, Paolo Bonzini wrote:

On 02/07/2012 08:41, Rusty Russell wrote:

With the same workload in guest, the guest fires 200K requests to host
with merges enabled in guest (echo 0 > /sys/block/vdb/queue/nomerges),
while the guest fires 4K requests to host with merges disabled in
guest (echo 2 > /sys/block/vdb/queue/nomerges). This show that the merge
in block layer reduces the total number of requests fire to host a lot
(4K / 200K = 20).



4 / 200 is 200, not 20. :)


Crap, I got the zeros wrong there. Actually, it is 4000K. So the factor
is still 4000K / 200K = 20 ;-)


--
Asias




Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-07-03 Thread Paolo Bonzini
On 02/07/2012 08:41, Rusty Russell wrote:
>> With the same workload in guest, the guest fires 200K requests to host 
>> with merges enabled in guest (echo 0 > /sys/block/vdb/queue/nomerges), 
>> while the guest fires 4K requests to host with merges disabled in 
>> guest (echo 2 > /sys/block/vdb/queue/nomerges). This show that the merge 
>> in block layer reduces the total number of requests fire to host a lot 
>> (4K / 200K = 20).
>> 

4 / 200 is 200, not 20. :)

Paolo



Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-07-02 Thread Asias He

On 07/02/2012 02:41 PM, Rusty Russell wrote:

On Mon, 02 Jul 2012 10:45:05 +0800, Asias He  wrote:

On 07/02/2012 07:54 AM, Rusty Russell wrote:

Confused.  So, without merging we get 6k exits (per second?)  How many
do we get when we use the request-based IO path?


Sorry for the confusion. The numbers were collected from request-based
IO path where we can have the guest block layer merge the requests.

With the same workload in guest, the guest fires 200K requests to host
with merges enabled in guest (echo 0 > /sys/block/vdb/queue/nomerges),
while the guest fires 4K requests to host with merges disabled in
guest (echo 2 > /sys/block/vdb/queue/nomerges). This show that the merge
in block layer reduces the total number of requests fire to host a lot
(4K / 200K = 20).

The guest fires 200K requests to host with merges enabled in guest (echo
0 > /sys/block/vdb/queue/nomerges), the host fires 6K interrupts in
total for the 200K requests. This show that the ratio of interrupts
coalesced (200K / 6K = 33).


OK, got it!  Guest merging cuts requests by a factor of 20.  EVENT_IDX
cuts interrupts by a factor of 33.


Yes.




If your device is slow, then you won't be able to make many requests per
second: why worry about exit costs?


If a device is slow, the merge would merge more requests and reduce the
total number of requests to host. This saves exit costs, no?


Sure, our guest merging might save us 100x as many exits as no merging.
But since we're not doing many requests, does it matter?


We can still have many requests with slow devices. The number of
requests depends on the workload in the guest, e.g. 512 IO threads in
the guest continuously issuing IO.



Ideally we'd merge requests only if the device queue is full.  But that
sounds hard.


If your device is fast (eg. ram),
you've already shown that your patch is a win, right?


Yes. Both on ramdisk and fast SSD device (e.g. FusionIO).


Thanks,
Rusty.




--
Asias




Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-07-02 Thread Rusty Russell
On Mon, 02 Jul 2012 10:45:05 +0800, Asias He  wrote:
> On 07/02/2012 07:54 AM, Rusty Russell wrote:
> > Confused.  So, without merging we get 6k exits (per second?)  How many
> > do we get when we use the request-based IO path?
> 
> Sorry for the confusion. The numbers were collected from request-based 
> IO path where we can have the guest block layer merge the requests.
> 
> With the same workload in guest, the guest fires 200K requests to host 
> with merges enabled in guest (echo 0 > /sys/block/vdb/queue/nomerges), 
> while the guest fires 4K requests to host with merges disabled in 
> guest (echo 2 > /sys/block/vdb/queue/nomerges). This show that the merge 
> in block layer reduces the total number of requests fire to host a lot 
> (4K / 200K = 20).
> 
> The guest fires 200K requests to host with merges enabled in guest (echo 
> 0 > /sys/block/vdb/queue/nomerges), the host fires 6K interrupts in 
> total for the 200K requests. This show that the ratio of interrupts 
> coalesced (200K / 6K = 33).

OK, got it!  Guest merging cuts requests by a factor of 20.  EVENT_IDX
cuts interrupts by a factor of 33.

> > If your device is slow, then you won't be able to make many requests per
> > second: why worry about exit costs?
> 
> If a device is slow, the merge would merge more requests and reduce the 
> total number of requests to host. This saves exit costs, no?

Sure, our guest merging might save us 100x as many exits as no merging.
But since we're not doing many requests, does it matter?

Ideally we'd merge requests only if the device queue is full.  But that
sounds hard.

> > If your device is fast (eg. ram),
> > you've already shown that your patch is a win, right?
> 
> Yes. Both on ramdisk and fast SSD device (e.g. FusionIO).

Thanks,
Rusty.


Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-07-01 Thread Asias He

On 07/02/2012 07:54 AM, Rusty Russell wrote:

On Tue, 19 Jun 2012 10:51:18 +0800, Asias He  wrote:

On 06/18/2012 07:39 PM, Sasha Levin wrote:

On Mon, 2012-06-18 at 14:14 +0300, Dor Laor wrote:

On 06/18/2012 01:05 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 16:03:23 +0800, Asias He  wrote:

On 06/18/2012 03:46 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 14:53:10 +0800, Asias He  wrote:

This patch introduces bio-based IO path for virtio-blk.


Why make it optional?


request-based IO path is useful for users who do not want to bypass the
IO scheduler in guest kernel, e.g. users using spinning disk. For users
using fast disk device, e.g. SSD device, they can use bio-based IO path.


Users using a spinning disk still get IO scheduling in the host though.
What benefit is there in doing it in the guest as well?


The io scheduler waits for requests to merge and thus batch IOs
together. It's not important w.r.t spinning disks since the host can do
it but it causes much less vmexits which is the key issue for VMs.


Is the amount of exits caused by virtio-blk significant at all with
EVENT_IDX?


Yes. EVENT_IDX saves the number of notify and interrupt. Let's take the
interrupt as an example, The guest fires 200K request to host, the
number of interrupt is about 6K thanks to EVENT_IDX. The ratio is 200K /
6K = 33. The ratio of merging is 4K / 200K = 20.


Confused.  So, without merging we get 6k exits (per second?)  How many
do we get when we use the request-based IO path?


Sorry for the confusion. The numbers were collected from request-based 
IO path where we can have the guest block layer merge the requests.


With the same workload in the guest, the guest fires 200K requests to the
host with merges enabled in the guest (echo 0 > /sys/block/vdb/queue/nomerges),
while the guest fires 4K requests to the host with merges disabled in the
guest (echo 2 > /sys/block/vdb/queue/nomerges). This shows that merging in
the block layer greatly reduces the total number of requests fired to the
host (4K / 200K = 20).

The guest fires 200K requests to the host with merges enabled in the guest
(echo 0 > /sys/block/vdb/queue/nomerges), and the host fires 6K interrupts
in total for those 200K requests. This shows the interrupt coalescing ratio
(200K / 6K = 33).
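For readers unfamiliar with EVENT_IDX: the suppression decision on both the
notify and the interrupt side comes down to the vring_need_event() helper in
the virtio ring code, roughly as below (shown for context only, not part of
this patch):

/* Context: the EVENT_IDX check used for guest->host notifications and
 * host->guest interrupts alike.  The other side is only poked when the new
 * index has passed the event index it published, which is what coalesces
 * the 200K requests above into roughly 6K interrupts. */
static inline int vring_need_event(__u16 event_idx, __u16 new_idx, __u16 old)
{
        /* Unsigned wrap-around arithmetic is intentional here. */
        return (__u16)(new_idx - event_idx - 1) < (__u16)(new_idx - old);
}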




If your device is slow, then you won't be able to make many requests per
second: why worry about exit costs?


If a device is slow, the merge would merge more requests and reduce the 
total number of requests to host. This saves exit costs, no?



If your device is fast (eg. ram),
you've already shown that your patch is a win, right?


Yes. Both on ramdisk and fast SSD device (e.g. FusionIO).

--
Asias




Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-07-01 Thread Rusty Russell
On Tue, 19 Jun 2012 10:51:18 +0800, Asias He  wrote:
> On 06/18/2012 07:39 PM, Sasha Levin wrote:
> > On Mon, 2012-06-18 at 14:14 +0300, Dor Laor wrote:
> >> On 06/18/2012 01:05 PM, Rusty Russell wrote:
> >>> On Mon, 18 Jun 2012 16:03:23 +0800, Asias He  wrote:
>  On 06/18/2012 03:46 PM, Rusty Russell wrote:
> > On Mon, 18 Jun 2012 14:53:10 +0800, Asias He  wrote:
> >> This patch introduces bio-based IO path for virtio-blk.
> >
> > Why make it optional?
> 
>  request-based IO path is useful for users who do not want to bypass the
>  IO scheduler in guest kernel, e.g. users using spinning disk. For users
>  using fast disk device, e.g. SSD device, they can use bio-based IO path.
> >>>
> >>> Users using a spinning disk still get IO scheduling in the host though.
> >>> What benefit is there in doing it in the guest as well?
> >>
> >> The io scheduler waits for requests to merge and thus batch IOs
> >> together. It's not important w.r.t spinning disks since the host can do
> >> it but it causes much less vmexits which is the key issue for VMs.
> >
> > Is the amount of exits caused by virtio-blk significant at all with
> > EVENT_IDX?
> 
> Yes. EVENT_IDX saves the number of notify and interrupt. Let's take the 
> interrupt as an example, The guest fires 200K request to host, the 
> number of interrupt is about 6K thanks to EVENT_IDX. The ratio is 200K / 
> 6K = 33. The ratio of merging is 4K / 200K = 20.

Confused.  So, without merging we get 6k exits (per second?)  How many
do we get when we use the request-based IO path?

If your device is slow, then you won't be able to make many requests per
second: why worry about exit costs?  If your device is fast (eg. ram),
you've already shown that your patch is a win, right?

Cheers,
Rusty.


Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-06-21 Thread Dor Laor

On 06/20/2012 07:46 AM, Asias He wrote:

On 06/19/2012 02:21 PM, Dor Laor wrote:

On 06/19/2012 05:51 AM, Asias He wrote:

On 06/18/2012 07:39 PM, Sasha Levin wrote:

On Mon, 2012-06-18 at 14:14 +0300, Dor Laor wrote:

On 06/18/2012 01:05 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 16:03:23 +0800, Asias He wrote:

On 06/18/2012 03:46 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 14:53:10 +0800, Asias He
wrote:

This patch introduces bio-based IO path for virtio-blk.


Why make it optional?


request-based IO path is useful for users who do not want to bypass
the
IO scheduler in guest kernel, e.g. users using spinning disk. For
users
using fast disk device, e.g. SSD device, they can use bio-based IO
path.


Users using a spinning disk still get IO scheduling in the host
though.
What benefit is there in doing it in the guest as well?


The io scheduler waits for requests to merge and thus batch IOs
together. It's not important w.r.t spinning disks since the host
can do
it but it causes much less vmexits which is the key issue for VMs.


Is the amount of exits caused by virtio-blk significant at all with
EVENT_IDX?


Yes. EVENT_IDX saves the number of notify and interrupt. Let's take the
interrupt as an example, The guest fires 200K request to host, the
number of interrupt is about 6K thanks to EVENT_IDX. The ratio is 200K /
6K = 33. The ratio of merging is 4K / 200K = 20.



In this case, why don't you always recommend bio over request based?


This case shows that IO scheduler's merging in guest saves a lot of
requests to host side. Why should I recommend bio over request based here?



Does it merge requests by a factor of 20 _on top_ of what event idx does? Of
course, if that's the case, we should keep that.


Dor



Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-06-19 Thread Asias He

On 06/19/2012 02:21 PM, Dor Laor wrote:

On 06/19/2012 05:51 AM, Asias He wrote:

On 06/18/2012 07:39 PM, Sasha Levin wrote:

On Mon, 2012-06-18 at 14:14 +0300, Dor Laor wrote:

On 06/18/2012 01:05 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 16:03:23 +0800, Asias He wrote:

On 06/18/2012 03:46 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 14:53:10 +0800, Asias He
wrote:

This patch introduces bio-based IO path for virtio-blk.


Why make it optional?


request-based IO path is useful for users who do not want to bypass
the
IO scheduler in guest kernel, e.g. users using spinning disk. For
users
using fast disk device, e.g. SSD device, they can use bio-based IO
path.


Users using a spinning disk still get IO scheduling in the host
though.
What benefit is there in doing it in the guest as well?


The io scheduler waits for requests to merge and thus batch IOs
together. It's not important w.r.t spinning disks since the host can do
it but it causes much less vmexits which is the key issue for VMs.


Is the amount of exits caused by virtio-blk significant at all with
EVENT_IDX?


Yes. EVENT_IDX saves the number of notify and interrupt. Let's take the
interrupt as an example, The guest fires 200K request to host, the
number of interrupt is about 6K thanks to EVENT_IDX. The ratio is 200K /
6K = 33. The ratio of merging is 4K / 200K = 20.



In this case, why don't you always recommend bio over request based?


This case shows that the IO scheduler's merging in the guest saves a lot of
requests to the host side. Why should I recommend bio over request based here?


--
Asias




Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-06-18 Thread Dor Laor

On 06/19/2012 05:51 AM, Asias He wrote:

On 06/18/2012 07:39 PM, Sasha Levin wrote:

On Mon, 2012-06-18 at 14:14 +0300, Dor Laor wrote:

On 06/18/2012 01:05 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 16:03:23 +0800, Asias He wrote:

On 06/18/2012 03:46 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 14:53:10 +0800, Asias He wrote:

This patch introduces bio-based IO path for virtio-blk.


Why make it optional?


request-based IO path is useful for users who do not want to bypass
the
IO scheduler in guest kernel, e.g. users using spinning disk. For
users
using fast disk device, e.g. SSD device, they can use bio-based IO
path.


Users using a spinning disk still get IO scheduling in the host though.
What benefit is there in doing it in the guest as well?


The io scheduler waits for requests to merge and thus batch IOs
together. It's not important w.r.t spinning disks since the host can do
it but it causes much less vmexits which is the key issue for VMs.


Is the amount of exits caused by virtio-blk significant at all with
EVENT_IDX?


Yes. EVENT_IDX saves the number of notify and interrupt. Let's take the
interrupt as an example, The guest fires 200K request to host, the
number of interrupt is about 6K thanks to EVENT_IDX. The ratio is 200K /
6K = 33. The ratio of merging is 4K / 200K = 20.



In this case, why don't you always recommend bio over request based?


Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-06-18 Thread Asias He

On 06/18/2012 07:39 PM, Sasha Levin wrote:

On Mon, 2012-06-18 at 14:14 +0300, Dor Laor wrote:

On 06/18/2012 01:05 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 16:03:23 +0800, Asias He  wrote:

On 06/18/2012 03:46 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 14:53:10 +0800, Asias He  wrote:

This patch introduces bio-based IO path for virtio-blk.


Why make it optional?


request-based IO path is useful for users who do not want to bypass the
IO scheduler in guest kernel, e.g. users using spinning disk. For users
using fast disk device, e.g. SSD device, they can use bio-based IO path.


Users using a spinning disk still get IO scheduling in the host though.
What benefit is there in doing it in the guest as well?


The io scheduler waits for requests to merge and thus batch IOs
together. It's not important w.r.t spinning disks since the host can do
it but it causes much less vmexits which is the key issue for VMs.


Is the amount of exits caused by virtio-blk significant at all with
EVENT_IDX?


Yes. EVENT_IDX reduces the number of notifications and interrupts. Take the
interrupt side as an example: the guest fires 200K requests to the host, and
the number of interrupts is only about 6K thanks to EVENT_IDX. The ratio is
200K / 6K = 33. The ratio of merging is 4K / 200K = 20.


--
Asias




Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-06-18 Thread Asias He

On 06/18/2012 06:05 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 16:03:23 +0800, Asias He  wrote:

On 06/18/2012 03:46 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 14:53:10 +0800, Asias He  wrote:

This patch introduces bio-based IO path for virtio-blk.


Why make it optional?


request-based IO path is useful for users who do not want to bypass the
IO scheduler in guest kernel, e.g. users using spinning disk. For users
using fast disk device, e.g. SSD device, they can use bio-based IO path.


Users using a spinning disk still get IO scheduling in the host though.
What benefit is there in doing it in the guest as well?


Merging in the guest kernel's IO scheduler can reduce the number of
requests the guest fires to the host side. For instance, with the same
workload in the guest, the number of requests drops from ~4000K to ~200K
with guest kernel merging (measured with qemu).




Cheers,
Rusty.




--
Asias




Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-06-18 Thread Asias He

On 06/18/2012 06:21 PM, Michael S. Tsirkin wrote:

On Mon, Jun 18, 2012 at 02:53:10PM +0800, Asias He wrote:

+static void virtblk_make_request(struct request_queue *q, struct bio *bio)
+{
+   struct virtio_blk *vblk = q->queuedata;
+   unsigned int num, out = 0, in = 0;
+   struct virtblk_req *vbr;
+
+   BUG_ON(bio->bi_phys_segments + 2 > vblk->sg_elems);
+   BUG_ON(bio->bi_rw & (REQ_FLUSH | REQ_FUA));
+
+   vbr = virtblk_alloc_req(vblk, GFP_NOIO);
+   if (!vbr) {
+   bio_endio(bio, -ENOMEM);
+   return;
+   }
+
+   vbr->bio = bio;
+   vbr->req = NULL;
+   vbr->out_hdr.type = 0;
+   vbr->out_hdr.sector = bio->bi_sector;
+   vbr->out_hdr.ioprio = bio_prio(bio);
+
+   sg_set_buf(&vbr->sg[out++], &vbr->out_hdr, sizeof(vbr->out_hdr));
+
+   num = blk_bio_map_sg(q, bio, vbr->sg + out);
+
+   sg_set_buf(&vbr->sg[num + out + in++], &vbr->status,
+  sizeof(vbr->status));
+
+   if (num) {
+   if (bio->bi_rw & REQ_WRITE) {
+   vbr->out_hdr.type |= VIRTIO_BLK_T_OUT;
+   out += num;
+   } else {
+   vbr->out_hdr.type |= VIRTIO_BLK_T_IN;
+   in += num;
+   }
+   }
+
+   spin_lock_irq(vblk->disk->queue->queue_lock);
+   if (virtqueue_add_buf(vblk->vq, vbr->sg, out, in, vbr,
+ GFP_ATOMIC) < 0) {
+   spin_unlock_irq(vblk->disk->queue->queue_lock);


Any implications of dropping lock like that?
E.g. for suspend. like we are still discussing with
unlocked kick?


+   virtblk_add_buf_wait(vblk, vbr, out, in);
+   } else {
+   virtqueue_kick(vblk->vq);


Why special case the first call? task state manipulation so expensive?


Hmm. Will switch them.




+   spin_unlock_irq(vblk->disk->queue->queue_lock);
+   }
+}
+





--
Asias




Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-06-18 Thread Asias He

On 06/18/2012 06:13 PM, Michael S. Tsirkin wrote:

On Mon, Jun 18, 2012 at 04:03:23PM +0800, Asias He wrote:

On 06/18/2012 03:46 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 14:53:10 +0800, Asias He  wrote:

This patch introduces bio-based IO path for virtio-blk.


Why make it optional?


request-based IO path is useful for users who do not want to bypass
the IO scheduler in guest kernel, e.g. users using spinning disk.
For users using fast disk device, e.g. SSD device, they can use
bio-based IO path.


OK I guess but then it should be per-device. There could be
a mix of slow and fast disks :)


Yes, per-device might be useful. There are issues which need solving:
- How do we tell the driver which IO path to use?
  - The device could add some flag
    - But old qemu/lkvm can not turn this feature on
  - Through a /sys filesystem attribute (a rough sketch follows below)
- How do we handle the switch from one path to another?

So, let's add the per-device feature later.
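As a rough illustration of the /sys attribute option mentioned above (purely
hypothetical; this is not part of the posted patch, and both the attribute
name and the per-device use_bio field are made up), a read-only attribute
could look like:

/* Hypothetical sketch: expose which IO path a virtio-blk device uses as a
 * read-only sysfs attribute.  Assumes a per-device use_bio flag, which the
 * posted patch does not have (it uses a global module parameter). */
static ssize_t virtblk_io_path_show(struct device *dev,
                                    struct device_attribute *attr, char *buf)
{
        struct gendisk *disk = dev_to_disk(dev);
        struct virtio_blk *vblk = disk->private_data;

        return sprintf(buf, "%s\n", vblk->use_bio ? "bio" : "request");
}
static DEVICE_ATTR(io_path, S_IRUGO, virtblk_io_path_show, NULL);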

--
Asias




Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-06-18 Thread Tejun Heo
Hello, guys.

On Mon, Jun 18, 2012 at 07:35:22PM +0930, Rusty Russell wrote:
> On Mon, 18 Jun 2012 16:03:23 +0800, Asias He  wrote:
> > On 06/18/2012 03:46 PM, Rusty Russell wrote:
> > > On Mon, 18 Jun 2012 14:53:10 +0800, Asias He  wrote:
> > >> This patch introduces bio-based IO path for virtio-blk.
> > >
> > > Why make it optional?
> > 
> > request-based IO path is useful for users who do not want to bypass the 
> > IO scheduler in guest kernel, e.g. users using spinning disk. For users 
> > using fast disk device, e.g. SSD device, they can use bio-based IO path.
> 
> Users using a spinning disk still get IO scheduling in the host though.
> What benefit is there in doing it in the guest as well?

Another thing is that some cgroup features are implemented in the IO
scheduler (cfq).  E.g. a bio-based driver will be able to use cgroup-based
throttling, but IO weights won't work.  Not sure how meaningful this is,
though.

With that said, I think this sort of feature switching is quite ugly.
The pros and cons of each choice aren't obvious unless one is familiar
with implementation details.  IMHO, if the benefits of ioscheds aren't
critical, it would be better to just go with a bio-based implementation.

Thanks.

-- 
tejun


Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-06-18 Thread Sasha Levin
On Mon, 2012-06-18 at 14:14 +0300, Dor Laor wrote:
> On 06/18/2012 01:05 PM, Rusty Russell wrote:
> > On Mon, 18 Jun 2012 16:03:23 +0800, Asias He  wrote:
> >> On 06/18/2012 03:46 PM, Rusty Russell wrote:
> >>> On Mon, 18 Jun 2012 14:53:10 +0800, Asias He  wrote:
>  This patch introduces bio-based IO path for virtio-blk.
> >>>
> >>> Why make it optional?
> >>
> >> request-based IO path is useful for users who do not want to bypass the
> >> IO scheduler in guest kernel, e.g. users using spinning disk. For users
> >> using fast disk device, e.g. SSD device, they can use bio-based IO path.
> >
> > Users using a spinning disk still get IO scheduling in the host though.
> > What benefit is there in doing it in the guest as well?
> 
> The io scheduler waits for requests to merge and thus batch IOs 
> together. It's not important w.r.t spinning disks since the host can do 
> it but it causes much less vmexits which is the key issue for VMs. 

Is the amount of exits caused by virtio-blk significant at all with
EVENT_IDX?



Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-06-18 Thread Dor Laor

On 06/18/2012 01:05 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 16:03:23 +0800, Asias He  wrote:

On 06/18/2012 03:46 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 14:53:10 +0800, Asias He  wrote:

This patch introduces bio-based IO path for virtio-blk.


Why make it optional?


request-based IO path is useful for users who do not want to bypass the
IO scheduler in guest kernel, e.g. users using spinning disk. For users
using fast disk device, e.g. SSD device, they can use bio-based IO path.


Users using a spinning disk still get IO scheduling in the host though.
What benefit is there in doing it in the guest as well?


The io scheduler waits for requests to merge and thus batches IOs
together. It's not important w.r.t. spinning disks, since the host can do
it, but it causes far fewer vmexits, which is the key issue for VMs.
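To make the batching point concrete, this is the sort of thing the
request-based path gets from the block layer: a submitter can plug the queue
so adjacent bios are merged before they reach virtio-blk. A generic sketch
using the block-layer plugging API (illustration only, not code from this
patch):

#include <linux/blkdev.h>
#include <linux/bio.h>

/* Sketch: submit a set of already-prepared bios under a plug so the guest
 * block layer can merge adjacent ones into fewer, larger requests before
 * they reach the virtio queue, and therefore cause fewer vmexits. */
static void submit_batched(struct bio **bios, int nr, int rw)
{
        struct blk_plug plug;
        int i;

        blk_start_plug(&plug);
        for (i = 0; i < nr; i++)
                submit_bio(rw, bios[i]);
        blk_finish_plug(&plug);         /* flush the plugged requests */
}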




Cheers,
Rusty.



Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-06-18 Thread Stefan Hajnoczi
On Mon, Jun 18, 2012 at 11:21 AM, Michael S. Tsirkin  wrote:
> On Mon, Jun 18, 2012 at 02:53:10PM +0800, Asias He wrote:
>> +static void virtblk_make_request(struct request_queue *q, struct bio *bio)
>> +{
>> +     struct virtio_blk *vblk = q->queuedata;
>> +     unsigned int num, out = 0, in = 0;
>> +     struct virtblk_req *vbr;
>> +
>> +     BUG_ON(bio->bi_phys_segments + 2 > vblk->sg_elems);
>> +     BUG_ON(bio->bi_rw & (REQ_FLUSH | REQ_FUA));
>> +
>> +     vbr = virtblk_alloc_req(vblk, GFP_NOIO);
>> +     if (!vbr) {
>> +             bio_endio(bio, -ENOMEM);
>> +             return;
>> +     }
>> +
>> +     vbr->bio = bio;
>> +     vbr->req = NULL;
>> +     vbr->out_hdr.type = 0;
>> +     vbr->out_hdr.sector = bio->bi_sector;
>> +     vbr->out_hdr.ioprio = bio_prio(bio);
>> +
>> +     sg_set_buf(&vbr->sg[out++], &vbr->out_hdr, sizeof(vbr->out_hdr));
>> +
>> +     num = blk_bio_map_sg(q, bio, vbr->sg + out);
>> +
>> +     sg_set_buf(&vbr->sg[num + out + in++], &vbr->status,
>> +                sizeof(vbr->status));
>> +
>> +     if (num) {
>> +             if (bio->bi_rw & REQ_WRITE) {
>> +                     vbr->out_hdr.type |= VIRTIO_BLK_T_OUT;
>> +                     out += num;
>> +             } else {
>> +                     vbr->out_hdr.type |= VIRTIO_BLK_T_IN;
>> +                     in += num;
>> +             }
>> +     }
>> +
>> +     spin_lock_irq(vblk->disk->queue->queue_lock);
>> +     if (virtqueue_add_buf(vblk->vq, vbr->sg, out, in, vbr,
>> +                           GFP_ATOMIC) < 0) {
>> +             spin_unlock_irq(vblk->disk->queue->queue_lock);
>
> Any implications of dropping lock like that?
> E.g. for suspend. like we are still discussing with
> unlocked kick?

Since we acquired the lock in this function, there should be no problem.
Whatever protects against vblk or vblk->disk disappearing upon
entering this function also protects after unlocking queue_lock.
Otherwise all .make_request_fn() functions would be broken.

I'd still like to understand the details though.

Stefan


Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-06-18 Thread Michael S. Tsirkin
On Mon, Jun 18, 2012 at 02:53:10PM +0800, Asias He wrote:
> +static void virtblk_make_request(struct request_queue *q, struct bio *bio)
> +{
> + struct virtio_blk *vblk = q->queuedata;
> + unsigned int num, out = 0, in = 0;
> + struct virtblk_req *vbr;
> +
> + BUG_ON(bio->bi_phys_segments + 2 > vblk->sg_elems);
> + BUG_ON(bio->bi_rw & (REQ_FLUSH | REQ_FUA));
> +
> + vbr = virtblk_alloc_req(vblk, GFP_NOIO);
> + if (!vbr) {
> + bio_endio(bio, -ENOMEM);
> + return;
> + }
> +
> + vbr->bio = bio;
> + vbr->req = NULL;
> + vbr->out_hdr.type = 0;
> + vbr->out_hdr.sector = bio->bi_sector;
> + vbr->out_hdr.ioprio = bio_prio(bio);
> +
> + sg_set_buf(&vbr->sg[out++], &vbr->out_hdr, sizeof(vbr->out_hdr));
> +
> + num = blk_bio_map_sg(q, bio, vbr->sg + out);
> +
> + sg_set_buf(&vbr->sg[num + out + in++], &vbr->status,
> +sizeof(vbr->status));
> +
> + if (num) {
> + if (bio->bi_rw & REQ_WRITE) {
> + vbr->out_hdr.type |= VIRTIO_BLK_T_OUT;
> + out += num;
> + } else {
> + vbr->out_hdr.type |= VIRTIO_BLK_T_IN;
> + in += num;
> + }
> + }
> +
> + spin_lock_irq(vblk->disk->queue->queue_lock);
> + if (virtqueue_add_buf(vblk->vq, vbr->sg, out, in, vbr,
> +   GFP_ATOMIC) < 0) {
> + spin_unlock_irq(vblk->disk->queue->queue_lock);

Any implications of dropping the lock like that?
E.g. for suspend, like we are still discussing with
unlocked kick?

> + virtblk_add_buf_wait(vblk, vbr, out, in);
> + } else {
> + virtqueue_kick(vblk->vq);

Why special case the first call? task state manipulation so expensive?

> + spin_unlock_irq(vblk->disk->queue->queue_lock);
> + }
> +}
> +


Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-06-18 Thread Rusty Russell
On Mon, 18 Jun 2012 16:03:23 +0800, Asias He  wrote:
> On 06/18/2012 03:46 PM, Rusty Russell wrote:
> > On Mon, 18 Jun 2012 14:53:10 +0800, Asias He  wrote:
> >> This patch introduces bio-based IO path for virtio-blk.
> >
> > Why make it optional?
> 
> request-based IO path is useful for users who do not want to bypass the 
> IO scheduler in guest kernel, e.g. users using spinning disk. For users 
> using fast disk device, e.g. SSD device, they can use bio-based IO path.

Users using a spinning disk still get IO scheduling in the host though.
What benefit is there in doing it in the guest as well?

Cheers,
Rusty.


Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-06-18 Thread Michael S. Tsirkin
On Mon, Jun 18, 2012 at 04:03:23PM +0800, Asias He wrote:
> On 06/18/2012 03:46 PM, Rusty Russell wrote:
> >On Mon, 18 Jun 2012 14:53:10 +0800, Asias He  wrote:
> >>This patch introduces bio-based IO path for virtio-blk.
> >
> >Why make it optional?
> 
> request-based IO path is useful for users who do not want to bypass
> the IO scheduler in guest kernel, e.g. users using spinning disk.
> For users using fast disk device, e.g. SSD device, they can use
> bio-based IO path.

OK I guess but then it should be per-device. There could be
a mix of slow and fast disks :)

> -- 
> Asias
> 


Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-06-18 Thread Stefan Hajnoczi
On Mon, Jun 18, 2012 at 7:53 AM, Asias He  wrote:
> +static void virtblk_add_buf_wait(struct virtio_blk *vblk,
> +                                struct virtblk_req *vbr,
> +                                unsigned long out,
> +                                unsigned long in)
> +{
> +       DEFINE_WAIT(wait);
> +
> +       for (;;) {
> +               prepare_to_wait_exclusive(&vblk->queue_wait, &wait,
> +                                         TASK_UNINTERRUPTIBLE);
> +
> +               spin_lock_irq(vblk->disk->queue->queue_lock);
> +               if (virtqueue_add_buf(vblk->vq, vbr->sg, out, in, vbr,
> +                                     GFP_ATOMIC) < 0) {
> +                       spin_unlock_irq(vblk->disk->queue->queue_lock);
> +                       io_schedule();
> +               } else {
> +                       virtqueue_kick(vblk->vq);
> +                       spin_unlock_irq(vblk->disk->queue->queue_lock);
> +                       break;
> +               }
> +
> +       }

Is there a meaningful condition where we would cancel this request
(e.g. signal)?  If the vring fills up for some reason and we get stuck
here the user might wish to kill hung processes.
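One possible shape for such cancellation, purely as an illustration (this is
not what the posted patch does, and the choice of error code to complete the
bio with is a guess):

/* Hypothetical sketch of a killable variant of virtblk_add_buf_wait().
 * Returns 0 once the buffer is queued, or a negative error if a fatal
 * signal arrives; the caller would then complete the bio with that error.
 * Not part of the posted patch. */
static int virtblk_add_buf_killable(struct virtio_blk *vblk,
                                    struct virtblk_req *vbr,
                                    unsigned long out, unsigned long in)
{
        DEFINE_WAIT(wait);
        int ret = 0;

        for (;;) {
                prepare_to_wait_exclusive(&vblk->queue_wait, &wait,
                                          TASK_KILLABLE);

                spin_lock_irq(vblk->disk->queue->queue_lock);
                if (virtqueue_add_buf(vblk->vq, vbr->sg, out, in, vbr,
                                      GFP_ATOMIC) >= 0) {
                        virtqueue_kick(vblk->vq);
                        spin_unlock_irq(vblk->disk->queue->queue_lock);
                        break;
                }
                spin_unlock_irq(vblk->disk->queue->queue_lock);

                if (fatal_signal_pending(current)) {
                        ret = -EINTR;
                        break;
                }
                io_schedule();
        }
        finish_wait(&vblk->queue_wait, &wait);
        return ret;
}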

Stefan


Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-06-18 Thread Asias He

On 06/18/2012 03:46 PM, Rusty Russell wrote:

On Mon, 18 Jun 2012 14:53:10 +0800, Asias He  wrote:

This patch introduces bio-based IO path for virtio-blk.


Why make it optional?


The request-based IO path is useful for users who do not want to bypass the
IO scheduler in the guest kernel, e.g. users with spinning disks. Users with
fast disk devices, e.g. SSDs, can use the bio-based IO path.


--
Asias




Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-06-18 Thread Rusty Russell
On Mon, 18 Jun 2012 14:53:10 +0800, Asias He  wrote:
> This patch introduces bio-based IO path for virtio-blk.

Why make it optional?

Thanks,
Rusty.


[PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk

2012-06-17 Thread Asias He
This patch introduces bio-based IO path for virtio-blk.

Compared to the request-based IO path, the bio-based IO path uses the
driver-provided ->make_request_fn() method to bypass the IO scheduler. It
hands bios to the device directly without allocating a request in the block
layer. This shortens the IO path in the guest kernel and achieves higher
IOPS and lower latency. The downside is that the guest cannot use the IO
scheduler to merge and sort requests. However, this is not a big problem if
the backing disk on the host side is a fast device.
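In block-layer terms, the difference between the two paths is essentially the
queue setup the driver performs. A simplified sketch in ~3.x block-layer terms
(error handling trimmed; the actual patch wires this up through the use_bio
switch below, so treat this as an illustration of the mechanism rather than
the patch itself):

/* Sketch: how the two IO paths differ at queue setup time. */
static struct request_queue *virtblk_alloc_queue(struct virtio_blk *vblk)
{
        struct request_queue *q;

        if (use_bio) {
                /* bio-based: ->make_request_fn() sees bios directly,
                 * bypassing the elevator and request allocation. */
                q = blk_alloc_queue(GFP_KERNEL);
                if (q)
                        blk_queue_make_request(q, virtblk_make_request);
        } else {
                /* request-based: bios are merged and sorted into struct
                 * request by the elevator before the driver's request
                 * function runs. */
                q = blk_init_queue(do_virtblk_request, &vblk->lock);
        }
        return q;
}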

When the bio-based IO path is not enabled, virtio-blk still uses the
original request-based IO path, and no performance difference is observed.

Performance evaluation:
-
The fio test is performed in an 8-vcpu guest with a ramdisk-based disk,
using kvm tool.

Short version:
 With the bio-based IO path (seq-read, seq-write, rand-read, rand-write):
  IOPS boost : 28%, 24%, 21%, 16%
  Latency improvement: 32%, 17%, 21%, 16%

Long version:
 With bio-based IO path:
  seq-read  : io=2048.0MB, bw=116996KB/s, iops=233991 , runt= 17925msec
  seq-write : io=2048.0MB, bw=100829KB/s, iops=201658 , runt= 20799msec
  rand-read : io=3095.7MB, bw=112134KB/s, iops=224268 , runt= 28269msec
  rand-write: io=3095.7MB, bw=96198KB/s,  iops=192396 , runt= 32952msec
clat (usec): min=0 , max=2631.6K, avg=58716.99, stdev=191377.30
clat (usec): min=0 , max=1753.2K, avg=66423.25, stdev=81774.35
clat (usec): min=0 , max=2915.5K, avg=61685.70, stdev=120598.39
clat (usec): min=0 , max=1933.4K, avg=76935.12, stdev=96603.45
  cpu : usr=74.08%, sys=703.84%, ctx=29661403, majf=21354, minf=22460954
  cpu : usr=70.92%, sys=702.81%, ctx=77219828, majf=13980, minf=27713137
  cpu : usr=72.23%, sys=695.37%, ctx=88081059, majf=18475, minf=28177648
  cpu : usr=69.69%, sys=654.13%, ctx=145476035, majf=15867, minf=26176375
 With request-based IO path:
  seq-read  : io=2048.0MB, bw=91074KB/s, iops=182147 , runt= 23027msec
  seq-write : io=2048.0MB, bw=80725KB/s, iops=161449 , runt= 25979msec
  rand-read : io=3095.7MB, bw=92106KB/s, iops=184211 , runt= 34416msec
  rand-write: io=3095.7MB, bw=82815KB/s, iops=165630 , runt= 38277msec
clat (usec): min=0 , max=1932.4K, avg=77824.17, stdev=170339.49
clat (usec): min=0 , max=2510.2K, avg=78023.96, stdev=146949.15
clat (usec): min=0 , max=3037.2K, avg=74746.53, stdev=128498.27
clat (usec): min=0 , max=1363.4K, avg=89830.75, stdev=114279.68
  cpu : usr=53.28%, sys=724.19%, ctx=37988895, majf=17531, minf=23577622
  cpu : usr=49.03%, sys=633.20%, ctx=205935380, majf=18197, minf=27288959
  cpu : usr=55.78%, sys=722.40%, ctx=101525058, majf=19273, minf=28067082
  cpu : usr=56.55%, sys=690.83%, ctx=228205022, majf=18039, minf=26551985

How to use:
-
Add 'virtio_blk.use_bio=1' to kernel cmdline or 'modprobe virtio_blk
use_bio=1' to enable ->make_request_fn() based I/O path.

Cc: Rusty Russell 
Cc: "Michael S. Tsirkin" 
Cc: virtualizat...@lists.linux-foundation.org
Cc: kvm@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Christoph Hellwig 
Signed-off-by: Minchan Kim 
Signed-off-by: Asias He 
---
 drivers/block/virtio_blk.c |  203 +++-
 1 file changed, 163 insertions(+), 40 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 774c31d..b2d5002 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -14,6 +14,9 @@
 
 #define PART_BITS 4
 
+static bool use_bio;
+module_param(use_bio, bool, S_IRUGO);
+
 static int major;
 static DEFINE_IDA(vd_index_ida);
 
@@ -23,6 +26,7 @@ struct virtio_blk
 {
struct virtio_device *vdev;
struct virtqueue *vq;
+   wait_queue_head_t queue_wait;
 
/* The disk structure for the kernel. */
struct gendisk *disk;
@@ -51,53 +55,87 @@ struct virtio_blk
 struct virtblk_req
 {
struct request *req;
+   struct bio *bio;
struct virtio_blk_outhdr out_hdr;
struct virtio_scsi_inhdr in_hdr;
u8 status;
+   struct scatterlist sg[];
 };
 
-static void blk_done(struct virtqueue *vq)
+static inline int virtblk_result(struct virtblk_req *vbr)
+{
+   switch (vbr->status) {
+   case VIRTIO_BLK_S_OK:
+   return 0;
+   case VIRTIO_BLK_S_UNSUPP:
+   return -ENOTTY;
+   default:
+   return -EIO;
+   }
+}
+
+static inline void virtblk_request_done(struct virtio_blk *vblk,
+   struct virtblk_req *vbr)
+{
+   struct request *req = vbr->req;
+   int error = virtblk_result(vbr);
+
+   if (req->cmd_type == REQ_TYPE_BLOCK_PC) {
+   req->resid_len = vbr->in_hdr.residual;
+   req->sense_len = vbr->in_hdr.sense_len;
+   req->errors = vbr->in_hdr.errors;
+   } else if (req->cmd_type == REQ_TYPE_SPECIAL) {
+   req->errors = (error != 0);
+   }
+
+   __blk_end_request_all(req, error);
+