Anthony Liguori wrote:
> Gerd Hoffmann wrote:
Hi,
> I really want to use readv/writev though. With virtio, we get a
> scatter/gather list for each IO request.
Yep, I've also missed pwritev (or whatever that syscall would be named).
> Once I post the virtio-blk driver, I'll follow up a little …
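At the time of this thread no such syscall existed; preadv/pwritev were only merged later (Linux 2.6.30, glibc 2.10). A minimal sketch of the positioned scatter/gather write being missed here, on a current system (the helper and file name are hypothetical, not QEMU code):

```c
#define _GNU_SOURCE
#include <sys/uio.h>
#include <fcntl.h>
#include <unistd.h>

/* Write a two-element scatter/gather list at an explicit offset in a
 * single call -- no lseek, no copying into a bounce buffer.
 * Returns the byte count written, or -1 on error. */
static ssize_t sg_write_at(const char *path, off_t offset)
{
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return -1;
    struct iovec iov[2] = {
        { .iov_base = (void *)"scatter/", .iov_len = 8 },
        { .iov_base = (void *)"gather",   .iov_len = 6 },
    };
    ssize_t n = pwritev(fd, iov, 2, offset);  /* positioned vectored write */
    close(fd);
    return n;
}
```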
Anthony Liguori wrote:
>> IMHO it would be a much better idea to kill the aio interface altogether
>> and instead make the block drivers reentrant. Then you can use
>> (multiple) posix threads to run the I/O async if you want.
>
> Threads are a poor substitute for a proper AIO interface. linux-a…
On Tuesday, 4 December 2007 at 13:49 +0100, Gerd Hoffmann wrote:
> Anthony Liguori wrote:
> > I have a patch that uses linux-aio for the virtio-blk driver I'll be
> > posting tomorrow and I'm extremely happy with the results. In recent
> > kernels, you can use an eventfd interface along with linux-aio so that
> > polling is unnecessary.
Anthony Liguori wrote:
> I have a patch that uses linux-aio for the virtio-blk driver I'll be
> posting tomorrow and I'm extremely happy with the results. In recent
> kernels, you can use an eventfd interface along with linux-aio so that
> polling is unnecessary.
Which kernel version is "recent"?
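"Recent" here appears to mean Linux 2.6.22 or newer: that release added eventfd(2) and the IOCB_FLAG_RESFD flag that ties a native-aio completion to an eventfd. A sketch of the mechanism being described (not the actual patch; the wrappers and file name are illustrative, and real QEMU code would hand the eventfd to its select() loop rather than block on read()):

```c
#define _GNU_SOURCE
#include <linux/aio_abi.h>
#include <sys/eventfd.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <fcntl.h>
#include <string.h>
#include <stdint.h>

/* Thin wrappers: glibc has no bindings for the native aio syscalls. */
static long io_setup(unsigned nr, aio_context_t *ctx)
{ return syscall(__NR_io_setup, nr, ctx); }
static long io_submit(aio_context_t ctx, long n, struct iocb **iocbs)
{ return syscall(__NR_io_submit, ctx, n, iocbs); }
static long io_getevents(aio_context_t ctx, long min, long max,
                         struct io_event *ev, void *timeout)
{ return syscall(__NR_io_getevents, ctx, min, max, ev, timeout); }

/* Submit one write and sleep on an eventfd until it completes, so the
 * caller never has to poll io_getevents.  Returns bytes written. */
static long aio_eventfd_write(const char *path)
{
    static char buf[512] = "hello from linux-aio";
    int efd = eventfd(0, 0);
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    aio_context_t ctx = 0;
    if (efd < 0 || fd < 0 || io_setup(8, &ctx) < 0)
        return -1;

    struct iocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes     = fd;
    cb.aio_lio_opcode = IOCB_CMD_PWRITE;
    cb.aio_buf        = (uint64_t)(uintptr_t)buf;
    cb.aio_nbytes     = sizeof(buf);
    cb.aio_offset     = 0;
    cb.aio_flags      = IOCB_FLAG_RESFD;  /* notify through the eventfd */
    cb.aio_resfd      = efd;

    struct iocb *list[1] = { &cb };
    if (io_submit(ctx, 1, list) != 1)
        return -1;

    uint64_t completions;
    read(efd, &completions, sizeof(completions)); /* blocks until done */

    struct io_event ev;
    io_getevents(ctx, 1, 1, &ev, NULL);           /* reap the result */
    close(fd);
    close(efd);
    return (long)ev.res;
}
```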
On Monday, 3 December 2007 at 19:16, Paul Brook wrote:
> > Yes, librt is providing posix-aio, and librt coming with GNU libc uses
> > threads.
> > But if I remember correctly librt coming with RHEL uses a mix of threads
> > and linux kernel AIO (you can have a look to the .srpm of libc).
> >
Gerd Hoffmann wrote:
Hi,
> BTW, if everyone thinks it could be a good idea I can port block-raw.c
> to use linux kernel AIO (without removing POSIX AIO support, of course)
IMHO it would be a much better idea to kill the aio interface altogether
and instead make the block drivers reentrant. Then you can use
(multiple) posix threads to run the I/O async if you want.
Paul Brook wrote:
Yes, librt is providing posix-aio, and librt coming with GNU libc uses
threads.
But if I remember correctly librt coming with RHEL uses a mix of threads
and linux kernel AIO (you can have a look to the .srpm of libc).
BTW, if everyone thinks it could be a good idea I can port block-raw.c …
Gerd Hoffmann, on Mon, 03 Dec 2007 22:13:07 +0100, wrote:
> > BTW, if everyone thinks it could be a good idea I can port block-raw.c
> > to use linux kernel AIO (without removing POSIX AIO support, of course)
>
> IMHO it would be a much better idea to kill the aio interface altogether
> and instead make the block drivers reentrant. …
Paul Brook, on Mon, 03 Dec 2007 15:39:48, wrote:
> I think host caching is still useful enough to be enabled by default, and
> provides a significant performance increase in several cases.
>
> - The guest typically has a relatively small quantity of RAM, compared to a
> modern machine.
> Yes, librt is providing posix-aio, and librt coming with GNU libc uses
> threads.
> But if I remember correctly librt coming with RHEL uses a mix of threads
> and linux kernel AIO (you can have a look to the .srpm of libc).
>
> BTW, if everyone thinks it could be a good idea I can port block-raw.c …
> Well, let's separate a few things. QEMU uses posix-aio which uses
> threads and normal read/write operations. It also limits the number of
> threads that aio uses to 1 which effectively makes everything
> synchronous anyway.
This is a bug. Allegedly this is to work around an old broken glibc, s…
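The thread-pool limit being discussed is a glibc tunable. A sketch of lifting it with the (glibc-specific) aio_init() extension before issuing the first request — the demo function, pool sizes, and file name are hypothetical, not what QEMU actually does:

```c
#define _GNU_SOURCE
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Raise the glibc POSIX-AIO thread pool above 1 (the limit called a bug
 * above) and issue one asynchronous write.  Returns bytes written. */
static ssize_t posix_aio_write_demo(const char *path)
{
    struct aioinit ai;
    memset(&ai, 0, sizeof(ai));
    ai.aio_threads = 16;  /* >1, so requests can actually run in parallel */
    ai.aio_num = 64;      /* expected number of simultaneous requests */
    aio_init(&ai);        /* glibc extension; call before any aio_* request */

    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return -1;
    static char buf[] = "async write";
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf) - 1;  /* 11 bytes, without the NUL */
    cb.aio_offset = 0;
    if (aio_write(&cb) < 0)
        return -1;

    const struct aiocb *list[1] = { &cb };
    while (aio_error(&cb) == EINPROGRESS)
        aio_suspend(list, 1, NULL);   /* wait without spinning */

    ssize_t n = aio_return(&cb);
    close(fd);
    return n;
}
```

On older glibc this needs `-lrt` at link time; since glibc 2.34 the AIO routines live in libc itself.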
On Monday, 3 December 2007 at 12:06 -0600, Anthony Liguori wrote:
> Samuel Thibault wrote:
> > Anthony Liguori, on Mon, 03 Dec 2007 09:54:47 -0600, wrote:
> >
> >> Have you done any performance testing? Buffered IO should absolutely
> >> beat direct IO simply because buffered IO allows writes to complete
> >> before they actually hit disk.
On Monday, 3 December 2007 at 09:54 -0600, Anthony Liguori wrote:
> Laurent Vivier wrote:
> > On Monday, 3 December 2007 at 11:23 +0100, Fabrice Bellard wrote:
> >
> >> Laurent Vivier wrote:
> >>
> >>> This patch enhances the "-drive ,cache=off" mode with IDE drive emulation
> >>> by removing the buffer used in the IDE emulation. …
Anthony Liguori wrote:
> >With the IDE emulation, when the emulated "disk write cache" flag is
> >on it may be reasonable to report a write as completed when the AIO is
> >dispatched, without waiting for the AIO to complete.
> >
> >An IDE flush cache command would wait for all outstanding write AIO…
Jamie Lokier wrote:
> Paul Brook wrote:
> > On Monday 03 December 2007, Samuel Thibault wrote:
> > > Anthony Liguori, on Mon, 03 Dec 2007 09:54:47 -0600, wrote:
> > > > Have you done any performance testing? Buffered IO should absolutely
> > > > beat direct IO simply because buffered IO allows writes to…
On Monday 03 December 2007, Samuel Thibault wrote:
> Anthony Liguori, on Mon, 03 Dec 2007 09:54:47 -0600, wrote:
> > Have you done any performance testing? Buffered IO should absolutely
> > beat direct IO simply because buffered IO allows writes to complete
> > before they actually hit disk.
> …
Anthony Liguori, on Mon, 03 Dec 2007 09:54:47 -0600, wrote:
> Have you done any performance testing? Buffered IO should absolutely
> beat direct IO simply because buffered IO allows writes to complete
> before they actually hit disk.
Since qemu can use the aio interface, that shouldn't matter
Laurent Vivier wrote:
On Monday, 3 December 2007 at 11:23 +0100, Fabrice Bellard wrote:
> Laurent Vivier wrote:
> > This patch enhances the "-drive ,cache=off" mode with IDE drive emulation
> > by removing the buffer used in the IDE emulation.
> > ---
> >  block.c | 10 +++
> >  block.h |  2
> >  bl…
On 03.12.2007 at 11:30, Laurent Vivier wrote:
> But if you think I should remove the buffered case, I can.
In doubt, less code is always better. For the unlikely case you broke
something badly, there's always the option to take back the patch.
BTW, do you think I should enable "cache=off" …
Hi,
On Mon, 3 Dec 2007, Fabrice Bellard wrote:
> Laurent Vivier wrote:
> > This patch enhances the "-drive ,cache=off" mode with IDE drive emulation
> > by removing the buffer used in the IDE emulation.
> > ---
> >  block.c     | 10 +++
> >  block.h     |  2
> >  block_int.h |  1
> >  cpu-all.h   | …
On Monday, 3 December 2007 at 11:23 +0100, Fabrice Bellard wrote:
> Laurent Vivier wrote:
> > This patch enhances the "-drive ,cache=off" mode with IDE drive emulation
> > by removing the buffer used in the IDE emulation.
> > ---
> >  block.c     | 10 +++
> >  block.h     |  2
> >  block_int.…
Laurent Vivier wrote:
This patch enhances the "-drive ,cache=off" mode with IDE drive emulation
by removing the buffer used in the IDE emulation.
---
 block.c     |  10 +++
 block.h     |   2
 block_int.h |   1
 cpu-all.h   |   1
 exec.c      |  19 ++
 hw/ide.c    | 176 …
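On Linux, the cache=off mode this patch builds on amounts to opening the image with O_DIRECT, which bypasses the host page cache but requires the buffer, offset, and length to be aligned; that alignment constraint is why the IDE bounce buffer existed in the first place. A hedged sketch (file name hypothetical; filesystems like tmpfs reject O_DIRECT, which the fallback handles):

```c
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Open an image the cache=off way and write one aligned block.
 * O_DIRECT demands alignment to the device's logical block size;
 * 4096 bytes satisfies both 512-byte and 4K-sector devices.
 * Returns bytes written, or -1 on error. */
static ssize_t direct_write_block(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT | O_DIRECT, 0600);
    if (fd < 0 && errno == EINVAL)        /* e.g. tmpfs rejects O_DIRECT */
        fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return -1;

    void *buf;
    if (posix_memalign(&buf, 4096, 4096)) {  /* aligned buffer required */
        close(fd);
        return -1;
    }
    memset(buf, 0, 4096);
    memcpy(buf, "uncached", 8);
    ssize_t n = pwrite(fd, buf, 4096, 0);    /* aligned size and offset */
    free(buf);
    close(fd);
    return n;
}
```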