Non-blocking disk IO now works for any type of disk image, not just
raw format. There is no longer any format-specific code in the patch:
http://people.brandeis.edu/~jcoiner/qemu_idedma/qemu_dma_patch.html
You might want this patch if:
* you run a multitasking guest OS,
* you access a disk
I'd argue that it should be -tap or -tuntap instead of -tun, since using
tun would confuse users who knew the distinction between tun devices and tap
devices.
OK for the -tuntap long option. Can I propose -t for a short option?
I'm not sure if a -vde option is necessary or a good idea, though.
On Mon, Oct 03, 2005 at 11:46:57AM +0200, Jean-Christian de Rivaz wrote:
OK for the -tuntap long option. Can I propose -t for a short option?
Makes sense.
The idea of the -vde option is to take the VDE socket as a parameter
(defaulting to /tmp/vde.ctl) and act like vde_plug, so it doesn't need any
On Mon, 3 Oct 2005, Jean-Christian de Rivaz wrote:
The idea of the -vde option is to take the VDE socket as a parameter (defaulting
to /tmp/vde.ctl) and act like vde_plug, so it doesn't need any other code to
work. Just start a vde_switch and as many qemu -vde instances as you want to create a
complete virtual
On Sun, 2 Oct 2005, Jim C. Brown wrote:
So while we're at it, we should redesign the interface for qemu. For each nic,
we'd have -net type[,macaddr], where type is tap or user or dummy, and
macaddr is an (optional) parameter that replaces -macaddr. The number of nics
would depend on the number of
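In concrete terms, the proposed syntax might look something like this (a sketch of the proposal only; neither the exact option names nor their defaults were settled in this thread):

```shell
# hypothetical invocations under the proposed -net type[,macaddr] scheme
qemu -net tap,macaddr=52:54:00:12:34:56 hda.img   # tap backend, explicit MAC
qemu -net user hda.img                            # slirp user networking
qemu -net dummy hda.img                           # nic with no backend attached
```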
Jim C. Brown wrote:
The idea of the -vde option is to take the VDE socket as a parameter
(defaulting to /tmp/vde.ctl) and act like vde_plug, so it doesn't need any
other code to work. Just start a vde_switch and as many qemu -vde instances
as you want to create a complete virtual network (1 switch and n hosts).
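The workflow being proposed would then look roughly like this (a sketch only: the -vde option did not exist yet, and the vde_switch socket option shown may differ between vde versions):

```shell
# one switch, n hosts -- the -vde flag is the proposal under discussion
vde_switch -sock /tmp/vde.ctl &
qemu -vde /tmp/vde.ctl guest1.img &
qemu -vde /tmp/vde.ctl guest2.img &
```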
On Sun, 2 Oct 2005, Jean-Christian de Rivaz wrote:
It's already the case with at least my proposed patch. I haven't tested the
patches written by Henrik Nordstrom or Lars Munch, but it's likely that they
work the same way, since this feature comes from the Linux kernel tun code.
Indeed. It is
Magnus,
I don't think the Windows 2000 install hack will ever be obsolete.
The installer assumes that a hard disk will take nonzero time to read
some data. QEMU always services a read in zero-guest-time. (With the
nonblocking IO patch, zero-guest-time reads still occur, when the
requested
I managed to make it work (qemu + non-blocking IO on a Windows host),
with a rough estimate of a 10% speed increase at the early stage of
Windows setup. I expect more once Windows installs itself in true
multitasking mode. :)
What you need to do is:
- download pthreads-w32-2-6-0-release.tar.gz and
Henrik Nordstrom wrote:
On Mon, 3 Oct 2005, Jean-Christian de Rivaz wrote:
The idea of the -vde option is to take the VDE socket as a parameter
(defaulting to /tmp/vde.ctl) and act like vde_plug, so it doesn't need any
other code to work. Just start a vde_switch and as many qemu -vde instances
as you want to
Henrik Nordstrom wrote:
On Sun, 2 Oct 2005, Jean-Christian de Rivaz wrote:
It's already the case with at least my proposed patch. I haven't tested
the patches written by Henrik Nordstrom or Lars Munch, but it's likely
that they work the same way, since this feature comes from the Linux
kernel
On Mon, Oct 03 2005, John Coiner wrote:
Non-blocking disk IO now works for any type of disk image, not just
raw format. There is no longer any format-specific code in the patch:
http://people.brandeis.edu/~jcoiner/qemu_idedma/qemu_dma_patch.html
You might want this patch if:
* you run
On Mon, Oct 03, 2005 at 03:07:02PM +0200, Henrik Nordstrom wrote:
On Sun, 2 Oct 2005, Jim C. Brown wrote:
Only objection is that for the tunfd case I would use
-net tap,fd=10,macaddr=...
Since it doesn't have to be a tap device, how about this?
-net socket,fd=10,macaddr=...
For
On Mon, Oct 03, 2005 at 03:01:08PM +0200, Henrik Nordstrom wrote:
On Sun, 2 Oct 2005, Jean-Christian de Rivaz wrote:
In fact, if qemu supported both these things, then I don't see a reason
for -tun-fd at all (except for something like VDE).
Agreed, and a -vde option will go forward in this
On Mon, Oct 03, 2005 at 02:54:42PM +0200, Henrik Nordstrom wrote:
On Sun, 2 Oct 2005, Jim C. Brown wrote:
What it really boils down to is cleaning up the command line options for
the network interface(s), which up to now have been added in a hackish,
piece-wise manner.
And persistent
On Mon, 3 Oct 2005, octane indice wrote:
And what about a full IP connection between hosts? In order to
simulate a real network to do nfs/smtp/http/smb and so on?
This is done with a TUN/TAP device, or via a VDE switch using TUN/TAP to
connect to the host.
Regards
Henrik
Hi,
Sorry for the lack of comment. I mostly use the 'user-net' networking so
I never bothered much about TUN/TAP.
What I can say is that the '-net xxx' option will be implemented to
solve the existing issues. My only concern is about ensuring backward
compatibility (if no one needs it then
Thank you for the patch. I'll look at it ASAP. If there is no regression
(in terms of bugs and performance) I will include it soon: I consider
that non-blocking I/O is now a critical feature in QEMU in order to get
better performance.
Fabrice.
John Coiner wrote:
Non-blocking disk IO now
Christian MICHON wrote:
To do so, does that mean we would need to launch a first qemu
instance which would contain the DHCP server, and subsequent qemu
instances would connect to it? If so, 'qemu -server' and
'qemu -client -connect_to server' could be useful...
As I understand and with what I
Hi!
What worked with a 4.11 guest I couldn't get to work with a
6.0beta4 guest: running qemu inside qemu. Without kqemu loaded in
both the host and the guest, qemu doesn't properly start up in the
guest; top in the guest shows the CPU spending most of its time
servicing interrupts (but none
On Mon, Oct 03, 2005 at 08:29:13PM +0200, Fabrice Bellard wrote:
Hi,
Sorry for the lack of comment. I mostly use the 'user-net' networking so
I never bothered much about TUN/TAP.
What I can say is that the '-net xxx' option will be implemented to
solve the existing issues. My only
On Mon, Oct 03, 2005 at 05:55:58PM +0200, octane indice wrote:
And what about a full IP connection between hosts? In order to
simulate a real network to do nfs/smtp/http/smb and so on?
I was thinking of a sort of net-server which handles the DHCP
process and the connection for going outside
Jens Axboe wrote:
Why not use aio for this instead, seems like a better fit than spawning
a thread per block device? That would still require a thread for
handling completions, but you could easily just use a single completion
thread for all devices for this as it would not need to do any
On Mon, Oct 03, 2005 at 08:41:46AM -0400, John Coiner wrote:
Magnus,
I don't think the Windows 2000 install hack will ever be obsolete.
The installer assumes that a hard disk will take nonzero time to read
some data. QEMU always services a read in zero-guest-time. (With the
Troy Benjegerdes wrote:
I am also having trouble getting a fresh win2k install under qemu to actually
be able to run windows update.
I had to download and install Win2k SP4, then Win2k SP4 Hotfixes, and
also an IE6 upgrade, before windows update ran.
-john
On Mon, 3 Oct 2005, Troy Benjegerdes wrote:
Why is there a posix async IO 'standard', but a different linux AIO standard?
Because the Posix AIO sucks quite badly both to implement and use.
But all Linuxes I know of have a Posix AIO library. Some even have a Posix
AIO library using Linux
Why is there a posix async IO 'standard', but a different linux AIO standard?
Linux has a Posix AIO library that sits on top of linux's native AIO:
http://www.bullopensource.org/posix/
It's pretty new, version 0.6. We might not want qemu to depend on it,
because most distros probably
In reply to Jim C. Brown [EMAIL PROTECTED]:
On Mon, Oct 03, 2005 at 05:55:58PM +0200, octane indice wrote:
And what about a full IP connection between hosts? In order
to simulate a real network to do nfs/smtp/http/smb and so on?
I was thinking of a sort of a net-server which handles
If you plan on having Posix AIO support anyway, then go for the Posix AIO
interface. The performance reasons why Linux AIO exists are very unlikely
to be an issue for qemu, as you need to be quite I/O-intensive to see any
performance difference.
Ideally, we should be able to use a Posix AIO