On Tue, Jan 08, 2008 at 09:42:13AM -0600, Anthony Liguori wrote:
> Instead of allocating a node for each page, you could use page->private
page->lru is probably better for this so splice still works
etc... (the struct page isn't visible to the guest VM so it's free to
use)
Dor Laor wrote:
> On Wed, 2008-01-09 at 08:29 -0600, Anthony Liguori wrote:
> > Dor Laor wrote:
> > > Now that we have a host timer based tx wakeup it waits for 64
> > > packets or timeout before processing them.
> > > This might cause the guest to run out of tx buffers while the host
> > > holds them up.
> There's a prop
On Wed, 2008-01-09 at 08:29 -0600, Anthony Liguori wrote:
> Dor Laor wrote:
> > Now that we have a host timer based tx wakeup it waits for 64
> > packets or timeout before processing them.
> > This might cause the guest to run out of tx buffers while the host
> > holds them up.
> >
>
> There's
Glauber de Oliveira Costa wrote:
> That said, if acpi is really the preference here, and these patches
> have a chance, no problem. But it will take me a little more time to
> implement them ;-)
The power button support that was recently added at least proves that
the host->guest notification path works
Dor Laor wrote:
Now that we have a host timer based tx wakeup it waits for 64
packets or timeout before processing them.
This might cause the guest to run out of tx buffers while the host
holds them up.
There's a proper fix that Rusty added last night.
This is a temporary solution to quickly bring back performance to 800mbps.
Christian Borntraeger wrote:
On Wednesday, 9 January 2008, Glauber de Oliveira Costa wrote:
> I'm sending a first draft of my proposed cpu hotplug driver for kvm/virtio
> The first patch is the kernel module, while the second, the userspace pci
> device.
I personally prefer to use non paravirtualized cpu hotplug and implement
Avi Kivity wrote:
Glauber de Oliveira Costa wrote:
> I'm sending a first draft of my proposed cpu hotplug driver for
> kvm/virtio
> The first patch is the kernel module, while the second, the userspace
> pci device.
> The host boots with the maximum cpus it should ever use, through the
> -smp parameter.
Now that we have a host timer based tx wakeup it waits for 64
packets or timeout before processing them.
This might cause the guest to run out of tx buffers while the host
holds them up.
This is a temporary solution to quickly bring back performance to 800mbps.
But a better fix will soon be sent (i
On Wednesday, 9 January 2008, Glauber de Oliveira Costa wrote:
> I'm sending a first draft of my proposed cpu hotplug driver for kvm/virtio
> The first patch is the kernel module, while the second, the userspace pci
> device.
I personally prefer to use non paravirtualized cpu hotplug and implement
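The non-paravirtualized route Christian refers to means the guest needs no special driver: a hot-added CPU appears through the standard Linux CPU hotplug interface and is brought online via sysfs, exactly as on real hardware. For illustration (an administration fragment; these paths exist only on a hotplug-capable kernel):

```shell
# Standard Linux CPU hotplug interface (guest side).
# List the CPUs the kernel knows about:
ls /sys/devices/system/cpu/

# Bring cpu1 online, then take it offline again:
echo 1 > /sys/devices/system/cpu/cpu1/online
echo 0 > /sys/devices/system/cpu/cpu1/online
```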
On Wed, Jan 09, 2008 at 11:06:21AM +0100, Andrea Arcangeli wrote:
> On Tue, Jan 08, 2008 at 09:42:13AM -0600, Anthony Liguori wrote:
> > Instead of allocating a node for each page, you could use page->private
>
> page->lru is probably better for this so splice still works
> > etc... (the struct page isn't visible to the guest VM so it's free to
> > use)
Glauber de Oliveira Costa wrote:
> I'm sending a first draft of my proposed cpu hotplug driver for kvm/virtio
> The first patch is the kernel module, while the second, the userspace pci
> device.
> The host boots with the maximum cpus it should ever use, through the -smp
> parameter.
> Due to real machine
Signed-off-by: Glauber de Oliveira Costa <[EMAIL PROTECTED]>
---
 qemu/Makefile.target     |   2 +-
 qemu/hw/pc.c             |   4 +-
 qemu/hw/pc.h             |   3 +
 qemu/hw/virtio-hotplug.c | 111 ++
 qemu/monitor.c           |  11 +
 qemu/qe
Signed-off-by: Glauber de Oliveira Costa <[EMAIL PROTECTED]>
---
 drivers/virtio/Kconfig      |   6 +
 drivers/virtio/Makefile     |   1 +
 drivers/virtio/virtio_cpu.c | 226 +++
 drivers/virtio/virtio_pci.c |   1 +
 kernel/cpu.c                |   2 +
 5
I'm sending a first draft of my proposed cpu hotplug driver for kvm/virtio
The first patch is the kernel module, while the second, the userspace pci
device.
The host boots with the maximum cpus it should ever use, through the -smp
parameter.
Due to real machine constraints (which qemu copies), i
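Glauber's boot-with-maximum-CPUs scheme maps to the qemu command line roughly as below. This is a hypothetical invocation for illustration only; the exact binary name, flag spellings, and defaults depend on the kvm userspace of that era:

```shell
# Start the guest with the maximum CPU count it may ever use; CPUs
# beyond the initially active ones are enabled later via hotplug.
# Illustrative invocation, not taken from the posted patches.
qemu-system-x86_64 -smp 4 -m 512 -hda guest.img
```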
14 matches