Jeremy Fitzhardinge wrote:
> On 09/09/09 16:34, Anthony Liguori wrote:
>
>> We haven't even been successful in getting the Xen folks to present
>> their work on lkml before shipping it to their users. Why would we
>> expect more from VMware if we'
Christoph Hellwig wrote:
> On Wed, Sep 09, 2009 at 05:12:26PM -0500, Anthony Liguori wrote:
>
>> Alok Kataria wrote:
>>
>>> I see your point, but the ring logic or the ABI that we use to
>>> communicate between the hypervisor and guest is not shared
certainly
discuss and would be valid concerns. That said, I don't think it's a
huge change to your current patch and I don't see any obvious problems
it would cause.
Regards,
Anthony Liguori
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linux-foundation.org/mailman/listinfo/virtualization
u32 msgProdIdx;
> + u32 msgConsIdx;
> + u32 msgNumEntriesLog2;
> +} __packed PVSCSIRingsState;
All of this can be hidden behind a struct virtqueue. You could then
introduce a virtio-vmwring that implemented this ABI.
You could then separate out the actual sc
available as normal
> kernel interfaces.
It may be possible to make vmdq appear like an sr-iov capable device
from userspace. sr-iov provides the userspace interfaces to allocate
interfaces and assign mac addresses. To make it useful, you
ification to add a new device in QEMU. If we add a
>> new device every time we encounter a less than ideal interface within a
>> guest, we're going to end up having hundreds of devices.
>>
>
> I just find this argument funny.
>
I'm finding this disc
Amit Shah wrote:
> On (Mon) Aug 31 2009 [09:21:13], Anthony Liguori wrote:
>
>> Amit Shah wrote:
>>
>>> Can you please explain your rationale for being so rigid about merging
>>> the two drivers?
>>>
>>>
>> Because
drivers because of
peculiarities of hvc then hvc needs to be fixed. It has nothing to do
with the driver ABI which is what qemu cares about.
Regards,
Anthony Liguori
> Amit
>
___
> handled with a spinlock held.
>
> Compare this with the entire write handled in one system call in the
> current method.
>
Does it matter? This isn't a fast path.
Regards,
Anthony Liguori
___
as to be a spinlock, just
> because writes can be called from irq context.
>
I don't see a problem here.
Regards,
Anthony Liguori
> Amit
>
___
There's a fixed number of possible
entries on the ring. Preallocate them up front and then you don't need
to sleep.
> A few solutions:
> - Keep things as they are, virtio_console.c remains as it is and
> virtio_serial.c gets added
>
Not an option from a QEMU perspe
ying. If that's the case, we don't need
any of the slot management code in vhost.
Regards,
Anthony Liguori
___
a working implementation that demonstrates the
userspace interface is sufficient. Once it goes into the upstream
kernel, we need to have backwards compatibility code in QEMU forever to
support that kernel version.
Regards,
Anthony Liguori
___
I avoided suggesting ring proxying because I didn't want to suggest that
merging should be contingent on it.
Regards,
Anthony Liguori
___
model for live migration IMHO.
I think some more thorough benchmarking would be good too. In particular,
netperf/iperf runs would be nice.
Regards,
Anthony Liguori
> Thanks very much,
>
>
___
that location.
>
> Doesn't sound like this is going to be backward compatible ...
>
> Also I still think passing a 'protocol' string for each port is a good
> idea, so you can stick that into a sysfs file for guests use
Amit Shah wrote:
> On (Mon) Aug 10 2009 [11:59:31], Anthony Liguori wrote:
>
>> However, as I've mentioned repeatedly, the reason I won't merge
>> virtio-serial is that it duplicates functionality with virtio-console.
>> If the two are converged, I'm
done between two guests. See
http://article.gmane.org/gmane.linux.kernel.virtualization/5423
Regards,
Anthony Liguori
___
be limited to real hardware.
We may one day use vhost as the basis of a driver domain. There's quite
a lot of interest in this for networking.
At any rate, I'd like to see performance results before we consider
trying to reuse virtio code.
Regards,
Anthony Liguori
_
>
Is it really that difficult? I think it would be better to just do that.
I wonder though if mmu notifiers can be used to make it transparent...
Regards,
Anthony Liguori
>> Regards,
>>
>> Anthony Liguori
>>
>>
___
isn't present in the
userspace virtio-net. I think this requires some thought.
>> In this case, it's two separate implementations of the same device. I
>> think it makes sense for them to be separate devices.
>>
>> Regards,
>>
>> Anthony Ligu
Michael S. Tsirkin wrote:
> On Mon, Aug 10, 2009 at 05:35:13PM -0500, Anthony Liguori wrote:
>
>> What I'm saying is that virtio-blk-pci, which is the qdev instantiation
>> of virtio-pci + virtio-blk, should be able to have a set of qdev
>> properties that is co
t in a distributed environment because you may
have folks using your code before you've gotten an official hand-out.
A better discovery mechanism is based on something that piggy backs on
another authority. For instance, reverse fully qualified domains work
well. uuid's tend t
Michael S. Tsirkin wrote:
> On Mon, Aug 10, 2009 at 03:33:59PM -0500, Anthony Liguori wrote:
>
>> There ought to be a way to layer qdev properties that achieves this goal
>> so that when you create a virtio-pci-block device, you have the ability
>> to turn off in
sted.
>
Any rough idea on performance? Better or worse than userspace?
Regards,
Anthony Liguori
___
ing in qemu like we have in the kernel so that you can easily
add a new ring backend type. At any rate, see if you can achieve the
same goal with qdev properties. If you could, you should be able to
hack something up easily to disable this for vhost wit
no idea what the ring implementation is that
it sits on top of.
Regards,
Anthony Liguori
___
Anthony Liguori wrote:
>
> There is nothing sane about vmchannel. It's just an attempt to bypass
> QEMU which is going to introduce all sorts of complexities wrt
> migration, guest compatibility, etc.
>
> However, as I've mentioned repeatedly, the reason I won'
are converged, I'm happy to merge it. I'm not opposed to
having more functionality.
I think it's the wrong solution for the use-case, and I always have, but
that's independent of my willingness to merge it.
Regards,
Anthony Liguori
___
kernel.
Adding new kernel drivers breaks support for enterprise Linux distros.
Adding a userspace daemon does not. Windows device drivers require
signing which is very difficult to do. There's a huge practical
advantage in not requiring guest drivers.
Regards,
Anthony Liguori
>
Gerd Hoffmann wrote:
> On 08/10/09 15:02, Anthony Liguori wrote:
>
>> I think you're missing my fundamental point. Don't use the kernel as the
>> guest interface.
>>
>> Introduce a userspace daemon that exposes a domain socket. Then we can
>> have
Don't use the kernel as
the guest interface.
Introduce a userspace daemon that exposes a domain socket. Then we can
have a proper protocol that uses reverse fqdns for identification.
We can do the backend over TCP/IP, usb, standard serial, etc.
Regards,
Anthony Liguori
__
domain sockets, sys v ipc, whatever floats your boat.
And, you can build this daemon today using the existing vmchannel over
TCP/IP. You could also make it support serial devices. We could also
introduce a custom usb device and use libusb. libusb is portable to
Windows and Linux.
So we g
Amit Shah wrote:
> On (Thu) Aug 06 2009 [08:29:40], Anthony Liguori wrote:
>
>> Amit Shah wrote:
>>
>>> Sure; but there's been no resistance from anyone from including the
>>> virtio-serial device driver so maybe we don't need to discuss that
Amit Shah wrote:
> Sure; but there's been no resistance from anyone from including the
> virtio-serial device driver so maybe we don't need to discuss that.
>
There certainly is from me. The userspace interface is not reasonable
for guest applications to use.
Regard
Jamie Lokier wrote:
> Anthony Liguori wrote:
>
>> Richard W.M. Jones wrote:
>> Have you considered using a usb serial device? Something attractive
>> about it is that a productid/vendorid can be specified which means that
>> you can use that as a method of enum
Richard W.M. Jones wrote:
> On Mon, Jul 27, 2009 at 06:44:28PM -0500, Anthony Liguori wrote:
>
>> It really suggests that you need _one_ vmchannel that's exposed to
>> userspace with a single userspace daemon that consumes it.
>>
>
> ... or a more flexi
Richard W.M. Jones wrote:
> On Tue, Jul 28, 2009 at 09:48:00AM -0500, Anthony Liguori wrote:
>
>> Dave Miller nacked that approach with a sledgehammer instead preferring
>> that we just use standard TCP/IP which is what led to the current
>> implementation using slir
argument for using a higher-level kernel interface, especially one that
doesn't meet the requirements of the interface.
Regards,
Anthony Liguori
___
ve a single daemon that serves vmchannel sessions, that
daemon can make it transparent whether the session is going over
/dev/ttyS0, a network device, /dev/hvc1, etc.
Regards,
Anthony Liguori
___
ns /dev/vmch3 directly, when you
switch users, how do you forcefully disconnect user foo from /dev/vmch3
so that user bar can start using it?
Regards,
Anthony Liguori
> Daniel
>
___
having multiple channels since your daemon
can multiplex.
Regards,
Anthony Liguori
___
or do you want me to respin the series
> with this minor fix?
>
It's already changed in staging.
Regards,
Anthony Liguori
___
Signed-off-by: Michael S. Tsirkin
This series introduces a warning (virtio_load decl/def does not match).
--
Regards,
Anthony Liguori
___
Avi Kivity wrote:
> On 06/15/2009 09:12 PM, Anthony Liguori wrote:
>>
>> 2) Whenever the default machine type changes in a guest-visible way,
>> introduce a new machine type
>
> s/whenever/qemu stable release/
>
>> - Use explicit versions in name: pc-v1,
addr=target.lun". I prefer the
later form but I think either would be acceptable.
2) Whenever the default machine type changes in a guest-visible way,
introduce a new machine type
- Use explicit versions in name: pc-v1, pc-v2 or use more descriptive
names pc-with-usb
- Easily transition
to be annoying and error-prone. Some
sanity could be added by using addressing prefixes like addr=pci:00:01.0
or addr=scsi:0.3 but I'll leave that up to whoever takes this on.
Regards,
Anthony Liguori
___
part of the
discussion.
Regards,
Anthony Liguori
___
using something more opaque like that is that it
simplifies things for management tools as they don't have to keep track
of "capabilities" that we're adding. Heck, you could even do:
pc-0034
Where "pc-%08x" % (capabilities) :-)
Regards,
Anthony Liguori
___
consistent and easier to implement. Basically, when adding a device to
its parent, you hand the parent the "addr" field and that lets you say
where you want to sit on the bus.
Regards,
Anthony Liguori
___
>> I'm merely advocating that we want to let QEMU make the decision.
>>
>
> The allocation code could be moved out into a library, and libvirt could
> link with it (ducks).
>
Why does libvirt want to do allocation?
Regards,
Anthony Liguori
___
'auto'
I'm not at all arguing against pci_addr. I'm arguing about how libvirt
should use it with respect to the "genesis" use-case where libvirt has
no specific reason to choose one PCI slot over another. In that case,
I'm merely advocating that we want
Mark McLoughlin wrote:
> On Mon, 2009-06-15 at 07:48 -0500, Anthony Liguori wrote:
>
>>> Eventually the
>>> default configuration becomes increasingly unusable and you need a new
>>> baseline. You must still be able to fall back to the old baseline for
>&
Avi Kivity wrote:
> On 06/15/2009 04:23 PM, Anthony Liguori wrote:
>
> How would qemu know which slots to optimize for?
>
> In practice, I don't see that as a real problem. We should (a) add an
> ioapic and four more pci links (b) recommend that slots be assigned in
Avi Kivity wrote:
> On 06/15/2009 04:20 PM, Anthony Liguori wrote:
>> It's not at all that simple. SCSI has a hierarchical address
>> mechanism with 0-7 targets but then potentially multiple LUNs per
>> target. Today, we always emulate a single LUN per target bu
Avi Kivity wrote:
> On 06/15/2009 03:52 PM, Anthony Liguori wrote:
>> Avi Kivity wrote:
>>> On 06/15/2009 03:41 PM, Michael S. Tsirkin wrote:
>>>> We should just tell the user which slots are open.
>>>> This might be tricky if the config is passed
config file world regardless
of whether that's a few months from now or a decade :-)
Regards,
Anthony Liguori
___
Avi Kivity wrote:
> On 06/15/2009 03:45 PM, Anthony Liguori wrote:
>>>> This last option makes sense to me: in a real world the user has
>>>> control over where he places the device on the bus, so why
>>>> not with qemu?
>>>
>>> Yes, the
does the user care?
Let QEMU allocate the PCI slot, then query it to see what slot it
assigned and remember that.
It's not a good idea to have management applications attempt to do PCI
slot allocation. For instance, one day we may decide to make virtio
device
and
>> out-of-tree patches.
>>
>
> Yup.
>
> I got bit-rotten patches for pci_addr=, and I can unrot them if they're
> wanted.
>
Yes, would be good to have patches on the list to discuss. In
principle, I have no objection to this.
Regards,
Anthony Liguori
t to prevent incompatibilities, you need to make everything
> new (potentially including bugfixes) non-default. Eventually the
> default configuration becomes increasingly unusable and you need a new
> baseline. You must still be able to fall back to the old baseline for
> olde
rive file=bar.img,controller=blah,index=1
>
> Drives do not have pci addresses.
Drives don't have indexes and buses either, but we specify them on the
-drive line. -drive is convenient syntax. It stops being convenient when
you force it to be two options.
Regards,
Anthony Liguori
___
It's also clear lack of stable PCI
> addresses hurts us now.
Is there opposition? I don't ever recall seeing a patch...
I think it's a perfectly fine idea.
Regards,
Anthony Liguori
___
to the symbolic name.
libvirt should really never worry about the machine config file for
normal things unless it needs to change what devices are exposed to a guest.
Regards,
Anthony Liguori
___
Mark McLoughlin wrote:
> On Fri, 2009-06-12 at 09:55 -0500, Anthony Liguori wrote:
>
>> Mark McLoughlin wrote:
>>
>>> On Wed, 2009-06-10 at 20:27 +0100, Jamie Lokier wrote:
>>>
>>> = Solution - Separate configuration from compat hints =
>
Mark McLoughlin wrote:
> On Fri, 2009-06-12 at 09:51 -0500, Anthony Liguori wrote:
>
>> Mark McLoughlin wrote:
>>
>>> On Wed, 2009-06-10 at 20:27 +0100, Jamie Lokier wrote:
>>>
>>>
>>>> Michael S. Tsirkin wrote:
>>>
e the savevm format than a config file
>
How is compat hints different from a device tree?
In my mind, that's what compat hints is. I don't see another sane way
to implement it.
Regards,
Anthony Liguori
___
nd add it, rather than just generate an entirely new
> config
>
What's the problem with parsing the device config and modifying it? Is
it just complexity?
If we provided a mechanism to simplify manipulating a device config,
would that eliminate the concern here?
Regards,
Anthony Liguori
___
te it was in was a mistake.
Paul, can you put together a TODO so that we know all of the things that
have regressed so we can get things back into shape?
Regards,
Anthony Liguori
___
Signed-off-by: Michael S. Tsirkin
>
Acked-by: Anthony Liguori
Regards,
Anthony Liguori
___
ur impression of how much work it would be to get this going on
top of upstream QEMU?
I'm willing to borrow a few cycles to help out here. I'd really like to
see this series go in via QEMU if possible.
Regards,
Anthony Liguori
> Michael S. Tsirkin (2):
> qemu-kvm: add MSI-X
; }
>
> and then find_vq as usual.
>
Is it possible to just delay the msix enablement until after the queues
have been finalized (IOW in virtio-pci.c:vp_finalize_features)?
Regards,
Anthony Liguori
___
number of vectors upfront.
>
> Signed-off-by: Michael S. Tsirkin
>
Do you have userspace patches for testing?
Regards,
Anthony Liguori
___
(similar to
hugetlbfs or ramfs).
Using PCI BARs implies static shared memory mappings. For a long
running VM, you're likely to want to support dynamic shared memory mappings.
Also exposing a simple signaling mechanism with this too would allow for
shared ring que
David Miller wrote:
> From: Anthony Liguori
> Date: Mon, 15 Dec 2008 17:01:14 -0600
>
>
>> No, TCP falls under the not simple category because it requires the
>> backend to have access to a TCP/IP stack.
>>
>
> I'm at a loss for words if you nee
Jeremy Fitzhardinge wrote:
> Anthony Liguori wrote:
>>
>> That seems unnecessarily complex.
>>
>
> Well, the simplest thing is to let the host TCP stack do TCP. Could
> you go into more detail about why you'd want to avoid that?
The KVM model is that a gue
David Miller wrote:
> From: Anthony Liguori
> Date: Mon, 15 Dec 2008 14:44:26 -0600
>
>
>> We want this communication mechanism to be simple and reliable as we
>> want to implement the backends drivers in the host userspace with
>> minimum mess.
>>
>
David Miller wrote:
> From: Anthony Liguori
> Date: Mon, 15 Dec 2008 09:02:23 -0600
>
>
>> There is already an AF_IUCV for s390.
>>
>
> This is a scarecrow and irrelevant to this discussion.
>
> And this is exactly why I asked that any arguments
t userspace. This may be an X graphics driver, a mouse
driver, copy/paste, remote shutdown, etc.
A socket seems like a natural choice. If that's wrong, then we can
explore other options (like a char device, virtual fs, etc.). This
shouldn't be confused with networking though and
e. Note, this is not a new concept. There is
already an AF_IUCV for s390. VMware is also developing an AF_VMCI
socket family.
Regards,
Anthony Liguori
___
Andi Kleen wrote:
> Anthony Liguori <[EMAIL PROTECTED]> writes:
>> What we would rather do in KVM, is have the VFs appear in the host as
>> standard network devices. We would then like to back our existing PV
>> driver to this VF directly bypassing the host networki
complexity, lets migration Just
Work, and should give the same level of performance.
Regards,
Anthony Liguori
> Rumor has it, there is some Xen code floating around to support this
> already, is that true?
>
> thanks,
>
> greg k-h
> --
> To unsubscribe from this list
the associated baggage of doing hardware passthrough.
So IMHO, having VFs be usable in the host is absolutely critical because
I think it's the only reasonable usage model.
Regards,
Anthony Liguori
___
extend the analogy and actually create controllable
permissions that could be used to control who can talk to who. You
could even create a synthetic filesystem in the guest that could mount
this namespace allowing very sophisticated enumeration/permission
control. Thi
Gleb Natapov wrote:
> On Tue, Oct 14, 2008 at 01:16:19PM -0500, Anthony Liguori wrote:
>
>> One thing that's been discussed is to use something that looked much
>>
> Where is has been discussed? Was it on a public mailing list with online
> archive?
>
Gleb Natapov wrote:
> On Tue, Oct 14, 2008 at 08:50:48AM -0500, Anthony Liguori wrote:
>
>> Gleb Natapov wrote:
>>
>>> On Mon, Oct 13, 2008 at 01:32:35PM -0500, Anthony Liguori wrote:
>>>
>>>
>>> netlink was designed to
not good enough because the host may disable certain features.
Perhaps the header size should be that of the longest element that has
been commonly negotiated?
So that's why this aggressive check is here. Not to necessarily cement
this into the ABI but as a way
Zachary Amsden wrote:
> On Wed, 2008-10-01 at 14:34 -0700, Anthony Liguori wrote:
>
>> Jeremy Fitzhardinge wrote:
>>
>>> Alok Kataria wrote:
>>>
>>> I guess, but the bulk of the uses of this stuff are going to be
>>> hypervisor-sp
interface
because the TSC frequency can change any time a guest is entered. It
really should be a shared memory area so that a guest doesn't have to
vmexit to read it (like it is with the Xen/KVM paravirt clock).
Regards,
Anthony Liguori
> In general, if a hypervisor is going t
Chris Wright wrote:
> * Anthony Liguori ([EMAIL PROTECTED]) wrote:
>
>> We've already gone down the road of trying to make standard paravirtual
>> interfaces (via virtio). No one was sufficiently interested in
>> collaborating. I don't see why othe
ure bits. Most of the
stuff that's interesting is stored in shared memory because a guest can
read that without taking a vmexit or via a hypercall.
We can all agree upon a common mechanism for doing something but if no
one is using that mechanism to do anything significant, what purpose
pace in a certain way.
We've already gone down the road of trying to make standard paravirtual
interfaces (via virtio). No one was sufficiently interested in
collaborating. I don't see why other paravirtualizations are going to
be much different.
Regards,
Anthony Liguori
__
Jeremy Fitzhardinge wrote:
> Anthony Liguori wrote:
>> Mmm, cpuid bikeshedding :-)
>
> My shade of blue is better.
>
>>> The space 0x4000-0x40ff is reserved for hypervisor usage.
>>>
>>> This region is divided into 16 16-leaf blocks. Each blo
bikeshedding :-)
> The space 0x4000-0x40ff is reserved for hypervisor usage.
>
> This region is divided into 16 16-leaf blocks. Each block has the
> structure:
>
> 0x40x0:
> eax: max used leaf within the leaf block (max 0x40xf)
Why even bot
for Windows.
Would probably want to implement extensions to the 9p protocol to
support this too. And mapping file system semantics between Windows and
Unix is hugely complicated. In all honesty, CIFS over virtio-net is a
better solution since Samba has already done the hard work of getting
tree too as it will cause an unexpected OOM when ballooning.
Signed-off-by: Anthony Liguori <[EMAIL PROTECTED]>
diff --git a/drivers/virtio/virtio_balloon.c
b/drivers/virtio/virtio_balloon.c
index bfef604..62eab43 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_bal
ge positive number.
This handles the case where v < vb->num_pages and ensures we get a small,
negative, s64 as the result.
Rusty: please push this for 2.6.27-rc4. It's probably appropriate for the
stable tree too as it will cause an unexpected OOM when ballooning.
Signed-off-by: A
Rusty Russell wrote:
> On Thursday 26 June 2008 05:07:18 Anthony Liguori wrote:
>
>> Rusty Russell wrote:
>>
>>> @@ -1563,6 +1561,16 @@ static void setup_tun_net(char *arg)
>>> /* Tell Guest what MAC address to use. */
>>> add_feature(de