02.06.2010 09:44, Neo Jia wrote:
On Wed, Mar 10, 2010 at 2:12 PM, Michael Tokarev wrote:
[]
I have used 32-bit kvm on a 64-bit kernel since day one. Nothing of interest
since then, everything just works.
I just came back to this thread because I am seeing that I can't run
VISTA 64-bit inside 64/
We only support 4 levels EPT pagetable now.
Signed-off-by: Sheng Yang
---
arch/x86/kvm/vmx.c |8 +++-
1 files changed, 7 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 99ae513..d400fbb 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@
On Wed, Mar 10, 2010 at 2:12 PM, Michael Tokarev wrote:
> Neo Jia wrote:
>> hi,
>>
>> I have to keep a 32-bit qemu user space to work with some legacy
>> library I have but still want to use 64-bit host Linux to explore
>> 64-bit advantage.
>>
>> So I am wondering if I can use a 32-bit qemu + 64-b
On 06/02/2010 08:29 AM, Chris Wright wrote:
* Avi Kivity (a...@redhat.com) wrote:
On 06/02/2010 12:26 AM, Tom Lyon wrote:
I'm not really opposed to multiple devices per domain, but let me point out how I
ended up here. First, the driver has two ways of mapping pages, one based on t
* Avi Kivity (a...@redhat.com) wrote:
> On 06/02/2010 12:26 AM, Tom Lyon wrote:
> >
> >I'm not really opposed to multiple devices per domain, but let me point out
> >how I
> >ended up here. First, the driver has two ways of mapping pages, one based
> >on the
> >iommu api and one based on the dma
On Wed, Jun 02, 2010 at 05:51:14AM +0300, Avi Kivity wrote:
> That's definitely the long term plan. I consider Gleb's patch the
> first step.
>
> Do you have any idea how we can tackle both problems?
I recall Xen posting some solution for a similar problem:
http://lkml.org/lkml/2010/1/29/45
W
On 06/02/2010 07:59 AM, Tom Lyon wrote:
This is just what I was thinking. But rather than a get/set, just use two fds.
ioctl(vfio_fd1, VFIO_SET_DOMAIN, vfio_fd2);
This may fail if there are really 2 different IOMMUs, so user code must be
prepared for failure. In addition, this is str
On Tuesday 01 June 2010 09:29:47 pm Alex Williamson wrote:
> On Tue, 2010-06-01 at 13:28 +0300, Avi Kivity wrote:
> > On 06/01/2010 12:55 PM, Michael S. Tsirkin wrote:
> > >
> > >> It can't program the iommu.
> > >> What
> > >> the patch proposes is that userspace tells vfio about the needed
> >
On Tue, 2010-06-01 at 13:28 +0300, Avi Kivity wrote:
> On 06/01/2010 12:55 PM, Michael S. Tsirkin wrote:
> >
> >> It can't program the iommu.
> >> What
> >> the patch proposes is that userspace tells vfio about the needed
> >> mappings, and vfio programs the iommu.
> >>
> > There seems to b
On 06/02/2010 12:26 AM, Tom Lyon wrote:
I'm not really opposed to multiple devices per domain, but let me point out how I
ended up here. First, the driver has two ways of mapping pages, one based on the
iommu api and one based on the dma_map_sg api. With the latter, the system
already alloca
On 06/01/2010 08:27 PM, Andi Kleen wrote:
On Tue, Jun 01, 2010 at 07:52:28PM +0300, Avi Kivity wrote:
We are running everything on NUMA (since all modern machines are now NUMA).
At what scale do the issues become observable?
On Intel platforms it's visible starting with 4 sockets.
On 06/01/2010 08:39 PM, valdis.kletni...@vt.edu wrote:
We are running everything on NUMA (since all modern machines are now
NUMA). At what scale do the issues become observable?
My 6-month-old laptop is NUMA? Comes as a surprise to me, and to the
perfectly-running NUMA=n kernel I'm runnin
Bugs item #2989366, was opened at 2010-04-19 13:47
Message generated for change (Comment added) made by sf-robot
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2989366&group_id=180599
Please note that this message will contain a full copy of the comment
On 06/01/2010 07:19 PM, Sridhar Samudrala wrote:
>> -int i;
>> +cpumask_var_t mask;
>> +int i, ret = -ENOMEM;
>> +
>> +if (!alloc_cpumask_var(&mask, GFP_KERNEL))
>> +goto out_free_mask;
>
> I think this is another bug in the error path. You should simply
> do a return i
Not that CPU hotplug currently works, but if you make the mistake of
trying it on a VM started without specifying a -cpu value, you hit
a segfault from trying to strdup(NULL) in cpu_x86_find_by_name().
Signed-off-by: Alex Williamson
---
hw/pc.c | 16
1 files changed, 8 insert
On Tuesday, June 1, 2010 at 19:52 +0300, Avi Kivity wrote:
> What I'd like to see eventually is a short-term-unfair, long-term-fair
> spinlock. Might make sense for bare metal as well. But it won't be
> easy to write.
>
This thread rings a bell here :)
Yes, ticket spinlocks are sometimes slow
On Monday 31 May 2010 10:17:35 am Alan Cox wrote:
>
> Does look like it needs a locking audit, some memory and error checks
> reviewing and some further review of the ioctl security and
> overflows/trusted values.
Yes. Thanks for the detailed look.
>
> Rather a nice way of attacking the user spac
On Tuesday 01 June 2010 03:46:51 am Michael S. Tsirkin wrote:
> On Tue, Jun 01, 2010 at 01:28:48PM +0300, Avi Kivity wrote:
> > On 06/01/2010 12:55 PM, Michael S. Tsirkin wrote:
> >>
> >>> It can't program the iommu.
> >>> What
> >>> the patch proposes is that userspace tells vfio about the neede
On Thu, May 27, 2010 at 04:44:12PM +0300, Avi Kivity wrote:
> Signed-off-by: Avi Kivity
> ---
> Documentation/kvm/mmu.txt | 23 +++
> 1 files changed, 23 insertions(+), 0 deletions(-)
>
> diff --git a/Documentation/kvm/mmu.txt b/Documentation/kvm/mmu.txt
> index 1e7ecdd..6a
On Mon, May 31, 2010 at 05:11:39PM +0800, Gui Jianfeng wrote:
> There's no need to calculate quadrant if tdp is enabled.
>
> Signed-off-by: Gui Jianfeng
> ---
> arch/x86/kvm/mmu.c |2 +-
> 1 files changed, 1 insertions(+), 1 deletions(-)
Applied, thanks.
--
To unsubscribe from this list: s
On Fri, 2010-05-28 at 13:36 +0800, Feng Yang wrote:
> Add a login_timeout parameter to make the login timeout configurable.
> Currently the default timeout value is 240s. It is often not enough;
> many cases fail for not booting up within 240s in our testing.
I like the idea of unifying all login timeouts.
> Collecting the contention/usage statistics on a per spinlock
> basis seems complex. I believe a practical approximation
> to this are adaptive mutexes where upon hitting a spin
> time threshold, punt and let the scheduler reconcile fairness.
That would probably work, except: how do you get the
On 06/01/2010 08:12 PM, Marcelo Tosatti wrote:
On Mon, May 31, 2010 at 07:54:11PM +0800, Sheng Yang wrote:
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 99ae513..8649627 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -36,6 +36,8 @@
#include
#include
#include
+
Avi Kivity wrote:
> On 06/01/2010 07:38 PM, Andi Kleen wrote:
Your new code would starve again, right?
>>> Yes, of course it may starve with an unfair spinlock. Since vcpus are not
>>> always running, there is a much smaller chance that a vcpu on a remote memory
>>> node will starve fo
On Tue, 01 Jun 2010 19:52:28 +0300, Avi Kivity said:
> On 06/01/2010 07:38 PM, Andi Kleen wrote:
> >>> Your new code would starve again, right?
> > Try it on a NUMA system with unfair memory.
> We are running everything on NUMA (since all modern machines are now
> NUMA). At what scale do the iss
On Tue, Jun 01, 2010 at 07:52:28PM +0300, Avi Kivity wrote:
> We are running everything on NUMA (since all modern machines are now NUMA).
> At what scale do the issues become observable?
On Intel platforms it's visible starting with 4 sockets.
>
>>> I understand that reason and do not propose t
On Tue, 2010-06-01 at 11:35 +0200, Tejun Heo wrote:
> Apply the cpumask and cgroup of the initializing task to the created
> vhost worker.
>
> Based on Sridhar Samudrala's patch. Li Zefan spotted a bug in error
> path (twice), fixed (twice).
>
> Signed-off-by: Tejun Heo
> Cc: Michael S. Tsirkin
On Mon, May 31, 2010 at 07:54:11PM +0800, Sheng Yang wrote:
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 99ae513..8649627 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -36,6 +36,8 @@
> #include
> #include
> #include
> +#include
> +#include
>
> #inclu
On 06/01/2010 07:38 PM, Andi Kleen wrote:
Your new code would starve again, right?
Yes, of course it may starve with an unfair spinlock. Since vcpus are not
always running, there is a much smaller chance that a vcpu on a remote memory
node will starve forever. Old kernels with unfair spinlocks ar
On 06/01/2010 06:59 PM, Guido Winkelmann wrote:
The host OS is Fedora Core 12, with qemu-kvm 0.11.0
Please try with at least qemu-kvm-0.11.1, preferably qemu-kvm-0.12.4.
Also use Linux 2.6.32.latest in the guest.
--
error compiling committee.c: too many arguments to function
On Tue, Jun 01, 2010 at 07:24:14PM +0300, Gleb Natapov wrote:
> On Tue, Jun 01, 2010 at 05:53:09PM +0200, Andi Kleen wrote:
> > Gleb Natapov writes:
> > >
> > > The patch below allows patching the ticket spinlock code to behave similarly to
> > > the old unfair spinlock when a hypervisor is detected. After pa
On Tue, Jun 01, 2010 at 05:53:09PM +0200, Andi Kleen wrote:
> Gleb Natapov writes:
> >
> > The patch below allows patching the ticket spinlock code to behave similarly to
> > the old unfair spinlock when a hypervisor is detected. After patching unlocked
>
> The question is what happens when you have a system
Hi,
When using KVM machines with virtual disks that are hooked up to the guest via
either virtio or SCSI, the virtual disk will often hang completely after a
short time of operation. When this happens, the network connectivity of the
machine will usually go down, too. (I.e. it stops responding
Gleb Natapov writes:
>
> The patch below allows patching the ticket spinlock code to behave similarly to
> the old unfair spinlock when a hypervisor is detected. After patching unlocked
The question is what happens when you have a system with unfair
memory and you run the hypervisor on that. There it could b
On Tue, Jun 01, 2010 at 09:05:38PM +0900, Takuya Yoshikawa wrote:
> (2010/06/01 19:55), Marcelo Tosatti wrote:
>
> >>>Sorry but I have to say that mmu_lock spin_lock problem was completely
> >>>out of
> >>>my mind. Although I looked through the code, it seems not easy to move the
> >>>set_bit_user
On Tue, Jun 01, 2010 at 05:47:08PM +0300, Michael S. Tsirkin wrote:
> Changes from v2: added padding between avail idx and flags,
> and changed virtio to only publish used index when callbacks
> are enabled.
Here's the updated spec patch.
Signed-off-by: Michael S. Tsirkin
--
diff --git a/virti
Signed-off-by: Michael S. Tsirkin
---
drivers/net/virtio_net.c |2 ++
drivers/virtio/virtio_ring.c | 15 +--
include/linux/virtio_ring.h | 10 ++
3 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
in
This adds an (unused) option to put available ring before control (avail
index, flags), and adds padding between index and flags. This avoids
cache line sharing between control and ring, and also makes it possible
to extend avail control without incurring extra cache misses.
Signed-off-by: Michael
On Tue, Jun 01, 2010 at 03:33:47PM +0300, Avi Kivity wrote:
> Signed-off-by: Avi Kivity
Acked-by: Glauber Costa
Changes from v2: added padding between avail idx and flags,
and changed virtio to only publish used index when callbacks
are enabled.
Here's a rewrite of the original patch with a new layout.
I haven't tested it yet so no idea how this performs, but
I think this addresses the cache bounce issue ra
On Tue, Jun 01, 2010 at 03:33:46PM +0300, Avi Kivity wrote:
> Currently all content from qemu-kvm's kvm_arch_init_vcpu().
>
> Signed-off-by: Avi Kivity
Acked-by: Glauber Costa
On Tue, Jun 01, 2010 at 03:33:45PM +0300, Avi Kivity wrote:
> Be more similar to upstream.
>
> Signed-off-by: Avi Kivity
Acked-by: Glauber Costa
On Tue, Jun 01, 2010 at 03:33:44PM +0300, Avi Kivity wrote:
> Signed-off-by: Avi Kivity
Acked-by: Glauber Costa
On Tue, Jun 01, 2010 at 03:33:43PM +0300, Avi Kivity wrote:
> Accept a CPUState parameter instead of a kvm_context_t.
>
> Signed-off-by: Avi Kivity
Acked-by: Glauber Costa
Avi Kivity wrote:
On 06/01/2010 05:06 PM, Peter Lieven wrote:
avi, i do not know whats going on. but if i supply -cpu xxx,-kvmclock
the guest
still uses kvm-clock, but it seems bug #584516 is not triggered...
thats weird...
I guess that bug was resolved in qemu-kvm.git. Likely 1a03675db1,
On Mon, May 31, 2010 at 06:22:21PM +0300, Michael S. Tsirkin wrote:
> On Sun, May 30, 2010 at 10:24:01PM +0200, Tejun Heo wrote:
> > Replace vhost_workqueue with per-vhost kthread. Other than callback
> > argument change from struct work_struct * to struct vhost_poll *,
> > there's no visible chan
Hi, all
After updating the kernel to the latest (2.6.34), my guests won't boot up.
When I connect with VNC, it shows the following message and won't
proceed.
Boot from (hd0,0) ext3 f98 (uuid)
starting up ...
these guests were installed with the 'vmbuilder' script and used to boot up without
On 06/01/2010 05:06 PM, Peter Lieven wrote:
avi, i do not know whats going on. but if i supply -cpu xxx,-kvmclock
the guest
still uses kvm-clock, but it seems bug #584516 is not triggered...
thats weird...
I guess that bug was resolved in qemu-kvm.git. Likely 1a03675db1, but
it appears to
Avi Kivity wrote:
On 06/01/2010 04:57 PM, Peter Lieven wrote:
Avi Kivity wrote:
On 06/01/2010 04:12 PM, Peter Lieven wrote:
hi,
is it possible to avoid detection of clocksource=kvm_clock in a
linux guest by
patching the qemu-kvm binary?
i would like to be able to avoid a guest detecting kvm
Signed-off-by: Asias He
---
kvm/test/config-x86-common.mak |5 +++--
kvm/test/config-x86_64.mak |3 +--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/kvm/test/config-x86-common.mak b/kvm/test/config-x86-common.mak
index 9084e2d..38dbf5a 100644
--- a/kvm/test/config-x8
On 06/01/2010 04:57 PM, Peter Lieven wrote:
Avi Kivity wrote:
On 06/01/2010 04:12 PM, Peter Lieven wrote:
hi,
is it possible to avoid detection of clocksource=kvm_clock in a
linux guest by
patching the qemu-kvm binary?
i would like to be able to avoid a guest detecting kvm-clock until
bug #
Avi Kivity wrote:
On 06/01/2010 04:12 PM, Peter Lieven wrote:
hi,
is it possible to avoid detection of clocksource=kvm_clock in a linux
guest by
patching the qemu-kvm binary?
i would like to be able to avoid a guest detecting kvm-clock until
bug #584516
is fixed without modifying all guest
On 06/01/2010 04:12 PM, Peter Lieven wrote:
hi,
is it possible to avoid detection of clocksource=kvm_clock in a linux
guest by
patching the qemu-kvm binary?
i would like to be able to avoid a guest detecting kvm-clock until bug
#584516
is fixed without modifying all guest systems and reverti
* Chris Wright (chr...@redhat.com) wrote:
> Please send in any agenda items you are interested in covering.
>
> If we have a lack of agenda items I'll cancel the week's call.
No agenda, so this week's call is cancelled.
thanks,
-chris
hi,
is it possible to avoid detection of clocksource=kvm_clock in a linux
guest by
patching the qemu-kvm binary?
i would like to be able to avoid a guest detecting kvm-clock until bug
#584516
is fixed without modifying all guest systems and reverting that later.
thanks,
peter
On 06/01/2010 01:46 PM, Michael S. Tsirkin wrote:
Since vfio would be the only driver, there would be no duplication. But
a separate object for the iommu mapping is a good thing. Perhaps we can
even share it with vhost (without actually using the mmu, since vhost is
software only).
Mai
On 06/01/2010 01:55 PM, Mohammed Gamal wrote:
On Tue, Jun 1, 2010 at 11:59 AM, Avi Kivity wrote:
On 05/31/2010 10:40 PM, Mohammed Gamal wrote:
This patch address bug report in
https://bugs.launchpad.net/qemu/+bug/530077.
Failed vmentries were handled with handle_unhandled() which pr
On 06/01/2010 02:59 PM, Steven Rostedt wrote:
One concern is performance. Traces tend to be long, and running python
code on each line will be slow.
If trace-cmd integrates a pager and a search mechanism that looks at the
binary data instead of the text, we could format only the lines that ar
Signed-off-by: Avi Kivity
---
qemu-kvm-x86.c| 104 -
target-i386/kvm.c |8 +---
2 files changed, 2 insertions(+), 110 deletions(-)
diff --git a/qemu-kvm-x86.c b/qemu-kvm-x86.c
index 853d50e..3c33e64 100644
--- a/qemu-kvm-x86.c
+++ b/qe
This patchset converts kvm_arch_vcpu_init()'s cpuid handling bits to use
upstream code.
Avi Kivity (5):
Make get_para_features() similar to upstream
Use get_para_features() from upstream
Rename kvm_arch_vcpu_init()s cenv argument to env
Use skeleton of upstream's kvm_arch_init_vcpu()
Use
Be more similar to upstream.
Signed-off-by: Avi Kivity
---
qemu-kvm-x86.c | 42 +-
1 files changed, 21 insertions(+), 21 deletions(-)
diff --git a/qemu-kvm-x86.c b/qemu-kvm-x86.c
index 3b9be6d..f5c76bc 100644
--- a/qemu-kvm-x86.c
+++ b/qemu-kvm-x86.c
@@
Signed-off-by: Avi Kivity
---
qemu-kvm-x86.c| 28
target-i386/kvm.c |4 ++--
2 files changed, 2 insertions(+), 30 deletions(-)
diff --git a/qemu-kvm-x86.c b/qemu-kvm-x86.c
index 0eb4060..3b9be6d 100644
--- a/qemu-kvm-x86.c
+++ b/qemu-kvm-x86.c
@@ -1116,34 +
Accept a CPUState parameter instead of a kvm_context_t.
Signed-off-by: Avi Kivity
---
qemu-kvm-x86.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/qemu-kvm-x86.c b/qemu-kvm-x86.c
index 95b7aa5..0eb4060 100644
--- a/qemu-kvm-x86.c
+++ b/qemu-kvm-x86.c
@@ -1132,7 +113
Currently all content from qemu-kvm's kvm_arch_init_vcpu().
Signed-off-by: Avi Kivity
---
qemu-kvm-x86.c|2 +-
target-i386/kvm.c | 17 +++--
2 files changed, 16 insertions(+), 3 deletions(-)
diff --git a/qemu-kvm-x86.c b/qemu-kvm-x86.c
index f5c76bc..853d50e 100644
--- a/q
This patch adds a file that documents the usage of KVM-specific
MSRs.
[ v2: added comments from Randy ]
[ v3: added comments from Avi ]
[ v4: added information about wallclock alignment ]
Signed-off-by: Glauber Costa
Reviewed-by: Randy Dunlap
---
Documentation/kvm/msr.txt | 153 ++
Kevin Wolf wrote:
On 01.06.2010 12:59, Peter Lieven wrote:
Hi,
I just compiled the latest git to work on Bug #585113.
Unfortunately, I can't start the VMs with the device mappings
generated by our multipath
setup.
cmdline:
/usr/bin/qemu-kvm-devel -net none -drive
file=/dev/mapper/i
(2010/06/01 19:55), Marcelo Tosatti wrote:
Sorry but I have to say that the mmu_lock spin_lock problem was completely out of
my mind. Although I looked through the code, it seems not easy to move the
set_bit_user to outside of the spinlock section without breaking the semantics of
its protection.
So th
On Tue, 2010-06-01 at 11:39 +0300, Avi Kivity wrote:
> On 05/30/2010 06:34 PM, Steven Rostedt wrote:
> >
> >> Cool. May make sense to use simpler formatting in the kernel, and use
> >> trace-cmd plugins for the complicated cases.
> >>
> >> It does raise issues with ABIs. Can trace-cmd read plugin
On 01.06.2010 12:59, Peter Lieven wrote:
> Hi,
>
> I just compiled the latest git to work on Bug #585113.
>
> Unfortunately, I can't start the VMs with the device mappings
> generated by our multipath
> setup.
>
> cmdline:
> /usr/bin/qemu-kvm-devel -net none -drive
> file=/dev/mapper/iqn.
Hi,
I just compiled the latest git to work on Bug #585113.
Unfortunately, I can't start the VMs with the device mappings
generated by our multipath
setup.
cmdline:
/usr/bin/qemu-kvm-devel -net none -drive
file=/dev/mapper/iqn.2001-05.com.equallogic:0-8a0906-88961b105-19f000e7e7d4beaa-test
Hello,
On 06/01/2010 12:17 PM, Michael S. Tsirkin wrote:
> Something that I wanted to figure out - what happens if the
> CPU mask limits us to a certain CPU that subsequently goes offline?
The thread gets unbound during the last steps of cpu offlining.
> Will e.g. flush block forever or until th
On Mon, May 24, 2010 at 04:05:29PM +0900, Takuya Yoshikawa wrote:
> (2010/05/17 18:06), Takuya Yoshikawa wrote:
> >
> >>User allocated bitmaps have the advantage of reducing pinned memory.
> >>However we have plenty more pinned memory allocated in memory slots, so
> >>by itself, user allocated bitm
On Tue, Jun 1, 2010 at 11:59 AM, Avi Kivity wrote:
> On 05/31/2010 10:40 PM, Mohammed Gamal wrote:
>>
>> This patch address bug report in
>> https://bugs.launchpad.net/qemu/+bug/530077.
>>
>> Failed vmentries were handled with handle_unhandled() which prints a
>> rather
>> unfriendly message to th
On Tue, Jun 01, 2010 at 01:28:48PM +0300, Avi Kivity wrote:
> On 06/01/2010 12:55 PM, Michael S. Tsirkin wrote:
>>
>>> It can't program the iommu.
>>> What
>>> the patch proposes is that userspace tells vfio about the needed
>>> mappings, and vfio programs the iommu.
>>>
>> There seems to b
On 06/01/2010 12:55 PM, Michael S. Tsirkin wrote:
It can't program the iommu.
What
the patch proposes is that userspace tells vfio about the needed
mappings, and vfio programs the iommu.
There seems to be some misunderstanding. The userspace interface
proposed forces a separate domain
On Tue, Jun 01, 2010 at 11:35:15AM +0200, Tejun Heo wrote:
> Apply the cpumask and cgroup of the initializing task to the created
> vhost worker.
>
> Based on Sridhar Samudrala's patch. Li Zefan spotted a bug in error
> path (twice), fixed (twice).
>
> Signed-off-by: Tejun Heo
> Cc: Michael S.
On Tue, Jun 01, 2010 at 11:10:45AM +0300, Avi Kivity wrote:
> On 05/31/2010 08:10 PM, Michael S. Tsirkin wrote:
>> On Mon, May 31, 2010 at 02:50:29PM +0300, Avi Kivity wrote:
>>
>>> On 05/30/2010 05:53 PM, Michael S. Tsirkin wrote:
>>>
So what I suggested is failing any kind of acces
Ticket spinlocks ensure fairness by introducing a FIFO of cpus waiting for
the spinlock to be released. This works great on real HW, but when running
on a hypervisor it introduces a very heavy performance hit if physical cpus
are overcommitted (up to 35% in my test). The reason for the performance
hit is tha
Apply the cpumask and cgroup of the initializing task to the created
vhost worker.
Based on Sridhar Samudrala's patch. Li Zefan spotted a bug in error
path (twice), fixed (twice).
Signed-off-by: Tejun Heo
Cc: Michael S. Tsirkin
Cc: Sridhar Samudrala
Cc: Li Zefan
---
drivers/vhost/vhost.c |
From: Sridhar Samudrala
Add a new kernel API to attach a task to current task's cgroup
in all the active hierarchies.
Signed-off-by: Sridhar Samudrala
Reviewed-by: Paul Menage
Acked-by: Li Zefan
---
include/linux/cgroup.h |1 +
kernel/cgroup.c| 23 +++
2 fil
Replace vhost_workqueue with per-vhost kthread. Other than callback
argument change from struct work_struct * to struct vhost_work *,
there's no visible change to vhost_poll_*() interface.
This conversion is to make each vhost use a dedicated kthread so that
resource control via cgroup can be app
On 01.06.2010, at 10:36, Andreas Schwab wrote:
> Paul Mackerras writes:
>
>> I re-read the relevant part of the PowerPC architecture spec
>> yesterday, and it seems pretty clear that the FPSCR doesn't affect the
>> behaviour of lfs and stfs, and is not affected by them. So in fact 4
>> out of
On 06/01/2010 12:00 PM, Sheng Yang wrote:
On Tuesday 01 June 2010 16:51:05 Avi Kivity wrote:
On 05/31/2010 02:17 PM, Sheng Yang wrote:
Only test legal actions so far; we can extend it later.
The legal actions are tested by guests, so it's more important for unit
tests to check
On Tuesday 01 June 2010 16:51:05 Avi Kivity wrote:
> On 05/31/2010 02:17 PM, Sheng Yang wrote:
> > Only test legal actions so far; we can extend it later.
>
> The legal actions are tested by guests, so it's more important for unit
> tests to check illegal (and potentially subversive) actions.
Yes.
On 05/31/2010 10:40 PM, Mohammed Gamal wrote:
This patch address bug report in https://bugs.launchpad.net/qemu/+bug/530077.
Failed vmentries were handled with handle_unhandled() which prints a rather
unfriendly message to the user. This patch separates handling vmentry failures
from unknown exit
On 05/31/2010 02:17 PM, Sheng Yang wrote:
Only test legal actions so far; we can extend it later.
The legal actions are tested by guests, so it's more important for unit
tests to check illegal (and potentially subversive) actions.
+
+void test_xsave()
+{
+ unsigned int cr4;
+
Hi Christian,
Am 31.05.2010 21:31, schrieb Christian Brunner:
> Hi Kevin,
>
> here is an updated patch for the ceph/rbd driver. I hope that everything
> is fine now.
I'll try to get to give it a final review later this week. In the
meantime, I would be happy to see another review by someone els
On 05/30/2010 06:34 PM, Steven Rostedt wrote:
Cool. May make sense to use simpler formatting in the kernel, and use
trace-cmd plugins for the complicated cases.
It does raise issues with ABIs. Can trace-cmd read plugins from
/lib/modules/*? We can then distribute the plugins with the kernel
Paul Mackerras writes:
> I re-read the relevant part of the PowerPC architecture spec
> yesterday, and it seems pretty clear that the FPSCR doesn't affect the
> behaviour of lfs and stfs, and is not affected by them. So in fact 4
> out of the 7 instructions in each of those procedures are unnece
On 05/31/2010 02:54 PM, Sheng Yang wrote:
From: Dexuan Cui
This patch enable guest to use XSAVE/XRSTOR instructions.
We assume that host_xcr0 would use all possible bits that OS supported.
And we loaded xcr0 in the same way we handled fpu - do it as late as we can.
Reviewed-by: Avi Kivi
On 05/31/2010 08:10 PM, Michael S. Tsirkin wrote:
On Mon, May 31, 2010 at 02:50:29PM +0300, Avi Kivity wrote:
On 05/30/2010 05:53 PM, Michael S. Tsirkin wrote:
So what I suggested is failing any kind of access until iommu
is assigned.
So, the kernel driver must be aware of t
On 06/01/2010 05:38 AM, Xiao Guangrong wrote:
Avi Kivity wrote:
On 05/31/2010 05:00 AM, Xiao Guangrong wrote:
+
+#define for_each_gfn_indirect_sp(kvm, sp, gfn, pos, n)\
+ hlist_for_each_entry_safe(sp, pos, n,\
+&kvm->arch.mmu_page_
On 06/01/2010 05:29 AM, Xiao Guangrong wrote:
How about passing the list as a parameter to prepare() and commit()? If
the lifetime of the list is just prepare/commit, it shouldn't be a global.
Does below example code show your meaning correctly?
+ struct list_head free_list = LI