On 06/10/2010 08:33 AM, Dor Laor wrote:
On 06/09/2010 01:31 PM, Gordan Bobic wrote:
On 06/09/2010 09:56 AM, Paolo Bonzini wrote:
Or is this too crazy an idea?
It should work. Note that the malloced memory should be aligned in
order to get better sharing.
On 06/10/2010 08:44 AM, Jes Sorensen wrote:
On 06/08/10 20:43, Gordan Bobic wrote:
Is this plausible?
I'm trying to work out if it's even worth considering this approach to
enable all memory used in a system to be open to KSM page merging,
rather than only memory used by specific programs aware of it (e.g.
kvm/qemu).
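One way to try this without patching every application would be an LD_PRELOAD shim that interposes malloc and advises the backing pages. The following is purely my illustrative sketch of that idea, not code from the thread; build it with `gcc -shared -fPIC -o shim.so shim.c -ldl`.

```c
/* Hypothetical LD_PRELOAD shim (illustrative sketch only): interpose
 * malloc and mark the pages backing each block MADV_MERGEABLE, so
 * unmodified programs become KSM candidates. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

static void *(*real_malloc)(size_t);

void *malloc(size_t n)
{
    if (!real_malloc)
        real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
    void *p = real_malloc(n);
    if (p) {
        /* madvise operates on whole pages, so round down to the page
         * containing the block. Sharing is only effective when the data
         * itself lines up on page boundaries, per the alignment caveat
         * raised in the thread. */
        uintptr_t page = (uintptr_t)sysconf(_SC_PAGESIZE);
        uintptr_t base = (uintptr_t)p & ~(page - 1);
        madvise((void *)base, n + ((uintptr_t)p - base), MADV_MERGEABLE);
    }
    return p;
}
```

Run as `LD_PRELOAD=./shim.so some_program`. The caveat stands: pages mixing heap metadata with data rarely match across processes, so any win would be mostly on large, page-aligned blocks.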
On 06/09/2010 09:56 AM, Paolo Bonzini wrote:
Within glibc malloc large blocks are mmaped, so they are automatically
aligned. Effective sharing of small blocks
Something like this would address the fact that
Troels Arvin wrote:
Hello,
I'm conducting some performance tests with KVM-virtualized CentOSes. One
thing I noticed is that guest I/O performance seems to be significantly
better for virtio-based block devices (drives) if the cache=none
argument is used. (This was with a rather powerful
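For reference, the cache=none setting compared in this message corresponds to a qemu-kvm drive option along these lines (memory size, CPU count, and the image path are placeholders):

```shell
# Illustrative qemu-kvm invocation: a virtio disk with the host page
# cache bypassed via cache=none.
qemu-kvm -m 2048 -smp 2 \
    -drive file=/var/lib/libvirt/images/guest.img,if=virtio,cache=none \
    -net nic,model=virtio -net user
```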
Hi,
Is it possible to use KSM:
1) Without hardware VT support
2) For all memory in a system, without patching all applications to
register with it
TIA.
Gordan
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Chris Wright wrote:
2) For all memory in a system, without patching all applications to
register with it
No.
Right now, an app must be modified to call madvise(MADV_MERGEABLE).
Further, the core scanning loop that ksmd performs is based on per-process
virtual memory regions rather than
john cooper wrote:
The test is building the Linux kernel (only taking the second run to give the
test the benefit of local cache):
make clean; make -j8 all; make clean; sync; time make -j8 all
This takes about 10 minutes with IDE disk emulation and about 13 minutes with virtio. I
ran the
Dor Laor wrote:
On 11/14/2009 04:23 PM, Gordan Bobic wrote:
I just tried paravirtualized virtio block devices, and my tests show
that they are approximately 30% slower than emulated IDE devices. I'm
guessing this isn't normal. Is this a known issue or am I likely to have
misconfigured something
Thomas Fjellstrom wrote:
On Sun November 15 2009, Neil Aggarwal wrote:
The Core i7 has hyperthreading, so you see 8 logical CPUs.
Are you saying the AMD processors do not have hyperthreading?
Course not. Hyperthreading is dubious at best.
That's a rather questionable answer to a rather broad issue. SMT is
useful, especially on
I just tried paravirtualized virtio block devices, and my tests show
that they are approximately 30% slower than emulated IDE devices. I'm
guessing this isn't normal. Is this a known issue or am I likely to have
misconfigured something? I'm using 64-bit RHEL/CentOS 5 (both host and
guest).
On Tue, 18 Aug 2009 13:02:18 +0100, Armindo Silva deathon2l...@gmail.com
wrote:
There's a patch for qemu:
http://qemu-forum.ipi.fi/viewtopic.php?t=2984
Interesting, and along the lines of exactly what I was after (including the
opengl32.dll win32 library). But that thread is from 2+ years
On Fri, 14 Aug 2009 14:38:52 +0200, Flemming Frandsen
flemming.frand...@stibo.com wrote:
I'm having some problems getting kvm to exit when the guest OS has
halted.
Specifically I'm running CentOS 5.2 as the guest on Ubuntu 8.1.
I've noticed that 32 bit windows xp and 64 bit ubuntu 9.10 can
Is OpenGL Acceleration based on the host's OpenGL capability available
in KVM?
Thanks.
Gordan
With the recent talk of the trim SATA instruction becoming supported in
the upcoming versions of Windows and claims from Intel that support for
it in their SSDs is imminent, it occurs to me that this would be equally
useful in virtual disk emulation.
Since the disk image is a sparse file, it