On 10/01/2012 03:21 PM, Anthony Liguori wrote:
Christoph Hellwig h...@infradead.org writes:
Does anyone know when the kvm forum schedule for this year will be
published?
It should be published soon.
Here is an early bird look at the schedule. It might contain tiny typos
but we rather
On 09/27/2012 11:49 AM, Raghavendra K T wrote:
On 09/25/2012 08:30 PM, Dor Laor wrote:
On 09/24/2012 02:02 PM, Raghavendra K T wrote:
On 09/24/2012 02:12 PM, Dor Laor wrote:
In order to help PLE and pvticketlock converge I thought that a small
test code should be developed to test
On 09/24/2012 02:02 PM, Raghavendra K T wrote:
On 09/24/2012 02:12 PM, Dor Laor wrote:
In order to help PLE and pvticketlock converge I thought that a small
test code should be developed to test this in a predictable,
deterministic way.
The idea is to have a guest kernel module that spawns
In order to help PLE and pvticketlock converge I thought that a small
test code should be developed to test this in a predictable,
deterministic way.
The idea is to have a guest kernel module that spawns a new thread each
time you write to a /sys/ entry.
Each such thread spins over a
On 09/23/2012 04:06 AM, Marcelo Tosatti wrote:
On Fri, Sep 21, 2012 at 11:30:31PM +0300, Dor Laor wrote:
On 09/21/2012 05:51 AM, Marcelo Tosatti wrote:
On Fri, Sep 21, 2012 at 12:02:46AM +0300, Dor Laor wrote:
On 09/12/2012 06:39 PM, Marcelo Tosatti wrote:
HW TSC scaling is a feature
On 09/21/2012 05:51 AM, Marcelo Tosatti wrote:
On Fri, Sep 21, 2012 at 12:02:46AM +0300, Dor Laor wrote:
On 09/12/2012 06:39 PM, Marcelo Tosatti wrote:
HW TSC scaling is a feature of AMD processors that allows a
multiplier to be specified to the TSC frequency exposed to the guest.
KVM also
On 09/12/2012 06:39 PM, Marcelo Tosatti wrote:
HW TSC scaling is a feature of AMD processors that allows a
multiplier to be specified to the TSC frequency exposed to the guest.
KVM also contains provision to trap TSC (KVM: Infrastructure for
software and hardware based TSC rate scaling
On 07/24/2012 07:55 AM, Rusty Russell wrote:
On Mon, 23 Jul 2012 22:32:39 +0200, Sasha Levin levinsasha...@gmail.com wrote:
As it was discussed recently, there's currently no way for the guest to notify
the host about panics. Furthermore, there's no reasonable way to notify the
host of other
On 07/24/2012 03:30 PM, Sasha Levin wrote:
On 07/24/2012 10:26 AM, Dor Laor wrote:
On 07/24/2012 07:55 AM, Rusty Russell wrote:
On Mon, 23 Jul 2012 22:32:39 +0200, Sasha Levin levinsasha...@gmail.com wrote:
As it was discussed recently, there's currently no way for the guest to notify
On 06/19/2012 06:42 PM, Chegu Vinod wrote:
Hello,
Wanted to share some preliminary data from live migration experiments on a setup
that is perhaps one of the larger ones.
We used Juan's huge_memory patches (without the separate migration thread) and
measured the total migration time and the
On 06/19/2012 08:22 PM, Michael Roth wrote:
On Tue, Jun 19, 2012 at 11:34:42PM +0900, Takuya Yoshikawa wrote:
On Tue, 19 Jun 2012 09:01:36 -0500
Anthony Liguori anth...@codemonkey.ws wrote:
I'm not at all convinced that postcopy is a good idea. There needs to be
a clear expression of what the
On 07/03/2012 05:22 PM, Ronen Hod wrote:
On 06/18/2012 02:14 PM, Dor Laor wrote:
On 06/18/2012 01:05 PM, Rusty Russell wrote:
On Mon, 18 Jun 2012 16:03:23 +0800, Asias He as...@redhat.com wrote:
On 06/18/2012 03:46 PM, Rusty Russell wrote:
On Mon, 18 Jun 2012 14:53:10 +0800, Asias He as
On 06/20/2012 07:46 AM, Asias He wrote:
On 06/19/2012 02:21 PM, Dor Laor wrote:
On 06/19/2012 05:51 AM, Asias He wrote:
On 06/18/2012 07:39 PM, Sasha Levin wrote:
On Mon, 2012-06-18 at 14:14 +0300, Dor Laor wrote:
On 06/18/2012 01:05 PM, Rusty Russell wrote:
On Mon, 18 Jun 2012 16:03:23
On 06/19/2012 05:51 AM, Asias He wrote:
On 06/18/2012 07:39 PM, Sasha Levin wrote:
On Mon, 2012-06-18 at 14:14 +0300, Dor Laor wrote:
On 06/18/2012 01:05 PM, Rusty Russell wrote:
On Mon, 18 Jun 2012 16:03:23 +0800, Asias He as...@redhat.com wrote:
On 06/18/2012 03:46 PM, Rusty Russell wrote
On 06/18/2012 01:05 PM, Rusty Russell wrote:
On Mon, 18 Jun 2012 16:03:23 +0800, Asias He as...@redhat.com wrote:
On 06/18/2012 03:46 PM, Rusty Russell wrote:
On Mon, 18 Jun 2012 14:53:10 +0800, Asias He as...@redhat.com wrote:
This patch introduces bio-based IO path for virtio-blk.
Why
On 06/04/2012 04:38 PM, Isaku Yamahata wrote:
On Mon, Jun 04, 2012 at 08:37:04PM +0800, Anthony Liguori wrote:
On 06/04/2012 05:57 PM, Isaku Yamahata wrote:
After a long time, we have v2. This is the qemu part.
The Linux kernel part is sent separately.
Changes v1 - v2:
- split up patches for
On 05/16/2012 02:50 AM, Greg KH wrote:
On Tue, May 15, 2012 at 08:06:57AM -0700, Andrew Stiegmann (stieg) wrote:
In an effort to improve the out-of-the-box experience with Linux
kernels for VMware users, VMware is working on readying the Virtual
Machine Communication Interface (vmw_vmci) and
FYI (mea culpa for semi-spamming): In early June LinuxCon Japan will
host a virtualization track w/ some known faces and a parallel oVirt
developer workshop in Japan.
More details:
https://events.linuxfoundation.org/events/linuxcon-japan/schedule
Regards,
Dor
Original Message
On 04/04/2012 01:37 PM, Michael Roth wrote:
On Apr 4, 2012 2:42 AM, Paolo Bonzini pbonz...@redhat.com
mailto:pbonz...@redhat.com wrote:
On 04/04/2012 03:18, Michael Roth wrote:
Attacking the IDL/schema side first is the more rational approach.
From
there we can potentially
On 04/04/2012 02:52 PM, Anthony Liguori wrote:
On 04/04/2012 05:53 AM, Dor Laor wrote:
On 04/04/2012 01:37 PM, Michael Roth wrote:
On Apr 4, 2012 2:42 AM, Paolo Bonzini pbonz...@redhat.com
mailto:pbonz...@redhat.com wrote:
On 04/04/2012 03:18, Michael Roth wrote:
Attacking the IDL
On 04/03/2012 05:43 PM, Markus Armbruster wrote:
I'm afraid my notes are rather rough...
* 1.1
soft freeze apr 15th (less than two weeks)
hard freeze may 1
three months cycle for 1.2
stable machine types only every few releases? pc-next
* Maintainers, got distracted and my notes
On 02/13/2012 02:40 PM, Nicholas A. Bellinger wrote:
Hi Dor, James Co,
On Mon, 2012-02-13 at 09:57 +0200, Dor Laor wrote:
On 02/13/2012 09:05 AM, Christian Borntraeger wrote:
On 12/02/12 21:16, James Bottomley wrote:
Well, no-one's yet answered the question I had about why.
Just to give
On 02/13/2012 09:05 AM, Christian Borntraeger wrote:
On 12/02/12 21:16, James Bottomley wrote:
Well, no-one's yet answered the question I had about why.
Just to give one example from a different angle:
In the big datacenters tape libraries are still very important, and lots
of them have a
On 01/03/2012 10:33 AM, Stefan Hajnoczi wrote:
On Mon, Jan 02, 2012 at 01:09:40PM +0100, Juan Quintela wrote:
Please send in any agenda items you are interested in covering.
Status of virtio drivers for Windows:
* Unsupported in community today
Why?
* Bugs languish on bug
On 01/03/2012 05:48 PM, Anthony Liguori wrote:
On 01/01/2012 04:16 AM, Dor Laor wrote:
On 12/29/2011 06:16 PM, Anthony Liguori wrote:
On 12/29/2011 10:07 AM, Dor Laor wrote:
On 12/26/2011 11:05 AM, Avi Kivity wrote:
On 12/26/2011 05:14 AM, Nikunj A Dadhania wrote:
btw you can get
On 01/04/2012 12:45 AM, Anthony Liguori wrote:
When using 'guests-pick', we initially present the most compatible
network model (rtl8139, for instance). We would provide a paravirtual
channel (guest-agent?) that could be used to enumerate which models were
available and let the guest decide which
On 01/01/2012 04:01 PM, Ronen Hod wrote:
On 01/01/2012 12:16 PM, Dor Laor wrote:
On 12/29/2011 06:16 PM, Anthony Liguori wrote:
On 12/29/2011 10:07 AM, Dor Laor wrote:
On 12/26/2011 11:05 AM, Avi Kivity wrote:
On 12/26/2011 05:14 AM, Nikunj A Dadhania wrote:
btw you can get an additional
On 12/30/2011 12:39 AM, Anthony Liguori wrote:
On 12/28/2011 07:25 PM, Isaku Yamahata wrote:
Intro
=
This patch series implements postcopy live migration.[1]
As discussed at KVM forum 2011, a dedicated character device is used for
distributed shared memory between migration source and
On 12/29/2011 06:16 PM, Anthony Liguori wrote:
On 12/29/2011 10:07 AM, Dor Laor wrote:
On 12/26/2011 11:05 AM, Avi Kivity wrote:
On 12/26/2011 05:14 AM, Nikunj A Dadhania wrote:
btw you can get an additional speedup by enabling x2apic, for
default_send_IPI_mask_logical().
In the host
On 01/01/2012 06:45 PM, Stefan Hajnoczi wrote:
On Thu, Dec 22, 2011 at 11:41 PM, Minchan Kim minc...@kernel.org wrote:
On Thu, Dec 22, 2011 at 12:57:40PM +, Stefan Hajnoczi wrote:
On Wed, Dec 21, 2011 at 1:00 AM, Minchan Kim minc...@kernel.org wrote:
If you're stumped by the performance
On 12/26/2011 11:05 AM, Avi Kivity wrote:
On 12/26/2011 05:14 AM, Nikunj A Dadhania wrote:
btw you can get an additional speedup by enabling x2apic, for
default_send_IPI_mask_logical().
In the host?
In the host, for the guest:
qemu -cpu ...,+x2apic
It seems to me that we should
On 12/19/2011 07:59 PM, Amit Shah wrote:
On (Mon) 19 Dec 2011 [14:59:36], Avi Kivity wrote:
On 12/19/2011 02:52 PM, Amit Shah wrote:
(snip)
S4 needs some treatment, though, as resume after s4 doesn't work with
kvmclock enabled. I didn't realise this series was only handling the
soft lockup
On 12/07/2011 04:41 PM, Avi Kivity wrote:
On 12/05/2011 10:18 PM, Eric B Munson wrote:
Changes from V4:
Rename KVM_GUEST_PAUSED to KVMCLOCK_GUEST_PAUSED
Add description of KVMCLOCK_GUEST_PAUSED ioctl to api.txt
Changes from V3:
Include CC's on patch 3
Drop clear flag ioctl and have the
On 11/07/2011 03:45 PM, Juan Quintela wrote:
Hi
Please send in any agenda items you are interested in covering.
Null agenda - null call, hope to get some next week.
Thanks, Juan.
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to
http://www.ovirt.org/ is a complete set of open source management code,
specific for qemu/kvm. It is based on Red Hat's RHEV-M code, which is our
enterprise-ready solution for VMs (now Java-based, no Windows, thanks
for asking). Beyond plenty of Red Hat developers, the project got the
support
On 10/25/2011 03:05 PM, Anthony Liguori wrote:
On 10/25/2011 07:35 AM, Kevin Wolf wrote:
On 24.10.2011 13:35, Paolo Bonzini wrote:
On 10/24/2011 01:04 PM, Juan Quintela wrote:
Hi
Please send in any agenda items you are interested in covering.
- What's left to merge for 1.0.
I would
On 10/02/2011 09:24 AM, Mulyadi Santosa wrote:
Hi. :)
On Sat, Oct 1, 2011 at 19:16, Onkar N Mahajan kern...@gmail.com wrote:
Compiled 3.1.0-rc3+ from source (see attached config file) and updated the
host(fc14) kernel ;
So host is now running 3.1.0-rc3+
Now I also want to try to boot
On 09/15/2011 12:14 PM, Jun Koi wrote:
On Thu, Sep 15, 2011 at 5:05 PM, Sasha Levin levinsasha...@gmail.com wrote:
On Thu, 2011-09-15 at 17:04 +0800, Jun Koi wrote:
Hi,
I run KVM with the -cpu core2duo option, but /proc/cpuinfo only shows
SSE and SSE2.
My host is Core i7, so I suppose that I
On 08/08/2011 06:24 AM, Isaku Yamahata wrote:
This mail is on Yabusame: Postcopy Live Migration for Qemu/KVM
on which we'll give a talk at KVM-forum.
The purpose of this mail is to let developers know it in advance
so that we can get better feedback on its design/implementation approach
On 08/08/2011 01:59 PM, Nadav Har'El wrote:
* What is postcopy live migration?
It is yet another live migration mechanism for Qemu/KVM, which
implements the migration technique known as postcopy or lazy
migration. Just after the migrate command is invoked, the execution
host of a VM is
On 08/08/2011 03:32 PM, Anthony Liguori wrote:
On 08/08/2011 04:20 AM, Dor Laor wrote:
On 08/08/2011 06:24 AM, Isaku Yamahata wrote:
This mail is on Yabusame: Postcopy Live Migration for Qemu/KVM
on which we'll give a talk at KVM-forum.
The purpose of this mail is to let developers know
On 08/08/2011 06:59 PM, Anthony Liguori wrote:
On 08/08/2011 10:36 AM, Avi Kivity wrote:
On 08/08/2011 06:29 PM, Anthony Liguori wrote:
- Efficient, reduces needed traffic: no need to re-send pages.
It's not quite that simple. Post-copy needs to introduce a protocol
capable of requesting
On 08/03/2011 05:24 PM, Eric B Munson wrote:
This set is just a rough first pass at avoiding soft lockup warnings when a host
pauses the execution of a guest. A flag is set by the host in the shared page
used for the pvclock when the host goes to stop the guest. When the guest
resumes and
On 07/15/2011 04:12 PM, Lucas Meneghel Rodrigues wrote:
Hi guys, due to some personal issues this week this tip came later than
I wanted, but nevertheless, here it is:
http://autotest.kernel.org/wiki/KVMAutotest/RunQemuUnittests
Ever wanted to make kvm autotest execute the qemu-kvm unittest
I tried to re-arrange all of the requirements and use cases using this
wiki page: http://wiki.qemu.org/Features/LiveBlockMigration
It would be best to agree on the most interesting use cases (while
making sure we cover future ones) and commit to them.
The next step is to set the
On 07/05/2011 03:58 PM, Marcelo Tosatti wrote:
On Tue, Jul 05, 2011 at 01:40:08PM +0100, Stefan Hajnoczi wrote:
On Tue, Jul 5, 2011 at 9:01 AM, Dor Laor dl...@redhat.com wrote:
I tried to re-arrange all of the requirements and use cases using this wiki
page:
On 07/05/2011 05:32 PM, Marcelo Tosatti wrote:
On Tue, Jul 05, 2011 at 04:39:06PM +0300, Dor Laor wrote:
On 07/05/2011 03:58 PM, Marcelo Tosatti wrote:
On Tue, Jul 05, 2011 at 01:40:08PM +0100, Stefan Hajnoczi wrote:
On Tue, Jul 5, 2011 at 9:01 AM, Dor Laor dl...@redhat.com wrote:
I tried
On 06/23/2011 09:31 AM, Chaitra Gorantla wrote:
Thanks for the information.
Is there any way to calculate the delta between host and the guest?
Use ntpdate.
The NTP server can either be on the host or anywhere else.
On 06/22/2011 06:17 AM, Chaitra Gorantla wrote:
Hi all,
We are
On 06/22/2011 06:17 AM, Chaitra Gorantla wrote:
Hi all,
We are working on Fedora 15 Host. And the KVM is used to create Fedora 14 guest.
The clock-source details are as below.
Our Host supports constant_tsc.
on HOST OS
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
tsc
Not sure whether it will help, but IIRC the original e1000 driver for
Windows had some bugs that were fixed if you download the most recent
driver from Intel's site. This was the case for the fully emulated e1000
qemu device and might help here too.
On 06/19/2011 03:29 PM, Flypen CloudMe wrote:
Hi,
On 05/16/2011 11:23 AM, Jagane Sundar wrote:
Hello Dor,
Let me see if I understand live snapshot correctly:
If I want to configure a VM for daily backup, then I would do
the following:
- Create a snapshot s1. s0 is marked read-only.
- Do a full backup of s0 on day 0.
- On day 1, I would create
On 05/16/2011 12:38 AM, Jagane Sundar wrote:
Hello Dor,
One important advantage of live snapshot over live backup is support of
multiple (consecutive) live snapshots while there can be only a single
live backup at one time.
This is why I tend to think that although live backup carries some
The abstract submission deadline was originally set for today.
The forum committee agreed to extend the deadline period until next
Sunday, May 22.
The notification date remains the same (May 31).
Thanks,
your KVM Forum 2011 Program Committee
On 04/21/2011 08:21 PM,
On 05/15/2011 10:18 PM, Fred van Zwieten wrote:
Hello Jagane,
My original idea was this: Have a data disk implemented as an LV. Use
drbd or dm-replicator to asynchronously replicate the vg to a remote
location. When I make a snapshot of the LV, the snapshot will be
It can work but it might be
On 02/17/2011 12:09 PM, Vadim Rozenfeld wrote:
On Thu, 2011-02-17 at 11:11 +0200, Avi Kivity wrote:
On 02/16/2011 09:54 PM, --[ UxBoD ]-- wrote:
Hello all,
I believe I am hitting a problem on one of our Windows 2003 KVM guests where I
believe it is running out of entropy and causing SSL
On 12/13/2010 09:42 PM, Manfred Heubach wrote:
Gleb Natapov gleb at redhat.com writes:
On Wed, Jul 28, 2010 at 12:53:02AM +0300, Harri Olin wrote:
Gleb Natapov wrote:
On Wed, Jul 21, 2010 at 09:25:31AM +0300, Harri Olin wrote:
Gleb Natapov wrote:
On Mon, Jul 19, 2010 at 10:17:02AM
On 11/29/2010 06:23 PM, Stefan Hajnoczi wrote:
On Mon, Nov 29, 2010 at 3:00 PM, Yoshiaki Tamura
tamura.yoshi...@lab.ntt.co.jp wrote:
2010/11/29 Paul Brook p...@codesourcery.com:
If devices incorrectly claim support for live migration, then that should
also be fixed, either by removing the
On 11/23/2010 08:41 AM, Avi Kivity wrote:
On 11/23/2010 01:00 AM, Anthony Liguori wrote:
qemu-kvm vcpu threads don't respond to SIGSTOP/SIGCONT. Instead of
teaching
them to respond to these signals, introduce monitor commands that stop
and start
individual vcpus.
The purpose of these commands
FYI. Long ago we discussed key value approach on top of virtio-serial.
Original Message
Subject: [PATCH]: An implementation of HyperV KVP functionality
Date: Thu, 11 Nov 2010 13:03:10 -0700
From: Ky Srinivasan ksriniva...@novell.com
To: de...@driverdev.osuosl.org,
On 10/21/2010 12:22 PM, Andrew Beekhof wrote:
On 10/20/2010 02:16 PM, Dor Laor wrote:
On 10/20/2010 10:21 AM, Alexander Graf wrote:
On 19.10.2010, at 17:14, Chris Wright wrote:
Guest Agent
- have one coming RSN (poke Anthony for details)
Would there be a chance to have a single agent
On 10/21/2010 03:02 PM, Anthony Liguori wrote:
On 10/21/2010 02:45 AM, Paolo Bonzini wrote:
On 10/21/2010 03:14 AM, Alexander Graf wrote:
I agree that some agent code for basic stuff like live snapshot
sync with the filesystem is small enough and worth hosting within
qemu. Maybe we do need
On 10/20/2010 10:21 AM, Alexander Graf wrote:
On 19.10.2010, at 17:14, Chris Wright wrote:
0.13.X -stable
- Anthony will send note to qemu-devel on this
- move 0.13.X -stable to a separate tree
- driven independently of main qemu tree
- challenge is always in the porting and testing of
On 10/20/2010 03:21 PM, Anthony Liguori wrote:
On 10/20/2010 08:19 AM, Daniel P. Berrange wrote:
The thinking with Matahari is that there is significant overlap between
agent requirements for a physical and virtual host, so it aims to provide
an agent that works everywhere, whether virtualized
On 10/19/2010 04:11 AM, Chris Wright wrote:
* Juan Quintela (quint...@redhat.com) wrote:
Please send in any agenda items you are interested in covering.
- 0.13.X -stable handoff
- 0.14 planning
- threadlet work
- virtfs proposals
- Live snapshots
- We were asked to add this feature for
On 10/19/2010 02:55 PM, Avi Kivity wrote:
On 10/19/2010 02:48 PM, Dor Laor wrote:
On 10/19/2010 04:11 AM, Chris Wright wrote:
* Juan Quintela (quint...@redhat.com) wrote:
Please send in any agenda items you are interested in covering.
- 0.13.X -stable handoff
- 0.14 planning
- threadlet
On 08/16/2010 10:00 PM, Alex Rixhardson wrote:
Hi guys,
I have the following configuration:
1. host is RHEL 5.5, 64bit with KVM (version that comes out of the box
with RHEL 5.5)
2. two guests:
2a: RHEL 5.5, 32bit,
2b: RHEL 4.5, 64bit
If I run iperf between host RHEL 5.5 and guest RHEL 5.5
On 08/17/2010 12:22 AM, Alex Rixhardson wrote:
Thanks for the suggestion.
I tried with the netperf. I ran netserver on host and netperf on RHEL
5.5 and RHEL 4.5 guests. These are the results of 60-second-long
tests:
RHEL 4.5 guest:
Throughput (10^6bits/sec) = 145.80
At least it bought you
On 08/17/2010 12:51 AM, Alex Rixhardson wrote:
I tried with 'notsc divider=10' (since it's 64 bit guest), but the
results are the still same :-(. The guest is idle at the time of
testing. It has 2 CPU and 1024 MB RAM available.
Hmm, are you using e1000 or virtio for the 4.5 guest?
e1000 should
On 08/03/2010 02:36 AM, Anthony Liguori wrote:
On 08/02/2010 05:42 PM, Andre Przywara wrote:
Anthony Liguori wrote:
On 08/02/2010 08:49 AM, Ulrich Drepper wrote:
glibc uses the cache size information returned by cpuid to perform
optimizations. For instance, copy operations which would pollute
On 08/02/2010 11:50 PM, Stefan Hajnoczi wrote:
On Mon, Aug 2, 2010 at 6:46 PM, Anthony Liguori anth...@codemonkey.ws wrote:
On 08/02/2010 12:15 PM, John Leach wrote:
Hi,
I've come across a problem with read and write disk IO performance when
using O_DIRECT from within a kvm guest. With
On 07/01/2010 07:05 PM, Lucas Meneghel Rodrigues wrote:
On Thu, 2010-07-01 at 17:42 +0300, Avi Kivity wrote:
On 06/30/2010 06:39 PM, Lucas Meneghel Rodrigues wrote:
By default, HPET is enabled on qemu and no time drift
mitigation is being made for it. So, add -no-hpet
if qemu supports it,
On 06/09/2010 01:31 PM, Gordan Bobic wrote:
On 06/09/2010 09:56 AM, Paolo Bonzini wrote:
Or is this too crazy an idea?
It should work. Note that the malloced memory should be aligned in
order to get better sharing.
Within glibc malloc large blocks are mmaped, so they are automatically
On 06/08/2010 09:43 PM, Gordan Bobic wrote:
Is this plausible?
I'm trying to work out if it's even worth considering this approach to
enable all memory used in a system to be open to KSM page merging,
rather than only memory used by specific programs aware of it (e.g.
kvm/qemu).
Something
On 05/27/2010 12:17 PM, Tomasz Chmielewski wrote:
Just installed Fedora 13 as a guest on KVM. However, there is no
cross-platform copy-and-paste feature. I trust I have set up this
feature on another guest sometime before. Unfortunately I can't find the
relevant document. Could you please shed some
On 05/05/2010 11:58 PM, Michael S. Tsirkin wrote:
Generally, the Host end of the virtio ring doesn't need to see where
Guest is up to in consuming the ring. However, to completely understand
what's going on from the outside, this information must be exposed.
For example, host can reduce the
On 04/27/2010 11:14 AM, Avi Kivity wrote:
On 04/27/2010 01:36 AM, Anthony Liguori wrote:
A few comments:
1) The problem was not block watermark itself but generating a
notification on the watermark threshold. It's a heuristic and should
be implemented based on polling block stats.
Polling
On 04/27/2010 11:56 AM, Avi Kivity wrote:
On 04/27/2010 11:48 AM, Dor Laor wrote:
Here's another option: an nbd-like protocol that remotes all BlockDriver
operations except read and write over a unix domain socket. The open
operation returns an fd (SCM_RIGHTS strikes again) that is used
On 04/27/2010 12:22 PM, Avi Kivity wrote:
On 04/27/2010 12:08 PM, Dor Laor wrote:
On 04/27/2010 11:56 AM, Avi Kivity wrote:
On 04/27/2010 11:48 AM, Dor Laor wrote:
Here's another option: an nbd-like protocol that remotes all
BlockDriver
operations except read and write over a unix domain
On 04/23/2010 10:36 AM, Fernando Luis Vázquez Cao wrote:
On 04/23/2010 02:17 PM, Yoshiaki Tamura wrote:
Dor Laor wrote:
[...]
Second, even if it wasn't the case, the tsc delta and kvmclock are
synchronized as part of the VM state so there is no use of trapping it
in the middle.
I should
On 04/21/2010 08:57 AM, Yoshiaki Tamura wrote:
Hi all,
We have been implementing the prototype of Kemari for KVM, and we're sending
this message to share what we have now and our TODO list. We would like
to get early feedback to keep us in the right direction. Although advanced
On 04/22/2010 01:35 PM, Yoshiaki Tamura wrote:
Dor Laor wrote:
On 04/21/2010 08:57 AM, Yoshiaki Tamura wrote:
Hi all,
We have been implementing the prototype of Kemari for KVM, and we're
sending
this message to share what we have now and our TODO list. We would like
to get early
On 04/22/2010 04:16 PM, Yoshiaki Tamura wrote:
2010/4/22 Dor Laor dl...@redhat.com:
On 04/22/2010 01:35 PM, Yoshiaki Tamura wrote:
Dor Laor wrote:
On 04/21/2010 08:57 AM, Yoshiaki Tamura wrote:
Hi all,
We have been implementing the prototype of Kemari for KVM, and we're
sending
On 04/19/2010 12:29 PM, Gleb Natapov wrote:
On Mon, Apr 19, 2010 at 11:21:47AM +0200, Espen Berg wrote:
On 18.04.2010 11:56, Gleb Natapov wrote:
That's two different things here:
The issue that Espen is reporting is that the hosts have different
frequency and guests that rely on the tsc as
On 04/18/2010 02:21 AM, Espen Berg wrote:
On 17.04.2010 22:17, Michael Tokarev wrote:
We have three KVM hosts that support live migration between them, but
one of our problems is time drift. The three frontends have different
CPU frequencies and the KVM guests adopt the frequency from the
On 03/21/2010 01:29 PM, Thomas Løcke wrote:
Hey,
What is considered best practice when running a KVM host with a
mixture of Linux and Windows guests?
Currently I have ntpd running on the host, and I start my guests using
-rtc base=localtime,clock=host, with an extra -tdf added for
Windows
On 03/17/2010 07:37 AM, kazushi takahashi wrote:
Hi all
Does anybody know exact important date, such as paper deadline
for KVM Forum 2010?
It's not yet official and Chris Wright will publish the dates but last
we talked it was about asking for pretty simple abstracts (a paragraph
or two,
On 03/14/2010 09:10 AM, Gleb Natapov wrote:
On Sun, Mar 14, 2010 at 09:05:50AM +0200, Avi Kivity wrote:
On 03/11/2010 09:08 PM, Marcelo Tosatti wrote:
I have kept --no-hpet in my setup for
months...
Any details about the problems? HPET is important to some guests.
As Gleb mentioned in
On 03/14/2010 12:27 PM, Avi Kivity wrote:
On 03/14/2010 12:23 PM, Dor Laor wrote:
On 03/14/2010 09:10 AM, Gleb Natapov wrote:
On Sun, Mar 14, 2010 at 09:05:50AM +0200, Avi Kivity wrote:
On 03/11/2010 09:08 PM, Marcelo Tosatti wrote:
I have kept --no-hpet in my setup for
months...
Any
On 02/17/2010 12:51 PM, carlopmart wrote:
Hi all,
I need to install several windows KVM (rhel5.4 host fully updated)
guests for iSCSI boot. iSCSI servers are Solaris/OpenSolaris storage
servers and I need to boot windows guests (2008R2 and Win7) using gpxe.
Can I use the virtio net driver during
On 02/14/2010 07:07 PM, Michael Goldish wrote:
- Lucas Meneghel Rodriguesl...@redhat.com wrote:
As our configuration system generates a list of dicts
with test parameters, and that list might be potentially
*very* large, keeping all this information in memory might
be a problem for
On 01/21/2010 05:05 PM, Anthony Liguori wrote:
On 01/20/2010 07:18 PM, john cooper wrote:
Chris Wright wrote:
* Daniel P. Berrange (berra...@redhat.com) wrote:
To be honest all possible naming schemes for '-cpuname' are just as
unfriendly as each other. The only user friendly option is '-cpu
On 01/25/2010 04:21 PM, Anthony Liguori wrote:
On 01/25/2010 03:08 AM, Dor Laor wrote:
qemu-config.[ch], taking a new command line that parses the argument via
QemuOpts, then passing the parsed options to a target-specific function
that then builds the table of supported cpus.
It should just
On 01/06/2010 05:16 PM, Anthony Liguori wrote:
On 01/06/2010 08:48 AM, Dor Laor wrote:
On 01/06/2010 04:32 PM, Avi Kivity wrote:
On 01/06/2010 04:22 PM, Michael S. Tsirkin wrote:
We can probably default -enable-kvm to -cpu host, as long as we
explain
very carefully that if users wish
On 01/07/2010 10:18 AM, Avi Kivity wrote:
On 01/07/2010 10:03 AM, Dor Laor wrote:
We can debate about the exact name/model to represent the Nehalem
family, I don't have an issue with that and actually Intel and AMD
should define it.
AMD and Intel already defined their names (in cat /proc
On 01/07/2010 11:24 AM, Avi Kivity wrote:
On 01/07/2010 11:11 AM, Dor Laor wrote:
On 01/07/2010 10:18 AM, Avi Kivity wrote:
On 01/07/2010 10:03 AM, Dor Laor wrote:
We can debate about the exact name/model to represent the Nehalem
family, I don't have an issue with that and actually Intel
On 01/07/2010 01:39 PM, Anthony Liguori wrote:
On 01/07/2010 03:40 AM, Dor Laor wrote:
There's no simple solution except to restrict features to what was
available on the first processors.
What's not simple about the above 4 options?
What's a better alternative (that ensures users understand
On 01/07/2010 02:00 PM, Avi Kivity wrote:
On 01/07/2010 01:44 PM, Dor Laor wrote:
So if you had a 2.6.18 kernel and a 2.6.33 kernel, it may be necessary
to say:
(2.6.33) qemu -cpu Nehalem,-syscall
(2.6.18) qemu -cpu Nehalem
Or let qemu do it automatically for you.
qemu on 2.6.33 doesn't
On 01/07/2010 03:14 PM, Anthony Liguori wrote:
On 01/07/2010 06:40 AM, Avi Kivity wrote:
On 01/07/2010 02:33 PM, Anthony Liguori wrote:
There's another option.
Make cpuid information part of live migration protocol, and then
support something like -cpu Xeon-3550. We would remember the exact
On 01/06/2010 12:09 PM, Gleb Natapov wrote:
On Wed, Jan 06, 2010 at 05:48:52PM +0800, Sheng Yang wrote:
Hi Beth
I still found the emulated HPET would result in some boot failure. For
example, on my 2.6.30, with HPET enabled, the kernel would fail check_timer(),
especially in timer_irq_works().
On 12/15/2009 09:04 PM, Lucas Meneghel Rodrigues wrote:
On Fri, Dec 11, 2009 at 2:34 PM, Jiri Zupka jzu...@redhat.com wrote:
Hello,
we are writing a KSM_overcommit test. When we calculate memory for the
guest we need to know which architecture the guest is: 32b, 32b with
PAE, or 64b. Because
1 - 100 of 212 matches