On Fri, 19 May 2017 at 16:10, Jay Zhou wrote:
>
> Hi Paolo and Wanpeng,
>
> On 2017/5/17 16:38, Wanpeng Li wrote:
> > 2017-05-17 15:43 GMT+08:00 Paolo Bonzini :
> >>> Recently, I have tested the performance before migration and after
> >>> migration failure
> >>> using spec cpu2006
Hi

QEMU hits an assertion when we use a SCSI-3 persistent reservation.
This happens when the SCSI sense is "recovered error",
which leads to scsi_req_complete() being called twice.

static bool scsi_handle_rw_error(SCSIDiskReq *r, int error, bool acct_failed)
{
    bool is_read = (r->req.cmd.mode == SCSI_XFER_FROM_DEV);
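One way to avoid the double completion described above is to treat a "recovered error" sense as success and complete the request exactly once, returning early so the normal completion path does not run again. A minimal model of that idea (all names here are illustrative stand-ins, not QEMU's real SCSI layer):

```python
# Toy model of the double-completion bug: a request must be completed
# exactly once; completing it twice trips an assertion, as in QEMU.

class Request:
    def __init__(self):
        self.completed = False
        self.status = None

    def complete(self, status):
        # QEMU asserts when a request is completed twice; model that here.
        assert not self.completed, "scsi_req_complete called twice"
        self.completed = True
        self.status = status

def handle_rw_error(req, recovered):
    """On a 'recovered error' sense the I/O actually succeeded, so
    complete the request once with success and signal the caller to
    stop, instead of letting the normal path complete it again."""
    if recovered:
        req.complete(0)   # complete once, with success
        return True       # error fully handled; caller must not complete again
    req.complete(-1)      # sketch: report failure for any other sense
    return True
```

With this shape, a "recovered error" results in one successful completion, and any second call to `complete()` is caught by the assertion rather than silently corrupting state.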
On 12/10/2018 10:05, Wangguang wrote:
> Hi
>
> QEMU hits an assertion when we use a SCSI-3 persistent reservation.
>
> This happens when the SCSI sense is "recovered error",
> which leads to scsi_req_complete() being called twice.
>
> static bool scsi_handle_rw_error(SCSIDiskReq *r, int error,
> bool acct_failed)
On 07.11.2017 at 08:18, wang.guan...@zte.com.cn wrote:
> hello
>
> If we create a qcow2 file on a block device,
> we can't get the right disk size from qemu-img info.
>
> [root@host-120-79 qemu]# ./qemu-img create -f qcow2 /dev/zs/lvol0 1G
>
> Formatting
hello
If we create a qcow2 file on a block device,
we can't get the right disk size from qemu-img info.
[root@host-120-79 qemu]# ./qemu-img create -f qcow2 /dev/zs/lvol0 1G
Formatting '/dev/zs/lvol0', fmt=qcow2 size=1073741824 cluster_size=65536
lazy_refcounts=off refcount_bits=16
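For context on where the numbers come from: the guest-visible virtual size lives in the qcow2 header itself (a big-endian u64 at byte offset 24, per the qcow2 specification), while the "disk size" that `qemu-img info` reports is the host-side allocation of the underlying file or device, which is what looks off on a logical volume. A minimal sketch that reads the virtual size straight from the header; the header bytes here are constructed by hand purely for illustration:

```python
import struct

QCOW2_MAGIC = b"QFI\xfb"

def qcow2_virtual_size(header: bytes) -> int:
    """Read the guest-visible size from a qcow2 header:
    a big-endian u64 at byte offset 24."""
    assert header[:4] == QCOW2_MAGIC, "not a qcow2 image"
    return struct.unpack_from(">Q", header, 24)[0]

# Build a minimal fake v3 header for demonstration: magic, version=3,
# no backing file, cluster_bits=16 (64 KiB clusters), size=1 GiB.
hdr = bytearray(104)
hdr[0:4] = QCOW2_MAGIC
struct.pack_into(">I", hdr, 4, 3)        # version
struct.pack_into(">I", hdr, 20, 16)      # cluster_bits
struct.pack_into(">Q", hdr, 24, 1 << 30) # virtual size = 1 GiB
```

Running `qcow2_virtual_size(bytes(hdr))` on this fake header yields 1073741824, matching the `size=` field the Formatting line above prints.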
Hi Xiao,
On 2017/5/19 16:32, Xiao Guangrong wrote:
I do not know why I was removed from the list.
It was CCed to you...
Your comments are very valuable to us; thanks for your quick response.
On 05/19/2017 04:09 PM, Jay Zhou wrote:
Hi Paolo and Wanpeng,
On 2017/5/17 16:38, Wanpeng Li
2017-05-17 15:43 GMT+08:00 Paolo Bonzini :
> Recently, I have tested the performance before migration and after
> migration failure using spec cpu2006 (https://www.spec.org/cpu2006/),
> which is a standard performance evaluation tool.
>
> These are the steps:
> ==
> (1) the version of kmod is 4.4.11 (slightly modified) and the
On 2017/5/17 13:47, Wanpeng Li wrote:
Hi Zhoujian,
2017-05-17 10:20 GMT+08:00 Zhoujian (jay) :
Hi Wanpeng,
On 11/05/2017 14:07, Zhoujian (jay) wrote:
-* Scan sptes if dirty logging has been stopped, dropping those
-* which can be collapsed into a
Hi Zhoujian,
2017-05-17 10:20 GMT+08:00 Zhoujian (jay) :
> Hi Wanpeng,
>
>> > On 11/05/2017 14:07, Zhoujian (jay) wrote:
>> >> -* Scan sptes if dirty logging has been stopped, dropping those
>> >> -* which can be collapsed into a single large-page spte.
Hi Wanpeng,
> > On 11/05/2017 14:07, Zhoujian (jay) wrote:
> >> -* Scan sptes if dirty logging has been stopped, dropping those
> >> -* which can be collapsed into a single large-page spte. Later
> >> -* page faults will create the large-page sptes.
> >> +* Reset
On 2017/5/12 16:09, Xiao Guangrong wrote:
On 05/11/2017 08:24 PM, Paolo Bonzini wrote:
On 11/05/2017 14:07, Zhoujian (jay) wrote:
-* Scan sptes if dirty logging has been stopped, dropping those
-* which can be collapsed into a single large-page spte. Later
-* page
On 05/11/2017 08:24 PM, Paolo Bonzini wrote:
On 11/05/2017 14:07, Zhoujian (jay) wrote:
-* Scan sptes if dirty logging has been stopped, dropping those
-* which can be collapsed into a single large-page spte. Later
-* page faults will create the large-page sptes.
+
2017-05-11 22:18 GMT+08:00 Zhoujian (jay) :
> Hi Wanpeng,
>
>> 2017-05-11 21:43 GMT+08:00 Wanpeng Li :
>> > 2017-05-11 20:24 GMT+08:00 Paolo Bonzini :
>> >>
>> >>
>> >> On 11/05/2017 14:07, Zhoujian (jay) wrote:
>> >>> -*
Hi Wanpeng,
> 2017-05-11 21:43 GMT+08:00 Wanpeng Li :
> > 2017-05-11 20:24 GMT+08:00 Paolo Bonzini :
> >>
> >>
> >> On 11/05/2017 14:07, Zhoujian (jay) wrote:
> >>> -* Scan sptes if dirty logging has been stopped, dropping
> those
> >>> -*
Hi all,
After applying the patch below, the time the memory_global_dirty_log_stop()
function takes is down to milliseconds for a 4 TB memory guest, but I'm not
sure whether this patch will trigger other problems. Does this patch make sense?
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
2017-05-11 21:43 GMT+08:00 Wanpeng Li :
> 2017-05-11 20:24 GMT+08:00 Paolo Bonzini :
>>
>>
>> On 11/05/2017 14:07, Zhoujian (jay) wrote:
>>> -* Scan sptes if dirty logging has been stopped, dropping those
>>> -* which can be collapsed into
2017-05-11 20:24 GMT+08:00 Paolo Bonzini :
>
>
> On 11/05/2017 14:07, Zhoujian (jay) wrote:
>> -* Scan sptes if dirty logging has been stopped, dropping those
>> -* which can be collapsed into a single large-page spte. Later
>> -* page faults will
On 11/05/2017 14:07, Zhoujian (jay) wrote:
> -* Scan sptes if dirty logging has been stopped, dropping those
> -* which can be collapsed into a single large-page spte. Later
> -* page faults will create the large-page sptes.
> +* Reset each vcpu's mmu, then page
Hi Paolo, Dave,
On 2017/4/26 23:46, Paolo Bonzini wrote:
>
>
> On 24/04/2017 18:42, Dr. David Alan Gilbert wrote:
>> I suppose there's a few questions;
>> a) Do we actually need the BQL - and if so why
Enabling and disabling dirty log tracking are operations on memory regions.
That's why they need
On 24/04/2017 18:42, Dr. David Alan Gilbert wrote:
> I suppose there's a few questions;
> a) Do we actually need the BQL - and if so why
> b) What actually takes 13s? It's probably worth figuring
> out where it goes, the whole bitmap is only 1GB isn't it
> even on a 4TB machine, and even
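For a rough sense of scale on the bitmap question above: at one bit per 4 KiB page, the dirty bitmap for a 4 TiB guest is 2^30 bits, i.e. 128 MiB rather than 1 GB. A quick check of that arithmetic (plain back-of-the-envelope code, not QEMU's actual bitmap implementation):

```python
def dirty_bitmap_bytes(guest_mem_bytes, page_size=4096):
    """One dirty bit per guest page, rounded up to whole bytes."""
    pages = (guest_mem_bytes + page_size - 1) // page_size
    return (pages + 7) // 8

TiB = 1 << 40
MiB = 1 << 20

# A 4 TiB guest with 4 KiB pages: 2**30 pages -> 2**30 bits -> 128 MiB.
size = dirty_bitmap_bytes(4 * TiB)
```

So the bitmap itself is small; the 13 seconds presumably goes elsewhere (walking memory regions and page tables under the BQL), which is the question being raised.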
* Yang Hongyang (yanghongy...@huawei.com) wrote:
>
>
> On 2017/4/24 20:06, Juan Quintela wrote:
> > Yang Hongyang wrote:
> >> Hi all,
> >>
> >> We found dirty log switch costs more than 13 seconds while migrating
> >> a 4T memory guest, and dirty log switch is currently
On 2017/4/24 20:06, Juan Quintela wrote:
> Yang Hongyang wrote:
>> Hi all,
>>
>> We found dirty log switch costs more than 13 seconds while migrating
>> a 4T memory guest, and dirty log switch is currently protected by QEMU
>> BQL. This causes guest freeze for a long
Yang Hongyang wrote:
> Hi all,
>
> We found dirty log switch costs more than 13 seconds while migrating
> a 4T memory guest, and dirty log switch is currently protected by QEMU
> BQL. This causes guest freeze for a long time when switching dirty log on,
> and the
Hi all,
We found dirty log switch costs more than 13 seconds while migrating
a 4T memory guest, and dirty log switch is currently protected by QEMU
BQL. This causes guest freeze for a long time when switching dirty log on,
and the migration downtime is unacceptable.
Is there any chance to
Hi,
Sorry in advance; I am not sure whether it is appropriate to ask this
question here. I am new to KVM and want to do some research on qemu-kvm
live migration.
In my environment, I installed CentOS 7.0 with the qemu-kvm 1.5.3 RPM,
but what I want to do research on is the source
On Wed, Oct 30, 2013 at 2:43 AM, Antony Pavlov antonynpav...@gmail.com wrote:
On Tue, 29 Oct 2013 21:09:30 +0800
Nancy nancydream...@gmail.com wrote:
Some years ago I made a set of scripts for building a MIPS Linux kernel and
rootfs from scratch and running it under QEMU.
See
Hi,
1. When will QEMU for MIPS support the watchpoint debug facility? Any hint or
guide to implement that function?
2. qemu-system-mipsel -M malta -kernel vmlinux-2.6.26-1-4kc-malta -hda
debian_lenny_mipsel_small.qcow2 -append root=/dev/hda1 console=ttyS0
kgdboc=ttyS0,115200 kgdbwait -nographic -serial
On Tue, 29 Oct 2013 21:09:30 +0800
Nancy nancydream...@gmail.com wrote:
Some years ago I made a set of scripts for building a MIPS Linux kernel and
rootfs from scratch and running it under QEMU.
See https://github.com/frantony/clab for details,
especially the start-qemu.sh script and the files in
Am 28.10.2010 04:20, schrieb Zhiyuan Shao:
OK. If I get some time in the near future, I will try to improve the
relevant parts (todo list: PAE/PSE(36), IDT, GDT, x86_64, possibly a
pipe-like feature) of QEMU, which I think will be helpful for people
debugging code on the i386 platform.
On Wed, Oct 27, 2010 at 1:10 AM, Zhiyuan Shao zys...@mail.hust.edu.cn wrote:
On Tue, 2010-10-26 at 18:59 +, Blue Swirl wrote:
On Tue, Oct 26, 2010 at 12:22 PM, Zhiyuan Shao zys...@hust.edu.cn wrote:
Hi team,
I am a QEMU user, using QEMU 0.13.0 to debug the Linux kernel
code
On Wed, 2010-10-27 at 20:07 +, Blue Swirl wrote:
On Wed, Oct 27, 2010 at 1:10 AM, Zhiyuan Shao zys...@mail.hust.edu.cn wrote:
On Tue, 2010-10-26 at 18:59 +, Blue Swirl wrote:
On Tue, Oct 26, 2010 at 12:22 PM, Zhiyuan Shao zys...@hust.edu.cn wrote:
Hi team,
I am a QEMU user,
Hi team,
I am a QEMU user, using QEMU 0.13.0 to debug the Linux kernel
code (QEMU+GDB).
During usage, I found the QEMU debugging console (i.e., entered by
pressing Ctrl+Alt+2 in the QEMU SDL window or by passing -monitor stdio to
QEMU on the command line) rather difficult to use. It
On Tue, Oct 26, 2010 at 12:22 PM, Zhiyuan Shao zys...@hust.edu.cn wrote:
Hi team,
I am a QEMU user, using QEMU 0.13.0 to debug the Linux kernel
code (QEMU+GDB).
During usage, I found the QEMU debugging console (i.e., entered by
pressing Ctrl+Alt+2 in the QEMU SDL window or by passing
On Tue, 2010-10-26 at 18:59 +, Blue Swirl wrote:
On Tue, Oct 26, 2010 at 12:22 PM, Zhiyuan Shao zys...@hust.edu.cn wrote:
Hi team,
I am a QEMU user, using QEMU 0.13.0 to debug the Linux kernel
code (QEMU+GDB).
During usage, I found the QEMU debugging console (i.e.,
On Tue, Aug 10, 2010 at 12:17, chandra shekar
chandrashekar...@gmail.com wrote:
Can anyone suggest study materials for starting to learn QEMU and its
internals? I have already read the documentation on the QEMU web page;
other than that, any other materials? Thanks.
I am afraid, the other
Hi, I am Chandra. I am interested in understanding the QEMU code; can anyone
help me with where I should start?
Also, I installed QEMU on Ubuntu 10.04. After installing, when I run QEMU
as per the instructions given on the QEMU web page,
it says a VNC server is running on some IP and that's it. Can someone please
start from vl.c, main()
-mj
On Wed, Aug 4, 2010 at 10:29 AM, chandra shekar
chandrashekar...@gmail.com wrote:
Hi, I am Chandra. I am interested in understanding the QEMU code; can anyone
help me with where I should start?
Also, I installed QEMU on Ubuntu 10.04. After installing, when I run
Hi
I created a fully virtualized DOM-U with Xen, which is emulated by qemu-dm,
and I don't know how to monitor its disk and network from DOM-0.
I know I can press CTRL-ALT-2 to access the monitor, but that has to be
done in DOM-U.
Is there any solution to get the disk and network information of DOM-U
VMware handles kernel code. You are right that x86 code can't be 100%
virtualized
(even at the userland level) but VMware uses a lot of nasty disgusting tricks
in order to work around them. (For example, playing with shadow pagetables
so that a page of modified code is run but if the code
I take it self-modifying kernel code would have serious issues.
Seems likely :-) With hardware support, making things like this work should
be *much* easier.
I seem to recall my attempts to run v2OS (which uses a self-modifying
assembly code boot sequence) inside VMWare crashing badly circa
On Tue, Sep 13, 2005 at 09:48:01PM -0500, Anthony Liguori wrote:
Jim C. Brown wrote:
The x86 cannot be virtualized in the Popek/Goldberg sense, so there's
a couple of fast emulation techniques that are possible. Other than a
hand coded dynamic translator, I reckon qemu + kqemu is about as
On Tue, Sep 13, 2005 at 11:27:39PM -0500, Anthony Liguori wrote:
I reckon kqemu has this same problem... Technically, even in ring 3, if
you run natively, you violate the Popek/Goldberg requirements because of
cpuid. It's just not possible to trap it but it shouldn't matter for
most
On Wed, 14 Sep 2005, Jim C. Brown wrote:
Not familiar with L4ka. I don't believe that UML does virtualization; it simply
runs Linux code as-is but intercepts calls to the kernel.
UML does not do hardware virtualization. UML is a special architecture for
the Linux kernel allowing Linux to
Two side footnotes to your comprehensive explanation:
1) with the SKAS host kernel patch you don't have to ptrace the guest
processes and performance (and security) is improved quite a bit, I
understand.
2) UML is currently being ported to run in ring 0. Why? Not for running on
native
There are a couple of interesting paravirtualization techniques too.
There's the Xen approach (really fast, but very invasive), the L4ka
afterburning (theoretically close to as fast, but less invasive), and
then of course the extremes like UML.
Not familiar with L4ka. I don't believe that
On Wed, Sep 14, 2005 at 01:46:58PM -0500, Anthony Liguori wrote:
You can't readahead beyond a basic block. Taking a trap for each basic
block and translating the block is what QEMU does.
No, QEMU translates everything from guest machine code into its internal codes.
I'm talking about using
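As the reply above says, QEMU does not trap per basic block at run time; it translates each guest basic block into its internal ops once and caches the result keyed by guest PC, so subsequent executions reuse the cached translation. A toy model of that translate-once-then-reuse scheme (all names here are illustrative stand-ins, not QEMU's real TCG API):

```python
# Toy translation-block cache: the core idea behind QEMU's dynamic
# translator. Guest "instructions" and the "codegen" step are stand-ins.

class ToyTranslator:
    def __init__(self, guest_blocks):
        self.guest_blocks = guest_blocks   # pc -> list of guest "insns"
        self.cache = {}                    # pc -> translated block
        self.translations = 0              # count of translation passes

    def translate(self, pc):
        self.translations += 1
        # Stand-in for codegen: map each guest insn to a host "op".
        return [("host_op", insn) for insn in self.guest_blocks[pc]]

    def exec_block(self, pc):
        tb = self.cache.get(pc)
        if tb is None:                     # translate on first execution only
            tb = self.cache[pc] = self.translate(pc)
        return tb
```

Executing the same block twice translates it only once; the cost of translation is amortized over every later execution, which is why this beats trapping on each block.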
On Wed, Sep 14, 2005 at 10:18:24AM -0700, John R. Hogerhuis wrote:
Why disgusting?
Perhaps you meant disgusting because the Intel architecture forces a
virtualizer to handle a bunch of corner cases like this.
That is exactly what I mean.
-- John.
--
Infinite complexity begets
Alexandre Leclerc wrote:
I'm new to qemu and my question is simple and is probably due to my
ignorance. If I compare qemu and vmware, there is a great deal of
emulation speed differences.
Did you try kqemu or qvm86?
--
Pozdrowienia,
Adrian Smarzewski
On Tue, Sep 13, 2005 at 08:36:29AM -0400, Alexandre Leclerc wrote:
Hi all,
I'm new to qemu and my question is simple and is probably due to my
ignorance. If I compare qemu and vmware, there is a great deal of
emulation speed differences.
- Is it because of what qemu is? (i.e. it is a full
On 9/13/05, Adrian Smarzewski [EMAIL PROTECTED] wrote:
Alexandre Leclerc wrote:
I'm new to qemu and my question is simple and is probably due to my
ignorance. If I compare qemu and vmware, there is a great deal of
emulation speed differences.
Did you try kqemu or qvm86?
Yes, with kqemu.
On Tue, Sep 13, 2005 at 09:58:11AM -0500, Anthony Liguori wrote:
Jim C. Brown wrote:
Fabrice had said that he wants
kqemu to be able to do total virtualization (both kernel and userland
bits);
basically all the translation code of qemu would be left unused but the
hardware emulation
No, I got the impression that Fabrice was talking about virtualization the
way VMware, old plex86, and vmbear (new FOSS x86 virtualizer in the works)
do it.
So it'll work w/o needing a 64bit chip.
I hadn't seen vmbear, looks interesting... Full virtualisation on vanilla x86
would be really
Jim C. Brown wrote:
On Tue, Sep 13, 2005 at 09:58:11AM -0500, Anthony Liguori wrote:
Jim C. Brown wrote:
Fabrice had said that he wants
kqemu to be able to do total virtualization (both kernel and userland bits);
basically all the translation code of qemu would be left unused but
No, I got the impression that Fabrice was talking about virtualization the
way VMware, old plex86, and vmbear (new FOSS x86 virtualizer in the
works) do it.
The x86 cannot be virtualized in the Popek/Goldberg sense, so there's
a couple of fast emulation techniques that are possible. Other
Well, VMware guests can recognise that they're in a VM because the
software contains a backdoor INT function, mainly used by VMware Tools
for things like Shared Folders and host-controlled mouse cursors
insides guests. I don't quite remember what the function was for
VMware's backdoor, but you can
Hi, I have read the PDF file "Embedded Linux kernel and driver development
Training lab book.pdf". At page 7, when I boot the kernel with:
qemu -m 32 -kernel /lab/linux-2.6.11.11/arch/i386/boot/bzImage -append clock=pit root=/dev/hda -hda /lab/linux/lab1/data/linux_i386.img -boot c
I meet the
Well, as far as I can see, you're passing the RAW DEVICE NODE as the
root partition instead of the numbered partition convention.
Instead of passing root=/dev/hda, try something like root=/dev/hda1
I hope that helps.
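The suggestion above (point `root=` at a numbered partition rather than the whole disk node) can be sketched as a small helper; this is purely illustrative string handling, not anything QEMU or the kernel provides:

```python
import re

def fix_root_arg(append_args: str) -> str:
    """If root= names a whole IDE/SCSI disk node (e.g. /dev/hda), rewrite
    it to the first partition (/dev/hda1); leave anything else untouched."""
    def repl(m):
        dev = m.group(1)
        if re.fullmatch(r"/dev/[hs]d[a-z]", dev):
            return "root=" + dev + "1"
        return m.group(0)
    return re.sub(r"root=(\S+)", repl, append_args)
```

For the command in question, `fix_root_arg("clock=pit root=/dev/hda")` yields `"clock=pit root=/dev/hda1"`, while an already-numbered `root=/dev/hda1` passes through unchanged.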
p.s.: Next time, please, take your time to read what you're doing
before