---
>> From: Orit Wasserman
>> To: 陳韋任 (Wei-Ren Chen)
>> Cc:
>> Date: Tue, 19 Jun 2012 12:01:08 +0300
>> Subject: Re: [Qemu-devel] How to measure guest memory access
>> (qemu_ld/qemu_st) time?
CC'ed to the mailing list.
--
Wei-Ren Chen (陳韋任)
Computer Systems Lab, Institute of Information Science,
Academia Sinica, Taiwan (R.O.C.)
Tel:886-2-2788-3799 #1667
Homepage: http://people.cs.nctu.edu.tw/~chenwj
--- Begin Message ---
On 06/19/2012 11:49 AM, 陳韋任 (Wei-Ren Chen) wrote:
> Mind me CC this to ML? :)
On Tue, Jun 19, 2012 at 4:26 AM, Lluís Vilanova wrote:
> Blue Swirl writes:
>
>> On Mon, Jun 18, 2012 at 8:28 AM, 陳韋任 (Wei-Ren Chen)
>> wrote:
The reason why we want to do the measuring is that we want to use KVM (sounds
like a crazy idea) MMU virtualization to speed up the guest -> host memory
address translation. [...]
On Tue, Jun 19, 2012 at 3:52 PM, 陳韋任 (Wei-Ren Chen)
wrote:
>> But if QEMU/TCG is doing a GVA->GPA translation as Wei-Ren said, I don't see
>> how KVM can help.
>
> Just want to clarify. QEMU maintains a TLB (env->tlb_table) which stores the
> GVA -> HVA mapping; it is used to speed up the address translation. [...]
Michael Kang writes:
> On Tue, Jun 19, 2012 at 4:26 AM, Lluís Vilanova wrote:
[...]
>> I could understand having multiple 32bit regions in QEMU's virtual space (no
>> need for KVM), one per guest page table, and then simply adding an offset to
>> every memory access to redirect it to the appropriate [...]
On 06/19/2012 11:49 AM, 陳韋任 (Wei-Ren Chen) wrote:
> Mind me CC this to ML? :)
Sure, I will read the threads to understand more.
Orit
>
>> Well it was a while back (2008-9), the company was acquired by IBM a year
>> later:
>> http://www.linux-kvm.org/wiki/images/9/98/KvmForum2008%24kdf2008_2.pd
> But if QEMU/TCG is doing a GVA->GPA translation as Wei-Ren said, I don't see
> how KVM can help.
Just want to clarify. QEMU maintains a TLB (env->tlb_table) which stores the
GVA -> HVA mapping; it is used to speed up the address translation. If the TLB
misses, QEMU will call cpu_arm_handle_mmu_fault [...]
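For readers outside QEMU, the fast path chenwj describes can be sketched roughly as follows. Only `env->tlb_table` comes from the thread; the field names, table size, and page size below are simplified stand-ins for QEMU's actual softmmu structures, not its real layout:

```c
/* Minimal sketch of a QEMU-style software TLB lookup (GVA -> HVA).
 * Field names, table size, and page size are simplified stand-ins. */
#include <stddef.h>
#include <stdint.h>

#define TLB_SIZE  256                       /* entries (power of two) */
#define PAGE_BITS 12                        /* 4 KiB guest pages */
#define PAGE_MASK (~(uintptr_t)((1u << PAGE_BITS) - 1))

typedef struct {
    uintptr_t addr_read;  /* tag: guest virtual page address */
    intptr_t  addend;     /* HVA = GVA + addend within this page */
} TLBEntry;

typedef struct {
    TLBEntry tlb_table[TLB_SIZE];
} CPUState;

/* Fast path: index by guest page number, compare the tag.  On a miss
 * the caller falls back to the MMU-fault slow path, which walks the
 * guest page table and refills the entry. */
static void *tlb_lookup_read(CPUState *env, uintptr_t gva)
{
    size_t idx = (gva >> PAGE_BITS) & (TLB_SIZE - 1);
    TLBEntry *e = &env->tlb_table[idx];
    if (e->addr_read == (gva & PAGE_MASK)) {
        return (void *)(gva + e->addend);   /* hit: direct host address */
    }
    return NULL;                            /* miss */
}
```

The hit case is a handful of host instructions (index, compare, add), which is why the thread focuses on how much time the remainder, the slow path, costs.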
Blue Swirl writes:
> On Mon, Jun 18, 2012 at 8:28 AM, 陳韋任 (Wei-Ren Chen)
> wrote:
>>> The reason why we want to do the measuring is that we want to use KVM
>>> (sounds like a crazy idea) MMU virtualization to speed up the guest -> host
>>> memory address translation.
>>> I talked to some people at LinuxCon Japan, including Paolo, about this
>>> idea. [...]
On Mon, Jun 18, 2012 at 6:57 AM, 陳韋任 (Wei-Ren Chen)
wrote:
>> The idea looks nice, but instead of different TLB functions selected
>> at configure time, the optimization should be enabled by default. [...]
On Mon, Jun 18, 2012 at 4:59 AM, YeongKyoon Lee
wrote:
>> The idea looks nice, but instead of different TLB functions selected
>> at configure time, the optimization should be enabled by default. [...]
> The reason why we want to do the measuring is that we want to use KVM (sounds
> like a crazy idea) MMU virtualization to speed up the guest -> host memory
> address translation.
> I talked to some people at LinuxCon Japan, including Paolo, about this idea.
> The feedback I got is we can only use shadow [...]
> The idea looks nice, but instead of different TLB functions selected
> at configure time, the optimization should be enabled by default.
>
> Maybe a 'call' instruction could be used to jump to the slow path,
> that way the slow path could be shared.
I don't understand what "maybe a 'call' instruction could be used to jump to
the slow path" [...]
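As a rough illustration of what the 'call' suggestion seems to aim at (all names here are invented for the sketch, not QEMU code): every access site keeps only the cheap inline check, and all sites transfer to a single shared out-of-line routine on a miss, so the bulky miss handling exists once instead of once per site:

```c
/* Sketch of sharing one out-of-line slow path among many inline fast
 * paths -- the code-size saving behind the 'call' suggestion.  All
 * names are invented for illustration. */
#include <stdint.h>

static unsigned slow_path_entries;  /* how often the shared path ran */

/* The one shared slow path: in QEMU this would walk the guest page
 * table and refill the TLB; here it just records that it was reached. */
static uint8_t shared_slow_load(uintptr_t gva)
{
    (void)gva;
    slow_path_entries++;
    return 0;  /* placeholder value */
}

/* Each access site inlines only the tag check and *calls* the shared
 * routine on a miss, instead of duplicating its body. */
static uint8_t load_site(uintptr_t gva, uintptr_t cached_page_tag,
                         const uint8_t *cached_hva)
{
    if ((gva & ~(uintptr_t)0xfff) == cached_page_tag) {
        return cached_hva[gva & 0xfff];      /* fast path: hit */
    }
    return shared_slow_load(gva);            /* slow path: one copy */
}
```

In generated TCG code the transfer would be an emitted host `call` instruction rather than a C function call, but the layout argument is the same.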
> The idea looks nice, but instead of different TLB functions selected
> at configure time, the optimization should be enabled by default.
>
> Maybe a 'call' instruction could be used to jump to the slow path,
> that way the slow path could be shared.
I had considered the approach of sharing the slow path [...]
> Maybe a 'call' instruction could be used to jump to the slow path,
> that way the slow path could be shared.
>
> Thanks.
>
> --- Original Message ---
> Sender : 陳韋任 (Wei-Ren Chen)
> Date : 2012-06-14 12:31 (GMT+09:00)
> Title : Re: [Qemu-devel] How to measure guest memory access (qemu_ld/qemu_st) time?
Laurent Desnogues writes:
> On Fri, Jun 15, 2012 at 12:30 AM, Lluís Vilanova wrote:
> [...]
>> Now that I think of it, you will have problems generating code to surround
>> each
>> qemu_ld/st with a lightweight mechanism to get the time. In x86 it would be
>> rdtsc, but you want to generate a host rdtsc instruction inside the code [...]
On Fri, Jun 15, 2012 at 12:30 AM, Lluís Vilanova wrote:
[...]
> Now that I think of it, you will have problems generating code to surround
> each
> qemu_ld/st with a lightweight mechanism to get the time. In x86 it would be
> rdtsc, but you want to generate a host rdtsc instruction inside the code [...]
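In host C code, the bracketing Lluís describes would look something like the sketch below. This is only an illustration: real TCG would have to emit the rdtsc pair into the generated code around each qemu_ld/st, and the non-x86 branch is just a stand-in so the sketch also builds elsewhere.

```c
/* Sketch of timing one guest memory access by bracketing it with the
 * x86 time-stamp counter.  On non-x86 hosts a monotonic clock stands
 * in so the sketch still builds. */
#include <stdint.h>

#if defined(__i386__) || defined(__x86_64__)
static inline uint64_t read_cycles(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi) : : "memory");
    return ((uint64_t)hi << 32) | lo;
}
#else
#include <time.h>
static inline uint64_t read_cycles(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}
#endif

static uint64_t mem_access_cycles;  /* total over all bracketed accesses */

/* Stands in for one qemu_ld: do the access between two counter reads
 * and accumulate the difference. */
static uint8_t timed_load(const uint8_t *hva)
{
    uint64_t t0 = read_cycles();
    uint8_t v = *hva;
    mem_access_cycles += read_cycles() - t0;
    return v;
}
```

The catch Lluís raises is visible even here: the two counter reads cost cycles themselves, so the measurement perturbs exactly the short sequence being measured.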
I don't know much about the instrumentation approach of Lluís; however, I roughly estimated the guest memory access (qemu_ld/st) overhead in a legacy way last year.
My idea was indirect performance estimation by measuring the generated host instructions [...]
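The arithmetic behind such an indirect estimate might look like the following; the helper and the numbers in the usage note are hypothetical, not YeongKyoon's actual figures:

```c
/* Hypothetical sketch of the indirect estimate: if each qemu_ld/st
 * expands to about k host instructions and n_ldst of them execute,
 * their share of all executed host instructions approximates the
 * share of time spent on guest memory access (assuming roughly
 * uniform cycles per instruction). */
#include <stdint.h>

static double estimated_mem_share(uint64_t n_ldst, double k_insns_per_ldst,
                                  uint64_t total_host_insns)
{
    return (double)n_ldst * k_insns_per_ldst / (double)total_host_insns;
}
```

For example, one million qemu_ld/st ops at roughly 10 host instructions each out of 50 million executed host instructions would suggest about a 20% share. The obvious caveat is that instruction counts ignore cache misses and branch costs, which is why it is only an indirect estimate.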
陳韋任 (Wei-Ren Chen) writes:
>> Unfortunately, I had the bad idea of rebasing all my series on top of the
>> latest
>> makefile changes, and I'll have to go through each patch to check it's still
>> working (I'm sure some of them broke).
> Need some help? :)
Well, it's just a matter of going through [...]
陳韋任 (Wei-Ren Chen) writes:
> On Wed, Jun 13, 2012 at 12:43:28PM +0200, Laurent Desnogues wrote:
>> On Wed, Jun 13, 2012 at 5:14 AM, 陳韋任 (Wei-Ren Chen)
>> wrote:
>> > Hi all,
>> >
>> > I suspect that guest memory access (qemu_ld/qemu_st) accounts for the
>> > majority of time spent in system mode. [...]
[...] latest sources.
Thanks.
--- Original Message ---
Sender : 陳韋任 (Wei-Ren Chen)
Date : 2012-06-14 12:31 (GMT+09:00)
Title : Re: [Qemu-devel] How to measure guest memory access (qemu_ld/qemu_st)
time?
> As a side note, it might be interesting to gather statistics about the hit
> rate of the QEMU TLB. Another thing to consider is speeding up the
> fast path; see YeongKyoon Lee RFC patch:
>
> http://www.mail-archive.com/qemu-devel@nongnu.org/msg91294.html
I only see PATCH 0/3, any idea on where [...]
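The hit-rate statistic suggested above amounts to two counters and a ratio; a sketch with invented names (QEMU's TLB code does not ship these counters):

```c
/* Sketch of TLB hit-rate accounting: count lookups and misses, and
 * report hits/lookups.  Names are invented for illustration. */
#include <stdint.h>

typedef struct {
    uint64_t lookups;
    uint64_t misses;
} TLBStats;

/* Call once per qemu_ld/st lookup; hit is nonzero on a fast-path hit. */
static void tlb_stat_lookup(TLBStats *s, int hit)
{
    s->lookups++;
    if (!hit) {
        s->misses++;
    }
}

static double tlb_hit_rate(const TLBStats *s)
{
    if (s->lookups == 0) {
        return 0.0;
    }
    return (double)(s->lookups - s->misses) / (double)s->lookups;
}
```

A high hit rate would mean the fast path dominates, shifting the optimization target from the slow path to the per-access check itself, which is what YeongKyoon Lee's RFC patch addresses.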
> Unfortunately, I had the bad idea of rebasing all my series on top of the
> latest
> makefile changes, and I'll have to go through each patch to check it's still
> working (I'm sure some of them broke).
Need some help? :)
Regards,
chenwj
--
Wei-Ren Chen (陳韋任)
Computer Systems Lab, Institute of Information Science,
Academia Sinica, Taiwan (R.O.C.)
On Wed, Jun 13, 2012 at 12:43:28PM +0200, Laurent Desnogues wrote:
> On Wed, Jun 13, 2012 at 5:14 AM, 陳韋任 (Wei-Ren Chen)
> wrote:
> > Hi all,
> >
> > I suspect that guest memory access (qemu_ld/qemu_st) accounts for the
> > majority of time spent in system mode. I would like to know precisely how
> > much (if possible). [...]
Stefan Hajnoczi writes:
> On Wed, Jun 13, 2012 at 4:14 AM, 陳韋任 (Wei-Ren Chen)
> wrote:
>> I suspect that guest memory access (qemu_ld/qemu_st) accounts for the
>> majority of time spent in system mode. I would like to know precisely how
>> much (if possible).
>> We used tools like perf [1] before [...]
Hi all,
I suspect that guest memory access (qemu_ld/qemu_st) accounts for the majority
of time spent in system mode. I would like to know precisely how much (if
possible).
We used tools like perf [1] before, but since the logic of guest memory access
is also embedded in the host binary, not only in helpers [...]