Re: [Xen-devel] Why GPA instead of MFN is directly used in emulated NIC
At 10:14 +0800 on 06 Mar (1425633255), openlui wrote:
> At 2015-03-05 19:09:41, "Tim Deegan" wrote:
> > Hi,
> >
> > At 10:54 +0800 on 05 Mar (1425549262), openlui wrote:
> > > 2. From the trace info and qemu-dm's log, it seems that it is the "GPA"
> > > (Guest Physical Address) instead of the "MFN" in the IOREQ's data field
> > > received by qemu-dm:
> >
> > Yes.
>
> Thanks for your reply.
>
> > > I think qemu-dm/rtl8139 should read/write data from the "MFN" address
> > > in host memory instead of the GPA, and I find that there is no hypercall
> > > from dom0 to "translate" the GPA to an MFN in the subsequent xentrace
> > > info. Is my understanding wrong? I would really appreciate your help.
> >
> > The hypercall that qemu uses to map the guest's memory for reading and
> > writing also takes a GFN/GPA. So qemu doesn't need to know what the
> > actual MFN is -- it just does all its operations in GPAs and Xen takes
> > care of the translations.
>
> Could you give some hints about the type of hypercall qemu uses to map
> the guest's memory? I know that there are some xc_map_foreign_xxx
> interfaces in libxc which can do similar work. However, it seems that
> these interfaces should be given an MFN instead of a GFN/GPA.

That's some unfortunate historical API naming and documentation. :( The
argument is in fact a GFN in all cases, but for PV guests, which were
implemented first, the GFN == the MFN. Basically anything in the tools
that deals with HVM guest memory should _always_ be working in GFN/GPA.

Cheers,

Tim.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
Re: [Xen-devel] Why GPA instead of MFN is directly used in emulated NIC
At 2015-03-05 19:09:41, "Tim Deegan" wrote:
> Hi,
>
> At 10:54 +0800 on 05 Mar (1425549262), openlui wrote:
> > 2. From the trace info and qemu-dm's log, it seems that it is the "GPA"
> > (Guest Physical Address) instead of the "MFN" in the IOREQ's data field
> > received by qemu-dm:
>
> Yes.

Thanks for your reply.

> > I think qemu-dm/rtl8139 should read/write data from the "MFN" address
> > in host memory instead of the GPA, and I find that there is no hypercall
> > from dom0 to "translate" the GPA to an MFN in the subsequent xentrace
> > info. Is my understanding wrong? I would really appreciate your help.
>
> The hypercall that qemu uses to map the guest's memory for reading and
> writing also takes a GFN/GPA. So qemu doesn't need to know what the
> actual MFN is -- it just does all its operations in GPAs and Xen takes
> care of the translations.

Could you give some hints about the type of hypercall qemu uses to map the
guest's memory? I know that there are some xc_map_foreign_xxx interfaces
in libxc which can do similar work. However, it seems that these
interfaces should be given an MFN instead of a GFN/GPA.

> Cheers,
>
> Tim.
Re: [Xen-devel] Why GPA instead of MFN is directly used in emulated NIC
Hi,

At 10:54 +0800 on 05 Mar (1425549262), openlui wrote:
> 2. From the trace info and qemu-dm's log, it seems that it is the "GPA"
> (Guest Physical Address) instead of the "MFN" in the IOREQ's data field
> received by qemu-dm:

Yes.

> I think qemu-dm/rtl8139 should read/write data from the "MFN" address
> in host memory instead of the GPA, and I find that there is no hypercall
> from dom0 to "translate" the GPA to an MFN in the subsequent xentrace
> info. Is my understanding wrong? I would really appreciate your help.

The hypercall that qemu uses to map the guest's memory for reading and
writing also takes a GFN/GPA. So qemu doesn't need to know what the
actual MFN is -- it just does all its operations in GPAs and Xen takes
care of the translations.

Cheers,

Tim.
[Xen-devel] Why GPA instead of MFN is directly used in emulated NIC
Hi, all:

I want to learn how the emulated NICs work in Xen. So I booted a DomU with
an emulated rtl8139 NIC, pinged the host from the DomU, and captured the
trace info using the xentrace tool; I then checked the qemu-dm log and the
trace info analyzed by the xenalyze tool. I have enabled debugging in
rtl8139.c and added debug output to qemu-dm where it receives an ioreq.
The host runs Xen 4.4.1 and has EPT enabled.

If I understand correctly, the TX path from DomU to Dom0 is as follows:

1. The RTL8139 driver in the guest kernel writes the Guest Physical
   Address of the data to be sent to the corresponding RTL8139 registers.
2. The registers' address space was marked "special" in hvmloader, so the
   DomU will exit with an EPT_VIOLATION reason.
3. The hypervisor handles the exit in the ept_handle_violation() function
   and then calls hvm_hap_nested_page_fault(gfn). In the latter function,
   the hypervisor gets the MFN from the GFN, determines that the GFN is
   emulated MMIO, and passes the fault on to the MMIO handler.
4. The MMIO handler generates an IOREQ with the related info and pushes it
   onto the shared memory page between the DomU and qemu-dm in Dom0 (set
   up when booting the DomU via hvm_init()), and then notifies qemu-dm via
   an event channel which is also set up in hvm_init().
5. qemu-dm gets the IOREQ and calls the corresponding callbacks registered
   by rtl8139.c in qemu.

However, from the qemu-dm log and the trace info, I have two questions:

1. In step 4 above, is the ioreq passed through the "shared memory page"
   or the "buffered io page"? Both of them are initialized in hvm_init().
2. From the trace info and qemu-dm's log, it seems that it is the "GPA"
   (Guest Physical Address) instead of the "MFN" in the IOREQ's data field
   received by qemu-dm:

a.
EPT_VIOLATION-related records in xentrace:

]  1.744702725 -x-- d3v1 vmexit exit_reason EPT_VIOLATION eip a01624ae
   1.744702725 -x-- d3v1 npf gpa f2051020 q 182 mfn t 4
]  1.744702725 -x-- d3v1 mmio_assist w gpa f2051020 data d4e6c400
   1.744706888 -x-- d3v1 runstate_change d0v0 blocked->runnable

As shown above, the value of data is "d4e6c400"; I think this data
represents the txbuf address written by the DomU, so it should be a GPA of
the DomU. From the code I do find that this data is the "ram_gpa" argument
of the xen/arch/x86/hvm/emulate.c:hvmemul_do_io function.

b. The related ioreq received by qemu-dm and handled by rtl8139.c in Dom0:

xen: I/O request: 1, ptr: 0, port: f2051020, data: d4e6c400, count: 1, size: 4
RTL8139: TxAddr write offset=0x0 val=0xd4e6c400

As shown above, the data field of the ioreq received by qemu-dm is also
"0xd4e6c400", which is a GPA of the DomU.

c. rtl8139.c does the transmit work:

RTL8139: C+ TxPoll write(b) val=0x40
RTL8139: C+ TxPoll normal priority transmission
RTL8139: +++ C+ mode reading TX descriptor 0 from host memory at d4e6c400 = 0xd4e6c400
RTL8139: +++ C+ mode TX descriptor 0 b05a ead20802
RTL8139: +++ C+ Tx mode : transmitting from descriptor 0
RTL8139: +++ C+ Tx mode : descriptor 0 is first segment descriptor
RTL8139: +++ C+ mode transmission buffer allocated space 65536
RTL8139: +++ C+ mode transmit reading 90 bytes from host memory at ead20802 to offset 0
RTL8139: +++ C+ Tx mode : descriptor 0 is last segment descriptor
RTL8139: +++ C+ mode transmitting 90 bytes packet
RTL8139: +++ C+ mode reading TX descriptor 1 from host memory at d4e6c400 = 0xd4e6c410
RTL8139: +++ C+ mode TX descriptor 1
RTL8139: C+ Tx mode : descriptor 1 is owned by host
RTL8139: Set IRQ to 1 (0004 80ff)

As shown above, rtl8139 reads the TX descriptor from HOST MEMORY at
0xd4e6c400, which is a GPA of the DomU.
I think qemu-dm/rtl8139 should read/write data from the "MFN" address in
host memory instead of the GPA, and I find that there is no hypercall from
dom0 to "translate" the GPA to an MFN in the subsequent xentrace info. Is
my understanding wrong? I would really appreciate your help.

--
Best Regards