Re: [Xen-devel] sleep function in Xen

2015-08-21 Thread xinyue

So sorry for that.
On 2015-08-21 18:09, Jan Beulich wrote:

On 21.08.15 at 09:07, xinyue wrote:

First of all - please don't cross post; removing xen-users.


   I want to read memory in Xen at intervals while the guest OS runs
normally. I didn't find a sleep function in the Xen source, and the delay
function crashes the system. I think the delay function would also stop
the guest OS. Does anyone have an idea of which function I should use?

You should set up a timer.

Jan
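
A minimal sketch of what a timer-based approach could look like, assuming
Xen's common timer interface (init_timer/set_timer from
xen/include/xen/timer.h); the callback body, the names, and the one-second
period are illustrative:

#include <xen/timer.h>
#include <xen/time.h>
#include <xen/smp.h>

static struct timer mem_sample_timer;

/* Runs in softirq context on the CPU the timer was initialised for,
 * so no guest is paused; re-arming from the callback makes it periodic. */
static void mem_sample_fn(void *data)
{
    /* ... read the memory of interest here ... */

    set_timer(&mem_sample_timer, NOW() + SECONDS(1));
}

static void mem_sample_start(void)
{
    init_timer(&mem_sample_timer, mem_sample_fn, NULL, smp_processor_id());
    set_timer(&mem_sample_timer, NOW() + SECONDS(1));
    /* kill_timer(&mem_sample_timer) would tear it down again. */
}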







[Xen-devel] sleep function in Xen

2015-08-21 Thread xinyue

Hi all,

I want to read memory in Xen at intervals while the guest OS runs
normally. I didn't find a sleep function in the Xen source, and the delay
function crashes the system. I think the delay function would also stop
the guest OS. Does anyone have an idea of which function I should use?


Thanks and best regards!

xinyue



Re: [Xen-devel] Performance problem about address translation

2015-07-08 Thread xinyue


On 2015-07-08 14:26, xinyue wrote:

Very sorry for sending the wrong message earlier.
On 2015-07-08 14:13, xinyue wrote:


On 2015-07-07 19:49, Ian Campbell wrote:

On Tue, 2015-07-07 at 11:24 +0800, xinyue wrote:

Please don't use HTML mail and do proper ">" quoting


And after analyzing the performance of the HVM DomU, I found a process
named "evolution-data-" using almost 99.9% CPU. Does anyone know
what this is and why it appears?

evolution-data-server is part of the Evolution mail client. It has
nothing to do with Xen I'm afraid, so you will have to look elsewhere
for why it is taking so much CPU.

Ian.




Sorry for that and thanks very much.

I think the problem may be caused by address alignment. The HVM
DomU crashed after the hypercall, and sometimes Dom0 crashed later with
a "Bus error".


I think the function that caused the crash is get_gfn. The related
code is:


    unsigned long gfn;
    unsigned long mfn;
    struct vcpu *vcpu = current;
    struct domain *d = vcpu->domain;
    uint32_t pfec = PFEC_page_present;
    p2m_type_t t;

    gfn = paging_gva_to_gfn(current, 0xc029, &pfec);
    mfn = get_gfn(d, gfn, &t);

Am I missing some type conversion?


Thanks and best regards!

xinyue



Thanks for all the advice. I found the problem appeared because I forgot
to call put_gfn.
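
For reference, a sketch of the balanced pattern, assuming the Xen 4.x p2m
interfaces used above (get_gfn takes a reference that put_gfn must drop on
every exit path; the function name, the error handling, and the reuse of
the 0xc029 address from the snippet above are illustrative):

#include <xen/mm.h>
#include <asm/p2m.h>
#include <asm/paging.h>

static int read_guest_page(void)
{
    uint32_t pfec = PFEC_page_present;
    struct domain *d = current->domain;
    unsigned long gfn;
    mfn_t mfn;
    p2m_type_t t;
    void *va;

    gfn = paging_gva_to_gfn(current, 0xc029, &pfec);

    mfn = get_gfn(d, gfn, &t);        /* takes a reference on the p2m entry */
    if ( !mfn_valid(mfn_x(mfn)) )
    {
        put_gfn(d, gfn);              /* drop the reference on error paths too */
        return -EINVAL;
    }

    va = map_domain_page(mfn_x(mfn)); /* temporary hypervisor mapping */
    /* ... read the page contents here ... */
    unmap_domain_page(va);

    put_gfn(d, gfn);                  /* balance the get_gfn() above */
    return 0;
}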


Thanks again and best regards!

xinyue





Re: [Xen-devel] Performance problem about address translation

2015-07-07 Thread xinyue

On 2015-07-06, Mon, 16:11:02, Andrew Cooper wrote:

On 06/07/2015 08:58, xinyue wrote:

On 2015-07-06, Mon, 15:44:53, Andrew Cooper wrote:

On 06/07/2015 08:22, xinyue wrote:

Hi,

    I want to translate a virtual address in an HVM DomU to a virtual
address in Xen. But when I use the functions paging_gva_to_gfn and
get_gfn, performance drops quickly, the machine becomes very hot, and
then I have to force it to shut down.



Your machine clearly isn't cooled sufficiently, which is the first problem.




The code I used is below:

    uint32_t pfec = PFEC_page_present;
    unsigned long gfn;
    unsigned long mfn;
    unsigned long virtaddr;
    struct vcpu *vcpu = current;
    struct domain *d = vcpu->domain;
    p2m_type_t t;

    gfn = paging_gva_to_gfn(current, 0xc029, &pfec);
    mfn = get_gfn(d, gfn, &t);
    virtaddr = map_domain_page(mfn_x(mfn));

I also used the dbg_hvm_va2mfn function in debug.c; the performance
problem is still present.



Walking pagetables in software is slow.  There is no getting around this.

Your performance problems will be caused by performing the operation far
too often.  You should find a way to reduce this.




Thanks very much. I think I only do this once, and after the translation
is done the performance does not return to normal. Does that mean it will
recover if I wait long enough?


It almost certainly means you are not doing it just once like you suppose.

~Andrew

    Yes, you are right. I added a printk in get_gfn and found it was
called many times. I'll check why that happens. Thanks a lot!

Sorry, I was mistaken: the calls to these functions in the log appear
before I invoke them. The functions I invoke go through a hypercall from
the HVM DomU, and from the log I think I invoked them only once. Maybe the
performance problem is caused by the parameters I used? Could you help me
check whether I used them improperly, as posted before?

Another question: when I add a printk to the paging_gva_to_gfn function,
performance also drops so seriously that the HVM DomU can't even boot
successfully. I am wondering why.
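
A plausible explanation is that paging_gva_to_gfn sits on hot paths
(shadow pagetable handling and instruction emulation, for instance), so a
printk per call is itself heavyweight enough to stall guest boot. A sketch
of lower-overhead instrumentation, counting invocations and logging only
occasionally (the helper name and sampling mask are illustrative):

#include <xen/lib.h>    /* printk() */

/* Call this at the top of the function being instrumented.  The counter
 * is deliberately racy; an exact count is not needed for this diagnosis. */
static unsigned long hot_calls;

static inline void count_hot_call(const char *who)
{
    if ( !(++hot_calls & 0xffffUL) )    /* log once per 65536 calls */
        printk("%s: %lu calls so far\n", who, hot_calls);
}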

Thanks again and best regards!

And after analyzing the performance of the HVM DomU, I found a process
named "evolution-data-" using almost 99.9% CPU. Does anyone know what
this is and why it appears?


    

xinyue






[Xen-devel] Performance problem about address translation

2015-07-06 Thread xinyue
Hi,

    I want to translate a virtual address in an HVM DomU to a virtual
address in Xen. But when I use the functions paging_gva_to_gfn and
get_gfn, performance drops quickly, the machine becomes very hot, and
then I have to force it to shut down.

The code I used is below:

    uint32_t pfec = PFEC_page_present;
    unsigned long gfn;
    unsigned long mfn;
    unsigned long virtaddr;
    struct vcpu *vcpu = current;
    struct domain *d = vcpu->domain;
    p2m_type_t t;                     /* filled in by get_gfn() */

    gfn = paging_gva_to_gfn(current, 0xc029, &pfec);
    mfn = get_gfn(d, gfn, &t);
    virtaddr = map_domain_page(mfn_x(mfn));

I also used the dbg_hvm_va2mfn function in debug.c; the performance
problem is still present.

I don't know why; could someone give me some advice?

Thanks for any advice and best regards!

xinyue


Re: [Xen-devel] [Xen-users] xen physical address(paddr)and machine address (maddr)

2015-06-30 Thread xinyue
Sorry for the improper behavior.

On 2015-06-29, Mon, 20:57:41, Ian Campbell wrote:

On Mon, 2015-06-29 at 18:35 +0800, xinyue wrote:

Please don't top post. Please have another read of the Etiquette section
of http://wiki.xen.org/wiki/Asking_Xen_Devel_Questions .

> Thanks very much, I think it's very helpful and I'll try it.
>
> Does "for an arbitrary guest page it is unlikely to be from the
> xenheap" mean that some guest pages can't be mapped to a xenheap page?

Any given page is either a domheap page or a xenheap page, so there is no
mapping in the sense you seem to be thinking of.

So the domheap pages and xenheap pages are both controlled by Xen? Guest
pages can be mapped to domheap pages, and we can monitor these pages in
the Xen hypervisor to see the content of guest pages?

> And I wonder, in HVM domains with EPT supported, is there any difference
> if I won't monitor guest domains' memory regions?

I'm afraid I don't understand the question.

And sorry for the typo. I mean: in HVM domains with EPT supported, is
there any difference if I want to monitor guest domains' memory regions in
the Xen hypervisor? Maybe in the functions which translate a guest virtual
address to a Xen virtual address.


Re: [Xen-devel] [Xen-users] xen physical address(paddr)and machine address (maddr)

2015-06-30 Thread xinyue
Thanks very much, I think it's very helpful and I'll try it.

Does "for an arbitrary guest page it is unlikely to be from the
xenheap" mean that some guest pages can't be mapped to a xenheap page?

And I wonder, in HVM domains with EPT supported, is there any difference
if I won't monitor guest domains' memory regions?

Thanks again and best regards!

xinyue


On 2015-06-29, Mon, 16:21:53, Ian Campbell wrote:

On Sun, 2015-06-28 at 13:13 +0800, xinyue wrote:

Per http://wiki.xen.org/wiki/Asking_Xen_Devel_Questions#Observe_list_etiquette
please do not cross post. Your question seems to be Xen-development
related, so I have put the other two lists in bcc.

> I want to hash the kernel code segment of HVM DomU in the Xen
> hypervisor, so I have to translate the virtual address in the VM to a
> virtual address in Xen. Is there some easy way?

You need to first translate the guest virtual address to a guest physical
address and then to a machine address, which you can then map into Xen. I
think you need paging_gva_to_gfn for the first step, then one of the
get_gfn* functions, then map_domain_page. I don't know if there is a
helper which will simplify all this.

> I read the source code about memory in Xen and am confused about the
> relationship between the paddr and maddr. How does HVM with EPT
> translate between them? Is a paddr the same as a virtual address in the
> Xen heap?

No. A paddr is a physical address, not a virtual one. The latest xen.git
contains a comment in xen/include/xen/mm.h which describes the different
types of memory.

To get a Xen virtual address for a domheap page you need to use
(un)map_domain_page on the underlying machine address (or struct
page_info *) to create a temporary mapping. For xenheap pages you can use
other mechanisms, but for an arbitrary guest page it is unlikely to be
from the xenheap.
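
A sketch of the three-step chain Ian describes, under the same Xen 4.x era
interfaces as the snippets earlier in this archive (the function name is
hypothetical, the additive checksum stands in for a real hash, and
kernel_va is a placeholder for the guest kernel address of interest):

#include <xen/mm.h>
#include <asm/p2m.h>
#include <asm/paging.h>

/* Checksum one guest page: gva -> gfn -> mfn -> temporary Xen mapping. */
static int checksum_guest_page(unsigned long kernel_va, uint32_t *sum)
{
    uint32_t pfec = PFEC_page_present;
    struct domain *d = current->domain;
    unsigned long gfn;
    mfn_t mfn;
    p2m_type_t t;
    const uint8_t *p;
    unsigned int i;

    gfn = paging_gva_to_gfn(current, kernel_va, &pfec);  /* step 1: gva -> gfn */

    mfn = get_gfn(d, gfn, &t);                           /* step 2: gfn -> mfn */
    if ( !mfn_valid(mfn_x(mfn)) )
    {
        put_gfn(d, gfn);
        return -EINVAL;
    }

    p = map_domain_page(mfn_x(mfn));                     /* step 3: map into Xen */
    for ( *sum = 0, i = 0; i < PAGE_SIZE; i++ )
        *sum += p[i];
    unmap_domain_page(p);

    put_gfn(d, gfn);
    return 0;
}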




[Xen-devel] xen physical address(paddr)and machine address (maddr)

2015-06-28 Thread xinyue
Hi all,

   I want to hash the kernel code segment of an HVM DomU in the Xen
hypervisor, so I have to translate the virtual address in the VM to a
virtual address in Xen. Is there some easy way?

   I read the source code about memory in Xen and am confused about the
relationship between paddr and maddr. How does HVM with EPT translate
between them? Is a paddr the same as a virtual address in the Xen heap?


Thanks for any advice, and best regards!


xinyue
