Re: [PATCH V1 13/16] xen/ioreq: Make x86's invalidate qemu mapcache handling common

2020-10-07 Thread Oleksandr



On 07.10.20 13:38, Julien Grall wrote:

Hi Oleksandr,


Hi Julien.





On 02/10/2020 10:55, Oleksandr wrote:
If I got it correctly, there won't be a suitable common place to set the 
qemu_mapcache_invalidate flag anymore, 
as XENMEM_decrease_reservation is not the only place where we need to 
decide whether to set it.
By analogy, on Arm we probably want to do so in 
guest_physmap_remove_page (or maybe better in p2m_remove_mapping).

Julien, what do you think?


At the moment, the Arm code doesn't explicitly remove the existing 
mapping before inserting the new mapping. Instead, this is done 
implicitly by p2m_set_entry().


Got it.





So I think we want to invalidate the QEMU mapcache in p2m_set_entry() 
if the old entry is a RAM page *and* the new MFN is different.


Thank you. I hope the following is close to what was suggested (I haven't 
tested it yet):



diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ae8594f..512eea9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1073,7 +1073,14 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
  */
 if ( p2m_is_valid(orig_pte) &&
  !mfn_eq(lpae_get_mfn(*entry), lpae_get_mfn(orig_pte)) )
+    {
+#ifdef CONFIG_IOREQ_SERVER
+        if ( domain_has_ioreq_server(p2m->domain) &&
+             (p2m->domain == current->domain) &&
+             p2m_is_ram(orig_pte.p2m.type) )
+            p2m->domain->qemu_mapcache_invalidate = true;
+#endif
 p2m_free_entry(p2m, orig_pte, level);
+    }

 out:
 unmap_domain_page(table);


But, if I got the review comments correctly [1], the 
qemu_mapcache_invalidate variable should be per-vcpu instead of per-domain?


[1] https://patchwork.kernel.org/patch/11803383/



--
Regards,

Oleksandr Tyshchenko




Re: [PATCH V1 13/16] xen/ioreq: Make x86's invalidate qemu mapcache handling common

2020-10-07 Thread Julien Grall

Hi Oleksandr,

On 02/10/2020 10:55, Oleksandr wrote:
If I got it correctly, there won't be a suitable common place to set the 
qemu_mapcache_invalidate flag anymore, 
as XENMEM_decrease_reservation is not the only place where we need to 
decide whether to set it.
By analogy, on Arm we probably want to do so in 
guest_physmap_remove_page (or maybe better in p2m_remove_mapping).

Julien, what do you think?


At the moment, the Arm code doesn't explicitly remove the existing 
mapping before inserting the new mapping. Instead, this is done 
implicitly by p2m_set_entry().


So I think we want to invalidate the QEMU mapcache in p2m_set_entry() if 
the old entry is a RAM page *and* the new MFN is different.


Cheers,

--
Julien Grall



Re: [PATCH V1 13/16] xen/ioreq: Make x86's invalidate qemu mapcache handling common

2020-10-02 Thread Oleksandr



Hi Jan


On 25.09.20 16:05, Oleksandr wrote:


On 25.09.20 10:03, Jan Beulich wrote:

Hi Jan.


On 24.09.2020 18:45, Oleksandr wrote:

On 24.09.20 14:16, Jan Beulich wrote:

Hi Jan


On 22.09.2020 21:32, Oleksandr wrote:

On 16.09.20 11:50, Jan Beulich wrote:

On 10.09.2020 22:22, Oleksandr Tyshchenko wrote:

--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1651,6 +1651,11 @@ long do_memory_op(unsigned long cmd, 
XEN_GUEST_HANDLE_PARAM(void) arg)

    break;
    }
    +#ifdef CONFIG_IOREQ_SERVER
+    if ( op == XENMEM_decrease_reservation )
+    curr_d->qemu_mapcache_invalidate = true;
+#endif

I don't see why you put this right into decrease_reservation(). This
isn't just to avoid the extra conditional, but first and foremost to
avoid bypassing the earlier return from the function (in the case of
preemption). In the context of this I wonder whether the ordering of
operations in hvm_hypercall() is actually correct.
Good point, indeed we may return earlier in case of preemption, I 
missed

that.
Will move it to decrease_reservation(). But, we may return even 
earlier

in case of error...
Now I am wondering whether we should move it to the very beginning of
command processing or not?

In _this_ series I'd strongly recommend you keep things working as
they are. If independently you think you've found a reason to
re-order certain operations, then feel free to send a patch with
suitable justification.

Of course, I will try to retain current behavior.



I'm also unconvinced curr_d is the right domain in all cases here;
while this may be a pre-existing issue in principle, I'm afraid it
gets more pronounced by the logic getting moved to common code.

Sorry I didn't get your concern here.

Well, you need to be concerned whose qemu_mapcache_invalidate flag
you set.

May I ask: in what cases is *curr_d* the right domain?

When a domain does a decrease-reservation on itself. I thought
that's obvious. But perhaps your question was rather meant as to
whether a->domain ever is _not_ the right one?

No, my question was about *curr_d*. I saw your answer
> I'm also unconvinced curr_d is the right domain in all cases here;
and just wanted to clarify these cases. Sorry if I was unclear.





We need to make sure that the domain is using IOREQ server(s) at least.
Hopefully, we have a helper for this,
which is hvm_domain_has_ioreq_server(). Please clarify: is there anything
else I should take care of?

Nothing I can recall / think of right now, except that the change
may want to come under a different title and with a different
description. As indicated, I don't think this is correct for PVH
Dom0 issuing the request against a HVM DomU, and addressing this
will likely want this moved out of hvm_memory_op() anyway. Of
course an option is to split this into two patches - the proposed
bug fix (perhaps wanting backporting) and then the moving of the
field out of arch.hvm. If you feel uneasy about the bug fix part,
let me know and I (or maybe Roger) will see to put together a
patch.


Thank you for the clarification.

Yes, it would be really nice if you (or maybe Roger) could create a 
patch for the bug fix part.



Thank you for your patch [1].

If I got it correctly, there won't be a suitable common place to set the 
qemu_mapcache_invalidate flag anymore, 
as XENMEM_decrease_reservation is not the only place where we need to 
decide whether to set it.
By analogy, on Arm we probably want to do so in 
guest_physmap_remove_page (or maybe better in p2m_remove_mapping).

Julien, what do you think?


I will modify current patch to not alter the common code.


[1] https://patchwork.kernel.org/patch/11803383/

--

Regards,

Oleksandr Tyshchenko




Re: [PATCH V1 13/16] xen/ioreq: Make x86's invalidate qemu mapcache handling common

2020-09-25 Thread Oleksandr



On 25.09.20 10:03, Jan Beulich wrote:

Hi Jan.


On 24.09.2020 18:45, Oleksandr wrote:

On 24.09.20 14:16, Jan Beulich wrote:

Hi Jan


On 22.09.2020 21:32, Oleksandr wrote:

On 16.09.20 11:50, Jan Beulich wrote:

On 10.09.2020 22:22, Oleksandr Tyshchenko wrote:

--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1651,6 +1651,11 @@ long do_memory_op(unsigned long cmd, 
XEN_GUEST_HANDLE_PARAM(void) arg)
break;
}

+#ifdef CONFIG_IOREQ_SERVER

+if ( op == XENMEM_decrease_reservation )
+curr_d->qemu_mapcache_invalidate = true;
+#endif

I don't see why you put this right into decrease_reservation(). This
isn't just to avoid the extra conditional, but first and foremost to
avoid bypassing the earlier return from the function (in the case of
preemption). In the context of this I wonder whether the ordering of
operations in hvm_hypercall() is actually correct.

Good point, indeed we may return earlier in case of preemption, I missed
that.
Will move it to decrease_reservation(). But, we may return even earlier
in case of error...
Now I am wondering whether we should move it to the very beginning of
command processing or not?

In _this_ series I'd strongly recommend you keep things working as
they are. If independently you think you've found a reason to
re-order certain operations, then feel free to send a patch with
suitable justification.

Of course, I will try to retain current behavior.



I'm also unconvinced curr_d is the right domain in all cases here;
while this may be a pre-existing issue in principle, I'm afraid it
gets more pronounced by the logic getting moved to common code.

Sorry I didn't get your concern here.

Well, you need to be concerned whose qemu_mapcache_invalidate flag
you set.

May I ask: in what cases is *curr_d* the right domain?

When a domain does a decrease-reservation on itself. I thought
that's obvious. But perhaps your question was rather meant as to
whether a->domain ever is _not_ the right one?

No, my question was about *curr_d*. I saw your answer
> I'm also unconvinced curr_d is the right domain in all cases here;
and just wanted to clarify these cases. Sorry if I was unclear.





We need to make sure that the domain is using IOREQ server(s) at least.
Hopefully, we have a helper for this,
which is hvm_domain_has_ioreq_server(). Please clarify: is there anything
else I should take care of?

Nothing I can recall / think of right now, except that the change
may want to come under a different title and with a different
description. As indicated, I don't think this is correct for PVH
Dom0 issuing the request against a HVM DomU, and addressing this
will likely want this moved out of hvm_memory_op() anyway. Of
course an option is to split this into two patches - the proposed
bug fix (perhaps wanting backporting) and then the moving of the
field out of arch.hvm. If you feel uneasy about the bug fix part,
let me know and I (or maybe Roger) will see to put together a
patch.


Thank you for the clarification.

Yes, it would be really nice if you (or maybe Roger) could create a 
patch for the bug fix part.


--
Regards,

Oleksandr Tyshchenko




Re: [PATCH V1 13/16] xen/ioreq: Make x86's invalidate qemu mapcache handling common

2020-09-25 Thread Jan Beulich
On 24.09.2020 18:45, Oleksandr wrote:
> 
> On 24.09.20 14:16, Jan Beulich wrote:
> 
> Hi Jan
> 
>> On 22.09.2020 21:32, Oleksandr wrote:
>>> On 16.09.20 11:50, Jan Beulich wrote:
 On 10.09.2020 22:22, Oleksandr Tyshchenko wrote:
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -1651,6 +1651,11 @@ long do_memory_op(unsigned long cmd, 
> XEN_GUEST_HANDLE_PARAM(void) arg)
>break;
>}
>
> +#ifdef CONFIG_IOREQ_SERVER
> +if ( op == XENMEM_decrease_reservation )
> +curr_d->qemu_mapcache_invalidate = true;
> +#endif
 I don't see why you put this right into decrease_reservation(). This
 isn't just to avoid the extra conditional, but first and foremost to
 avoid bypassing the earlier return from the function (in the case of
 preemption). In the context of this I wonder whether the ordering of
 operations in hvm_hypercall() is actually correct.
>>> Good point, indeed we may return earlier in case of preemption, I missed
>>> that.
>>> Will move it to decrease_reservation(). But, we may return even earlier
>>> in case of error...
>>> Now I am wondering whether we should move it to the very beginning of
>>> command processing or not?
>> In _this_ series I'd strongly recommend you keep things working as
>> they are. If independently you think you've found a reason to
>> re-order certain operations, then feel free to send a patch with
>> suitable justification.
> 
> Of course, I will try to retain current behavior.
> 
> 
 I'm also unconvinced curr_d is the right domain in all cases here;
 while this may be a pre-existing issue in principle, I'm afraid it
 gets more pronounced by the logic getting moved to common code.
>>> Sorry I didn't get your concern here.
>> Well, you need to be concerned whose qemu_mapcache_invalidate flag
>> you set.
> May I ask: in what cases is *curr_d* the right domain?

When a domain does a decrease-reservation on itself. I thought
that's obvious. But perhaps your question was rather meant as to
whether a->domain ever is _not_ the right one?

> We need to make sure that the domain is using IOREQ server(s) at least.
> Hopefully, we have a helper for this,
> which is hvm_domain_has_ioreq_server(). Please clarify: is there anything
> else I should take care of?

Nothing I can recall / think of right now, except that the change
may want to come under a different title and with a different
description. As indicated, I don't think this is correct for PVH
Dom0 issuing the request against a HVM DomU, and addressing this
will likely want this moved out of hvm_memory_op() anyway. Of
course an option is to split this into two patches - the proposed
bug fix (perhaps wanting backporting) and then the moving of the
field out of arch.hvm. If you feel uneasy about the bug fix part,
let me know and I (or maybe Roger) will see to put together a
patch.

Jan



Re: [PATCH V1 13/16] xen/ioreq: Make x86's invalidate qemu mapcache handling common

2020-09-24 Thread Oleksandr



On 24.09.20 14:16, Jan Beulich wrote:

Hi Jan


On 22.09.2020 21:32, Oleksandr wrote:

On 16.09.20 11:50, Jan Beulich wrote:

On 10.09.2020 22:22, Oleksandr Tyshchenko wrote:

--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1651,6 +1651,11 @@ long do_memory_op(unsigned long cmd, 
XEN_GUEST_HANDLE_PARAM(void) arg)
   break;
   }
   
+#ifdef CONFIG_IOREQ_SERVER

+if ( op == XENMEM_decrease_reservation )
+curr_d->qemu_mapcache_invalidate = true;
+#endif

I don't see why you put this right into decrease_reservation(). This
isn't just to avoid the extra conditional, but first and foremost to
avoid bypassing the earlier return from the function (in the case of
preemption). In the context of this I wonder whether the ordering of
operations in hvm_hypercall() is actually correct.

Good point, indeed we may return earlier in case of preemption, I missed
that.
Will move it to decrease_reservation(). But, we may return even earlier
in case of error...
Now I am wondering whether we should move it to the very beginning of
command processing or not?

In _this_ series I'd strongly recommend you keep things working as
they are. If independently you think you've found a reason to
re-order certain operations, then feel free to send a patch with
suitable justification.


Of course, I will try to retain current behavior.



I'm also unconvinced curr_d is the right domain in all cases here;
while this may be a pre-existing issue in principle, I'm afraid it
gets more pronounced by the logic getting moved to common code.

Sorry I didn't get your concern here.

Well, you need to be concerned whose qemu_mapcache_invalidate flag
you set.

May I ask: in what cases is *curr_d* the right domain?

We need to make sure that the domain is using IOREQ server(s) at least.
Hopefully, we have a helper for this,
which is hvm_domain_has_ioreq_server(). Please clarify: is there anything
else I should take care of?



--
Regards,

Oleksandr Tyshchenko




Re: [PATCH V1 13/16] xen/ioreq: Make x86's invalidate qemu mapcache handling common

2020-09-24 Thread Jan Beulich
On 22.09.2020 21:32, Oleksandr wrote:
> On 16.09.20 11:50, Jan Beulich wrote:
>> On 10.09.2020 22:22, Oleksandr Tyshchenko wrote:
>>> --- a/xen/common/memory.c
>>> +++ b/xen/common/memory.c
>>> @@ -1651,6 +1651,11 @@ long do_memory_op(unsigned long cmd, 
>>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>   break;
>>>   }
>>>   
>>> +#ifdef CONFIG_IOREQ_SERVER
>>> +if ( op == XENMEM_decrease_reservation )
>>> +curr_d->qemu_mapcache_invalidate = true;
>>> +#endif
>> I don't see why you put this right into decrease_reservation(). This
>> isn't just to avoid the extra conditional, but first and foremost to
>> avoid bypassing the earlier return from the function (in the case of
>> preemption). In the context of this I wonder whether the ordering of
>> operations in hvm_hypercall() is actually correct.
> Good point, indeed we may return earlier in case of preemption, I missed 
> that.
> Will move it to decrease_reservation(). But, we may return even earlier 
> in case of error...
> Now I am wondering whether we should move it to the very beginning of
> command processing or not?

In _this_ series I'd strongly recommend you keep things working as
they are. If independently you think you've found a reason to
re-order certain operations, then feel free to send a patch with
suitable justification.

>> I'm also unconvinced curr_d is the right domain in all cases here;
>> while this may be a pre-existing issue in principle, I'm afraid it
>> gets more pronounced by the logic getting moved to common code.
> 
> Sorry I didn't get your concern here.

Well, you need to be concerned whose qemu_mapcache_invalidate flag
you set.

Jan



Re: [PATCH V1 13/16] xen/ioreq: Make x86's invalidate qemu mapcache handling common

2020-09-22 Thread Oleksandr



On 16.09.20 11:50, Jan Beulich wrote:

Hi Jan, Roger


On 10.09.2020 22:22, Oleksandr Tyshchenko wrote:

--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1490,6 +1490,12 @@ static void do_trap_hypercall(struct cpu_user_regs 
*regs, register_t *nr,
  /* Ensure the hypercall trap instruction is re-executed. */
  if ( current->hcall_preempted )
  regs->pc -= 4;  /* re-execute 'hvc #XEN_HYPERCALL_TAG' */
+
+#ifdef CONFIG_IOREQ_SERVER
+if ( unlikely(current->domain->qemu_mapcache_invalidate) &&
+ test_and_clear_bool(current->domain->qemu_mapcache_invalidate) )
+send_invalidate_req();
+#endif
  }

There's a lot of uses of "current" here now, and these don't look to
exactly be cheap on Arm either (they aren't on x86), so I wonder
whether this is the point where at least "current" wants latching
into a local variable here.


Sounds reasonable, will use local variable





--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -18,8 +18,10 @@
   *
   * Copyright (c) 2017 Citrix Systems Ltd.
   */
+
  #include 
  #include 
+#include 
  #include 

While I don't care much about the presence of absence of the blank
line between head comment and #include-s, I don't see why you add
one here.


Accidentally), will remove.





--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1651,6 +1651,11 @@ long do_memory_op(unsigned long cmd, 
XEN_GUEST_HANDLE_PARAM(void) arg)
  break;
  }
  
+#ifdef CONFIG_IOREQ_SERVER

+if ( op == XENMEM_decrease_reservation )
+curr_d->qemu_mapcache_invalidate = true;
+#endif

I don't see why you put this right into decrease_reservation(). This
isn't just to avoid the extra conditional, but first and foremost to
avoid bypassing the earlier return from the function (in the case of
preemption). In the context of this I wonder whether the ordering of
operations in hvm_hypercall() is actually correct.
Good point, indeed we may return earlier in case of preemption, I missed 
that.
Will move it to decrease_reservation(). But, we may return even earlier 
in case of error...
Now I am wondering whether we should move it to the very beginning of
command processing or not?
AFAIU, before this patch qemu_mapcache_invalidate was always set in 
hvm_memory_op() if XENMEM_decrease_reservation came in,
despite possible errors in the command processing.



I'm also unconvinced curr_d is the right domain in all cases here;
while this may be a pre-existing issue in principle, I'm afraid it
gets more pronounced by the logic getting moved to common code.


Sorry I didn't get your concern here.



Roger - thoughts either way with, in particular, PVH Dom0 in mind?



--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -97,6 +97,8 @@ static inline bool hvm_ioreq_needs_completion(const ioreq_t 
*ioreq)
 (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
  }
  
+void send_invalidate_req(void);

Perhaps rename to ioreq_send_invalidate(), ioreq_send_invalidate_req(),
or send_invalidate_ioreq() at this occasion?


I would prefer a function name with the ioreq_ prefix.





--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -512,6 +512,8 @@ struct domain
  /* Argo interdomain communication support */
  struct argo_domain *argo;
  #endif
+
+bool_t qemu_mapcache_invalidate;

"bool" please.


ok

--
Regards,

Oleksandr Tyshchenko




Re: [PATCH V1 13/16] xen/ioreq: Make x86's invalidate qemu mapcache handling common

2020-09-16 Thread Jan Beulich
On 10.09.2020 22:22, Oleksandr Tyshchenko wrote:
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1490,6 +1490,12 @@ static void do_trap_hypercall(struct cpu_user_regs 
> *regs, register_t *nr,
>  /* Ensure the hypercall trap instruction is re-executed. */
>  if ( current->hcall_preempted )
>  regs->pc -= 4;  /* re-execute 'hvc #XEN_HYPERCALL_TAG' */
> +
> +#ifdef CONFIG_IOREQ_SERVER
> +if ( unlikely(current->domain->qemu_mapcache_invalidate) &&
> + test_and_clear_bool(current->domain->qemu_mapcache_invalidate) )
> +send_invalidate_req();
> +#endif
>  }

There's a lot of uses of "current" here now, and these don't look to
exactly be cheap on Arm either (they aren't on x86), so I wonder
whether this is the point where at least "current" wants latching
into a local variable here.

> --- a/xen/arch/x86/hvm/hypercall.c
> +++ b/xen/arch/x86/hvm/hypercall.c
> @@ -18,8 +18,10 @@
>   *
>   * Copyright (c) 2017 Citrix Systems Ltd.
>   */
> +
>  #include 
>  #include 
> +#include 
>  #include 

While I don't care much about the presence of absence of the blank
line between head comment and #include-s, I don't see why you add
one here.

> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -1651,6 +1651,11 @@ long do_memory_op(unsigned long cmd, 
> XEN_GUEST_HANDLE_PARAM(void) arg)
>  break;
>  }
>  
> +#ifdef CONFIG_IOREQ_SERVER
> +if ( op == XENMEM_decrease_reservation )
> +curr_d->qemu_mapcache_invalidate = true;
> +#endif

I don't see why you put this right into decrease_reservation(). This
isn't just to avoid the extra conditional, but first and foremost to
avoid bypassing the earlier return from the function (in the case of
preemption). In the context of this I wonder whether the ordering of
operations in hvm_hypercall() is actually correct.

I'm also unconvinced curr_d is the right domain in all cases here;
while this may be a pre-existing issue in principle, I'm afraid it
gets more pronounced by the logic getting moved to common code.
Roger - thoughts either way with, in particular, PVH Dom0 in mind?

> --- a/xen/include/xen/ioreq.h
> +++ b/xen/include/xen/ioreq.h
> @@ -97,6 +97,8 @@ static inline bool hvm_ioreq_needs_completion(const ioreq_t 
> *ioreq)
> (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
>  }
>  
> +void send_invalidate_req(void);

Perhaps rename to ioreq_send_invalidate(), ioreq_send_invalidate_req(),
or send_invalidate_ioreq() at this occasion?

> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -512,6 +512,8 @@ struct domain
>  /* Argo interdomain communication support */
>  struct argo_domain *argo;
>  #endif
> +
> +bool_t qemu_mapcache_invalidate;

"bool" please.

Jan



[PATCH V1 13/16] xen/ioreq: Make x86's invalidate qemu mapcache handling common

2020-09-10 Thread Oleksandr Tyshchenko
From: Oleksandr Tyshchenko 

As IOREQ is a common feature now, and we also need to
invalidate the qemu mapcache on XENMEM_decrease_reservation on Arm,
this patch moves this handling to the common code and moves the per-domain
qemu_mapcache_invalidate variable out of the arch sub-struct.

Signed-off-by: Julien Grall 
Signed-off-by: Oleksandr Tyshchenko 

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - move send_invalidate_req() to the common code
   - update patch subject/description
   - move qemu_mapcache_invalidate out of the arch sub-struct,
 update checks
   - remove #if defined(CONFIG_ARM64) from the common code
---
---
 xen/arch/arm/traps.c |  6 ++
 xen/arch/x86/hvm/hypercall.c |  9 -
 xen/arch/x86/hvm/io.c| 14 --
 xen/common/ioreq.c   | 14 ++
 xen/common/memory.c  |  5 +
 xen/include/asm-x86/hvm/domain.h |  1 -
 xen/include/asm-x86/hvm/io.h |  1 -
 xen/include/xen/ioreq.h  |  2 ++
 xen/include/xen/sched.h  |  2 ++
 9 files changed, 33 insertions(+), 21 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 6b37ae1..de48b2f 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1490,6 +1490,12 @@ static void do_trap_hypercall(struct cpu_user_regs 
*regs, register_t *nr,
 /* Ensure the hypercall trap instruction is re-executed. */
 if ( current->hcall_preempted )
 regs->pc -= 4;  /* re-execute 'hvc #XEN_HYPERCALL_TAG' */
+
+#ifdef CONFIG_IOREQ_SERVER
+if ( unlikely(current->domain->qemu_mapcache_invalidate) &&
+ test_and_clear_bool(current->domain->qemu_mapcache_invalidate) )
+send_invalidate_req();
+#endif
 }
 
 void arch_hypercall_tasklet_result(struct vcpu *v, long res)
diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index b6ccaf4..45fc20b 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -18,8 +18,10 @@
  *
  * Copyright (c) 2017 Citrix Systems Ltd.
  */
+
 #include 
 #include 
+#include 
 #include 
 
 #include 
@@ -46,9 +48,6 @@ static long hvm_memory_op(int cmd, 
XEN_GUEST_HANDLE_PARAM(void) arg)
 else
 rc = compat_memory_op(cmd, arg);
 
-if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
-curr->domain->arch.hvm.qemu_mapcache_invalidate = true;
-
 return rc;
 }
 
@@ -329,8 +328,8 @@ int hvm_hypercall(struct cpu_user_regs *regs)
 if ( curr->hcall_preempted )
 return HVM_HCALL_preempted;
 
-if ( unlikely(currd->arch.hvm.qemu_mapcache_invalidate) &&
- test_and_clear_bool(currd->arch.hvm.qemu_mapcache_invalidate) )
+if ( unlikely(currd->qemu_mapcache_invalidate) &&
+ test_and_clear_bool(currd->qemu_mapcache_invalidate) )
 send_invalidate_req();
 
 return HVM_HCALL_completed;
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 14f8c89..e659a53 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -64,20 +64,6 @@ void send_timeoffset_req(unsigned long timeoff)
 gprintk(XENLOG_ERR, "Unsuccessful timeoffset update\n");
 }
 
-/* Ask ioemu mapcache to invalidate mappings. */
-void send_invalidate_req(void)
-{
-ioreq_t p = {
-.type = IOREQ_TYPE_INVALIDATE,
-.size = 4,
-.dir = IOREQ_WRITE,
-.data = ~0UL, /* flush all */
-};
-
-if ( hvm_broadcast_ioreq(&p, false) != 0 )
-gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
-}
-
 bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr)
 {
 struct hvm_emulate_ctxt ctxt;
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 4c3a835..e24a481 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -34,6 +34,20 @@
 #include 
 #include 
 
+/* Ask ioemu mapcache to invalidate mappings. */
+void send_invalidate_req(void)
+{
+ioreq_t p = {
+.type = IOREQ_TYPE_INVALIDATE,
+.size = 4,
+.dir = IOREQ_WRITE,
+.data = ~0UL, /* flush all */
+};
+
+if ( hvm_broadcast_ioreq(&p, false) != 0 )
+gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
+}
+
 static void set_ioreq_server(struct domain *d, unsigned int id,
  struct hvm_ioreq_server *s)
 {
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 78781f1..9d98252 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1651,6 +1651,11 @@ long do_memory_op(unsigned long cmd, 
XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 }
 
+#ifdef CONFIG_IOREQ_SERVER
+if ( op == XENMEM_decrease_reservation )
+curr_d->qemu_mapcache_invalidate = true;
+#endif
+
 return rc;
 }
 
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 79e0afb..11d5cc1 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -131,7 +131,6 @