[Xen-devel] Unikernels for xen ?

2018-04-21 Thread Ajay Garg
Hi All.

Just a quick question ...

We know that rumprun and unikraft are the two major unikernels with Xen
support.

https://wiki.xenproject.org/wiki/Unikernels mentions a lot of unikernels.
Also, we intend to run Python applications over Xen, so I guess
OCaml/Go/Erlang/JS-based unikernels are out of the picture.

In summary, are there any unikernels other than rumprun/unikraft that may
facilitate running Python applications in a Xen context? (A C/C++ unikernel
is fine, as porting Python to such a C/C++-based unikernel ought to be
feasible.)


Thanks and Regards,
Ajay

Re: [Xen-devel] [Bug 198497] handle_mm_fault / xen_pmd_val / radix_tree_lookup_slot Null pointer

2018-04-21 Thread Juergen Gross
On 21/04/18 16:35, Matthew Wilcox wrote:
> On Fri, Apr 20, 2018 at 10:02:29AM -0600, Jan Beulich wrote:
>> Skylake 32bit PAE Dom0:
>> Bad swp_entry: 8000
>> mm/swap_state.c:683: bad pte d3a39f1c(8004)
>>
>> Ivy Bridge 32bit PAE Dom0:
>> Bad swp_entry: 4000
>> mm/swap_state.c:683: bad pte d3a05f1c(8002)
>>
>> Other 32bit DomU:
>> Bad swp_entry: 400
>> mm/swap_state.c:683: bad pte e2187f30(8002)
>>
>> Other 32bit:
>> Bad swp_entry: 200
>> mm/swap_state.c:683: bad pte ef3a3f38(8001)
> 
>> As said in my previous reply - both of the bits Andrew has mentioned can
>> only ever be set when the present bit is also set (which doesn't appear to
>> be the case here). The set bits above are actually in the range of bits
>> designated to the address, which Xen wouldn't ever play with.
> 
> Is it relevant that all the crashes we've seen are with PAE in the guest?
> Is it possible that Xen thinks the guest is not using PAE?
> 

All Xen 32-bit PV guests are using PAE. It's part of the PV ABI.


Juergen


Re: [Xen-devel] [Bug 198497] handle_mm_fault / xen_pmd_val / radix_tree_lookup_slot Null pointer

2018-04-21 Thread Matthew Wilcox
On Fri, Apr 20, 2018 at 10:02:29AM -0600, Jan Beulich wrote:
>  Skylake 32bit PAE Dom0:
>  Bad swp_entry: 8000
>  mm/swap_state.c:683: bad pte d3a39f1c(8004)
> 
>  Ivy Bridge 32bit PAE Dom0:
>  Bad swp_entry: 4000
>  mm/swap_state.c:683: bad pte d3a05f1c(8002)
> 
>  Other 32bit DomU:
>  Bad swp_entry: 400
>  mm/swap_state.c:683: bad pte e2187f30(8002)
> 
>  Other 32bit:
>  Bad swp_entry: 200
>  mm/swap_state.c:683: bad pte ef3a3f38(8001)

> As said in my previous reply - both of the bits Andrew has mentioned can
> only ever be set when the present bit is also set (which doesn't appear to
> be the case here). The set bits above are actually in the range of bits
> designated to the address, which Xen wouldn't ever play with.

Is it relevant that all the crashes we've seen are with PAE in the guest?
Is it possible that Xen thinks the guest is not using PAE?
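
To make the terminology concrete: a present PTE maps a frame, while a
non-present but non-zero PTE is what the kernel decodes as a swap entry.
A minimal sketch, assuming the usual x86 convention that _PAGE_PRESENT is
bit 0; the helper name and the sample value are made up for illustration,
not taken from the kernel or from the reports above:

#include <stdint.h>
#include <stdio.h>

#define _PAGE_PRESENT (1ULL << 0)          /* bit 0 on x86 */

/* Hypothetical helper: classify a raw 64-bit PAE PTE value. When the
 * present bit is clear but other bits are set, the value is decoded as
 * a swap entry; the high bits normally belong to the frame address,
 * which (per Jan's point) Xen wouldn't play with here. */
static void classify_pte(uint64_t pte)
{
    if (pte & _PAGE_PRESENT)
        printf("%#llx: present mapping\n", (unsigned long long)pte);
    else if (pte != 0)
        printf("%#llx: non-present, decoded as swap entry\n",
               (unsigned long long)pte);
    else
        printf("empty PTE\n");
}

int main(void)
{
    classify_pte((1ULL << 62) | 0x4);      /* made-up non-present value */
    return 0;
}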


Re: [Xen-devel] [PATCH v8 1/9] x86/xpti: avoid copying L4 page table contents when possible

2018-04-21 Thread Juergen Gross
On 21/04/18 15:32, Tim Deegan wrote:
> Hi,
> 
> At 09:44 +0200 on 19 Apr (1524131080), Juergen Gross wrote:
>>>> So either I'm adding some kind of locking/rcu, or I'm switching to use
>>>> IPIs and access root_pgt_changed only locally.
>>>>
>>>> Do you have any preference?
>>>
>>> Since issuing an IPI is just a single call, I'd prefer not to have new
>>> (locking, rcu, or whatever else) logic added here. Unless of course someone, in
>>> particular Tim, thinks sending an IPI here is a rather bad idea.
> 
> AFAICS you're calling this from shadow code whenever it changes an
> L4e, so I'd rather not have an IPI here if we don't need it.
> 
>> Another alternative would be to pass another flag to the callers to
>> signal the need for a flush. This would require quite some modifications
>> to shadow code I'd like to avoid, though. OTOH this way we could combine
>> flushing the tlb and the root page tables. Tim, any preferences?
> 
> This sounds like a promising direction, but it should be doable without major
> surgery to the shadow code.  The shadow code already leaves old sl4es
> visible (in TLBs) when it's safe to do so, so I think the right place
> to hook this is on the receiving side of the TLB flush IPI.  IOW as
> long as:
>  - you copy the L4 on context switch; and
>  - you copy it when the TLB flush IPI is received
> then you can rely on the existing TLB flush mechanisms to do what you need.
> And shadow doesn't have to behave differently from 'normal' PV MM.

It is not so easy. The problem is that e.g. a page fault will flush the
TLB entry for the page in question, but it won't lead to the L4 being
copied. Additionally, a newly introduced page resulting in a new L4 entry
might never lead to the new L4 shadow entry being picked up, as no TLB
flush would be necessary; this would result in an endless stream of page
faults.

I tried that approach by just doing an L4 copy on a TLB flush IPI on all
affected CPUs, but this wasn't enough.

So what I'd need to do is set a new flag when writing an L4 entry and
carry it up to the point where the TLB flush is done, in order to add an
L4 copy there. All the places which don't do a TLB flush today would have
to add one for the L4 copy.

> Do you think it needs more (in particular, to avoid the L4 copy on TLB
> flushes)? Would a per-domain flag be good enough if per-vcpu is
> difficult?


Juergen
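
A rough sketch of the flag-carrying approach Juergen describes, reduced to
its bare data flow. All names (l4_dirty, write_l4_entry, flush_point) are
illustrative, not Xen's actual interfaces; in Xen the flag would live
per-vCPU and the copy would target the XPTI root page table:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define L4_ENTRIES 512

static unsigned long guest_l4[L4_ENTRIES];      /* the real L4 */
static unsigned long xpti_root[L4_ENTRIES];     /* XPTI copy of it */
static bool l4_dirty;                           /* per-vCPU in reality */

/* Writing an L4 entry records that the copy is stale. */
static void write_l4_entry(unsigned int slot, unsigned long val)
{
    guest_l4[slot] = val;
    l4_dirty = true;
}

/* The flush point must copy even where no TLB flush happens today,
 * otherwise a stale root entry causes an endless stream of faults. */
static void flush_point(void)
{
    if (l4_dirty) {
        memcpy(xpti_root, guest_l4, sizeof(guest_l4));
        l4_dirty = false;
    }
    /* ... the TLB flush itself would go here ... */
}

int main(void)
{
    write_l4_entry(3, 0x1000 | 1);
    flush_point();
    printf("synced slot 3: %#lx\n", xpti_root[3]);
    return 0;
}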



Re: [Xen-devel] [PATCH v8 1/9] x86/xpti: avoid copying L4 page table contents when possible

2018-04-21 Thread Tim Deegan
Hi,

At 09:44 +0200 on 19 Apr (1524131080), Juergen Gross wrote:
> >> So either I'm adding some kind of locking/rcu, or I'm switching to use
> >> IPIs and access root_pgt_changed only locally.
> >>
> >> Do you have any preference?
> > 
> > Since issuing an IPI is just a single call, I'd prefer not to have new
> > (locking, rcu, or whatever else) logic added here. Unless of course someone, in
> > particular Tim, thinks sending an IPI here is a rather bad idea.

AFAICS you're calling this from shadow code whenever it changes an
L4e, so I'd rather not have an IPI here if we don't need it.

> Another alternative would be to pass another flag to the callers to
> signal the need for a flush. This would require quite some modifications
> to shadow code I'd like to avoid, though. OTOH this way we could combine
> flushing the tlb and the root page tables. Tim, any preferences?

This sounds like a promising direction, but it should be doable without major
surgery to the shadow code.  The shadow code already leaves old sl4es
visible (in TLBs) when it's safe to do so, so I think the right place
to hook this is on the receiving side of the TLB flush IPI.  IOW as
long as:
 - you copy the L4 on context switch; and
 - you copy it when the TLB flush IPI is received
then you can rely on the existing TLB flush mechanisms to do what you need.
And shadow doesn't have to behave differently from 'normal' PV MM.

Do you think it needs more (in particular, to avoid the L4 copy on TLB
flushes)? Would a per-domain flag be good enough if per-vcpu is
difficult?

Tim.
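
A minimal sketch of the receive-side hook Tim suggests, under the same
caveats (hypothetical names, none of Xen's real IPI plumbing): the copy
simply piggy-backs on the two events he names, context switch and the TLB
flush IPI.

#include <stdio.h>
#include <string.h>

#define L4_ENTRIES 512

static unsigned long guest_l4[L4_ENTRIES];      /* source of truth */
static unsigned long xpti_root[L4_ENTRIES];     /* per-vCPU XPTI copy */

/* Hypothetical copy step: refresh the XPTI root from the guest L4. */
static void copy_root_pgt(void)
{
    memcpy(xpti_root, guest_l4, sizeof(guest_l4));
}

/* Receive side of the TLB flush IPI: do the copy alongside the flush,
 * so no extra notification mechanism is needed. */
static void tlb_flush_ipi_handler(void)
{
    copy_root_pgt();
    /* ... existing local TLB flush ... */
}

/* Context switch: the other point where the copy must happen. */
static void context_switch(void)
{
    copy_root_pgt();
}

int main(void)
{
    guest_l4[0] = 0x2000 | 1;
    tlb_flush_ipi_handler();
    context_switch();
    printf("root[0] = %#lx\n", xpti_root[0]);
    return 0;
}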



[Xen-devel] [distros-debian-stretch test] 74635: tolerable FAIL

2018-04-21 Thread Platform Team regression test user
flight 74635 distros-debian-stretch real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/74635/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-armhf-stretch-netboot-pygrub 10 debian-di-install fail blocked in 74606
 test-amd64-amd64-i386-stretch-netboot-pygrub 10 debian-di-install fail blocked in 74606
 test-amd64-amd64-amd64-stretch-netboot-pvgrub 10 debian-di-install fail blocked in 74606
 test-amd64-i386-amd64-stretch-netboot-pygrub 10 debian-di-install fail blocked in 74606
 test-amd64-i386-i386-stretch-netboot-pvgrub 10 debian-di-install fail blocked in 74606

baseline version:
 flight   74606

jobs:
 build-amd64                                      pass
 build-armhf                                      pass
 build-i386                                       pass
 build-amd64-pvops                                pass
 build-armhf-pvops                                pass
 build-i386-pvops                                 pass
 test-amd64-amd64-amd64-stretch-netboot-pvgrub    fail
 test-amd64-i386-i386-stretch-netboot-pvgrub      fail
 test-amd64-i386-amd64-stretch-netboot-pygrub     fail
 test-armhf-armhf-armhf-stretch-netboot-pygrub    fail
 test-amd64-amd64-i386-stretch-netboot-pygrub     fail



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.

