Re: [Qemu-devel] [PATCH 19/23] userfaultfd: activate syscall

2015-09-08 Thread Dr. David Alan Gilbert
* Bharata B Rao (bhar...@linux.vnet.ibm.com) wrote:
> On Tue, Sep 08, 2015 at 01:46:52PM +0100, Dr. David Alan Gilbert wrote:
> > * Bharata B Rao (bhar...@linux.vnet.ibm.com) wrote:
> > > On Tue, Sep 08, 2015 at 09:59:47AM +0100, Dr. David Alan Gilbert wrote:
> > > > * Bharata B Rao (bhar...@linux.vnet.ibm.com) wrote:
> > > > > In fact I had successfully done postcopy migration of sPAPR guest with
> > > > > this setup.
> > > > 
> > > > Interesting - I'd not got that far myself on power; I was hitting a problem
> > > > loading htab ( htab_load() bad index 2113929216 (14848+0 entries) in htab stream (htab_shift=25) )
> > > > 
> > > > Did you have to make any changes to the qemu code to get that happy?
> > > 
> > > I should have mentioned that I tried only QEMU driven migration within
> > > the same host using wp3-postcopy branch of your tree. I don't see the
> > > above issue.
> > > 
> > > (qemu) info migrate
> > > capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off compress: off x-postcopy-ram: on
> > > Migration status: completed
> > > total time: 39432 milliseconds
> > > downtime: 162 milliseconds
> > > setup: 14 milliseconds
> > > transferred ram: 1297209 kbytes
> > > throughput: 270.72 mbps
> > > remaining ram: 0 kbytes
> > > total ram: 4194560 kbytes
> > > duplicate: 734015 pages
> > > skipped: 0 pages
> > > normal: 318469 pages
> > > normal bytes: 1273876 kbytes
> > > dirty sync count: 4
> > > 
> > > I will try migration between different hosts soon and check.
> > 
> > I hit that on the same host; are you sure you've switched into postcopy mode,
> > i.e. issued a migrate_start_postcopy before the end of migration?
> 
> Sorry I was following your discussion with Li in this thread
> 
> https://www.marc.info/?l=qemu-devel&m=143035620026744&w=4
> 
> and it wasn't obvious to me that anything apart from turning on the
> x-postcopy-ram capability was required :(

OK.

> So I do see the problem now.
> 
> At the source
> -
> Error reading data from KVM HTAB fd: Bad file descriptor
> Segmentation fault
> 
> At the target
> -
htab_load() bad index 2113929216 (14336+0 entries) in htab stream (htab_shift=25)
qemu-system-ppc64: error while loading state section id 56(spapr/htab)
qemu-system-ppc64: postcopy_ram_listen_thread: loadvm failed: -22
qemu-system-ppc64: VQ 0 size 0x100 Guest index 0x0 inconsistent with Host index 0x1f: delta 0xffe1
qemu-system-ppc64: error while loading state for instance 0x0 of device 'pci@8002000:00.0/virtio-net'
*** Error in `./ppc64-softmmu/qemu-system-ppc64': corrupted double-linked list: 0x0100241234a0 ***
=== Backtrace: =
/lib64/power8/libc.so.6
Segmentation fault

Good - my current world has got rid of the segfaults/corruption in the cleanup
on power - but those only show up after it has stumbled over the htab problem.

I don't know the innards of power/htab, so if you've got any pointers on what
upset it I'd be grateful.

(We should probably trim the cc - since I don't think this is userfault related).

Dave

--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
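
For anyone trying to reproduce the above, a minimal HMP command sequence for driving a postcopy migration looks roughly like this (a sketch: the x-postcopy-ram capability has to be enabled on both source and destination, and tcp:desthost:4444 is a placeholder for wherever the destination QEMU was started with -incoming):

(qemu) migrate_set_capability x-postcopy-ram on
(qemu) migrate -d tcp:desthost:4444
(qemu) migrate_start_postcopy
(qemu) info migrate

The key point from the exchange above is that migrate_start_postcopy must be issued while the migration is still running; with only the capability set, the migration completes as ordinary precopy and the destination never actually enters postcopy mode.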

Re: [Qemu-devel] [PATCH 19/23] userfaultfd: activate syscall

2015-09-08 Thread Dr. David Alan Gilbert
* Bharata B Rao (bhar...@linux.vnet.ibm.com) wrote:
> On Tue, Sep 08, 2015 at 09:59:47AM +0100, Dr. David Alan Gilbert wrote:
> > * Bharata B Rao (bhar...@linux.vnet.ibm.com) wrote:
> > > In fact I had successfully done postcopy migration of sPAPR guest with
> > > this setup.
> > 
> > Interesting - I'd not got that far myself on power; I was hitting a problem
> > loading htab ( htab_load() bad index 2113929216 (14848+0 entries) in htab stream (htab_shift=25) )
> > 
> > Did you have to make any changes to the qemu code to get that happy?
> 
> I should have mentioned that I tried only QEMU driven migration within
> the same host using wp3-postcopy branch of your tree. I don't see the
> above issue.
> 
> (qemu) info migrate
> capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off compress: off x-postcopy-ram: on
> Migration status: completed
> total time: 39432 milliseconds
> downtime: 162 milliseconds
> setup: 14 milliseconds
> transferred ram: 1297209 kbytes
> throughput: 270.72 mbps
> remaining ram: 0 kbytes
> total ram: 4194560 kbytes
> duplicate: 734015 pages
> skipped: 0 pages
> normal: 318469 pages
> normal bytes: 1273876 kbytes
> dirty sync count: 4
> 
> I will try migration between different hosts soon and check.

I hit that on the same host; are you sure you've switched into postcopy mode;
i.e. issued a migrate_start_postcopy before the end of migration?

(My current world works on x86-64, I've also tested reasonably well on aarch64,
and I get to that htab problem on Power).

Dave

> 
> Regards,
> Bharata.
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK

Re: [Qemu-devel] [PATCH 19/23] userfaultfd: activate syscall

2015-09-08 Thread Dr. David Alan Gilbert
* Michael Ellerman (m...@ellerman.id.au) wrote:
> On Tue, 2015-09-08 at 17:14 +1000, Michael Ellerman wrote:
> > On Tue, 2015-09-08 at 12:09 +0530, Bharata B Rao wrote:
> > > On Tue, Sep 08, 2015 at 04:08:06PM +1000, Michael Ellerman wrote:
> > > > Hmm, not for me. See below.
> > > > 
> > > > What setup were you testing on Bharata?
> > > 
> > > I was on commit a94572f5799dd of userfault21 branch in Andrea's tree
> > > git://git.kernel.org/pub/scm/linux/kernel/git/andrea/aa.git
> > > 
> > > #uname -a
> > > Linux 4.1.0-rc8+ #1 SMP Tue Aug 11 11:33:50 IST 2015 ppc64le ppc64le ppc64le GNU/Linux
> > > 
> > > In fact I had successfully done postcopy migration of sPAPR guest with
> > > this setup.
> > 
> > OK, do you mind testing mainline with the same setup to see if the selftest passes?
> 
> Ah, I just tried it on big endian and it works. So it seems to not work on
> little endian for some reason, /probably/ a test case bug?

Hmm; I think we're missing a test-case fix that Andrea made for a bug I hit on
Power a couple of weeks back. I think that would have been on le.

Dave

> cheers
> 
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK

Re: [Qemu-devel] [PATCH 19/23] userfaultfd: activate syscall

2015-09-08 Thread Dr. David Alan Gilbert
* Bharata B Rao (bhar...@linux.vnet.ibm.com) wrote:
> On Tue, Sep 08, 2015 at 04:08:06PM +1000, Michael Ellerman wrote:
> > On Wed, 2015-08-12 at 10:53 +0530, Bharata B Rao wrote:
> > > On Tue, Aug 11, 2015 at 03:48:26PM +0200, Andrea Arcangeli wrote:
> > > > Hello Bharata,
> > > > 
> > > > On Tue, Aug 11, 2015 at 03:37:29PM +0530, Bharata B Rao wrote:
> > > > > Maybe it is a bit late to bring this up, but I needed the following fix to the userfault21 branch of your git tree to compile on powerpc.
> > > > 
> > > > Not late, just in time. I increased the number of syscalls in earlier versions; it must have gotten lost during a rebase with rejects, sorry.
> > > > 
> > > > I applied it to my tree and it can be applied to -mm and linux-next, thanks!
> > > > 
> > > > The syscalls for arm32 are also ready and on their way to the arm tree, and the testsuite worked fine there. ppc should also work fine; if you could confirm that, it'd be interesting. Just beware that I got a typo in the testcase:
> > > 
> > > The testsuite passes on powerpc.
> > > 
> > > 
> > > running userfaultfd
> > > 
> > > nr_pages: 2040, nr_pages_per_cpu: 170
> > > bounces: 31, mode: rnd racing ver poll, userfaults: 80 43 23 23 15 16 12 1 2 96 13 128
> > > bounces: 30, mode: racing ver poll, userfaults: 35 54 62 49 47 48 2 8 0 78 1 0
> > > bounces: 29, mode: rnd ver poll, userfaults: 114 153 70 106 78 57 143 92 114 96 1 0
> > > bounces: 28, mode: ver poll, userfaults: 96 81 5 45 83 19 98 28 1 145 23 2
> > > bounces: 27, mode: rnd racing poll, userfaults: 54 65 60 54 45 49 1 2 1 2 71 20
> > > bounces: 26, mode: racing poll, userfaults: 90 83 35 29 37 35 30 42 3 4 49 6
> > > bounces: 25, mode: rnd poll, userfaults: 52 50 178 112 51 41 23 42 18 99 59 0
> > > bounces: 24, mode: poll, userfaults: 136 101 83 260 84 29 16 88 1 6 160 57
> > > bounces: 23, mode: rnd racing ver, userfaults: 141 197 158 183 39 49 3 52 8 3 6 0
> > > bounces: 22, mode: racing ver, userfaults: 242 266 244 180 162 32 87 43 31 40 34 0
> > > bounces: 21, mode: rnd ver, userfaults: 636 158 175 24 253 104 48 8 0 0 0 0
> > > bounces: 20, mode: ver, userfaults: 531 204 225 117 129 107 11 143 76 31 1 0
> > > bounces: 19, mode: rnd racing, userfaults: 303 169 225 145 59 219 37 0 0 0 0 0
> > > bounces: 18, mode: racing, userfaults: 374 372 37 144 126 90 25 12 15 17 0 0
> > > bounces: 17, mode: rnd, userfaults: 313 412 134 108 80 99 7 56 85 0 0 0
> > > bounces: 16, mode:, userfaults: 431 58 87 167 120 113 98 60 14 8 48 0
> > > bounces: 15, mode: rnd racing ver poll, userfaults: 41 40 25 28 37 24 0 0 0 0 180 75
> > > bounces: 14, mode: racing ver poll, userfaults: 43 53 30 28 25 15 19 0 0 0 0 30
> > > bounces: 13, mode: rnd ver poll, userfaults: 136 91 114 91 92 79 114 77 75 68 1 2
> > > bounces: 12, mode: ver poll, userfaults: 92 120 114 76 153 75 132 157 83 81 10 1
> > > bounces: 11, mode: rnd racing poll, userfaults: 50 72 69 52 53 48 46 59 57 51 37 1
> > > bounces: 10, mode: racing poll, userfaults: 33 49 38 68 35 63 57 49 49 47 25 10
> > > bounces: 9, mode: rnd poll, userfaults: 167 150 67 123 39 75 1 2 9 125 1 1
> > > bounces: 8, mode: poll, userfaults: 147 102 20 87 5 27 118 14 104 40 21 28
> > > bounces: 7, mode: rnd racing ver, userfaults: 305 254 208 74 59 96 36 14 11 7 4 5
> > > bounces: 6, mode: racing ver, userfaults: 290 114 191 94 162 114 34 6 6 32 23 2
> > > bounces: 5, mode: rnd ver, userfaults: 370 381 22 273 21 106 17 55 0 0 0 0
> > > bounces: 4, mode: ver, userfaults: 328 279 179 191 74 86 95 15 13 10 0 0
> > > bounces: 3, mode: rnd racing, userfaults: 222 215 164 70 5 20 179 0 34 3 0 0
> > > bounces: 2, mode: racing, userfaults: 316 385 112 160 225 5 30 49 42 2 4 0
> > > bounces: 1, mode: rnd, userfaults: 273 139 253 176 163 71 85 2 0 0 0 0
> > > bounces: 0, mode:, userfaults: 165 212 633 13 24 66 24 27 15 0 10 1
> > > [PASS]
> > 
> > Hmm, not for me. See below.
> > 
> > What setup were you testing on Bharata?
> 
> I was on commit a94572f5799dd of userfault21 branch in Andrea's tree
> git://git.kernel.org/pub/scm/linux/kernel/git/andrea/aa.git
> 
> #uname -a
> Linux 4.1.0-rc8+ #1 SMP Tue Aug 11 11:33:50 IST 2015 ppc64le ppc64le ppc64le 
> GNU/Linux
> 
> In fact I had successfully done postcopy migration of sPAPR guest with
> this setup.

Interesting - I'd not got that far myself on power; I was hitting a problem
loading htab ( htab_load() bad index 2113929216 (14848+0 entries) in htab stream (htab_shift=25) )

Did you have to make any changes to the qemu code to get that happy?

Dave

> > 
> > Mine is:
> > 
> > $ uname -a
> > Linux lebuntu 4.2.0-09705-g3a166acc1432 #2 SMP Tue Sep 8 15:18:00 AEST 2015 ppc64le ppc64le ppc64le GNU/Linux
> > 
> > Which is 7d9071a09502 plus a couple of powerpc patches.
> > 
> > $ zgrep USERFAULTFD /proc/c
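
(As a quick sanity check on a given box, one way to confirm that the installed kernel headers actually define the new syscall number - the value printed is architecture-specific, and this is just a sketch - is:

$ echo '#include <asm/unistd.h>' | gcc -E -dM -xc - | grep -i userfaultfd

If nothing is printed, the installed headers predate the syscall.)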

Re: [Qemu-devel] [PATCH 19/23] userfaultfd: activate syscall

2015-09-08 Thread Bharata B Rao
On Tue, Sep 08, 2015 at 01:46:52PM +0100, Dr. David Alan Gilbert wrote:
> * Bharata B Rao (bhar...@linux.vnet.ibm.com) wrote:
> > On Tue, Sep 08, 2015 at 09:59:47AM +0100, Dr. David Alan Gilbert wrote:
> > > * Bharata B Rao (bhar...@linux.vnet.ibm.com) wrote:
> > > > In fact I had successfully done postcopy migration of sPAPR guest with
> > > > this setup.
> > > 
> > > Interesting - I'd not got that far myself on power; I was hitting a problem
> > > loading htab ( htab_load() bad index 2113929216 (14848+0 entries) in htab stream (htab_shift=25) )
> > > 
> > > Did you have to make any changes to the qemu code to get that happy?
> > 
> > I should have mentioned that I tried only QEMU driven migration within
> > the same host using wp3-postcopy branch of your tree. I don't see the
> > above issue.
> > 
> > (qemu) info migrate
> > capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off compress: off x-postcopy-ram: on
> > Migration status: completed
> > total time: 39432 milliseconds
> > downtime: 162 milliseconds
> > setup: 14 milliseconds
> > transferred ram: 1297209 kbytes
> > throughput: 270.72 mbps
> > remaining ram: 0 kbytes
> > total ram: 4194560 kbytes
> > duplicate: 734015 pages
> > skipped: 0 pages
> > normal: 318469 pages
> > normal bytes: 1273876 kbytes
> > dirty sync count: 4
> > 
> > I will try migration between different hosts soon and check.
> 
> I hit that on the same host; are you sure you've switched into postcopy mode;
> i.e. issued a migrate_start_postcopy before the end of migration?

Sorry I was following your discussion with Li in this thread

https://www.marc.info/?l=qemu-devel&m=143035620026744&w=4

and it wasn't obvious to me that anything apart from turning on the
x-postcopy-ram capability was required :(

So I do see the problem now.

At the source
-
Error reading data from KVM HTAB fd: Bad file descriptor
Segmentation fault

At the target
-
htab_load() bad index 2113929216 (14336+0 entries) in htab stream (htab_shift=25)
qemu-system-ppc64: error while loading state section id 56(spapr/htab)
qemu-system-ppc64: postcopy_ram_listen_thread: loadvm failed: -22
qemu-system-ppc64: VQ 0 size 0x100 Guest index 0x0 inconsistent with Host index 0x1f: delta 0xffe1
qemu-system-ppc64: error while loading state for instance 0x0 of device 'pci@8002000:00.0/virtio-net'
*** Error in `./ppc64-softmmu/qemu-system-ppc64': corrupted double-linked list: 0x0100241234a0 ***
=== Backtrace: =
/lib64/power8/libc.so.6
Segmentation fault


Re: [Qemu-devel] [PATCH 19/23] userfaultfd: activate syscall

2015-09-08 Thread Michael Ellerman
On Tue, 2015-09-08 at 17:14 +1000, Michael Ellerman wrote:
> On Tue, 2015-09-08 at 12:09 +0530, Bharata B Rao wrote:
> > On Tue, Sep 08, 2015 at 04:08:06PM +1000, Michael Ellerman wrote:
> > > Hmm, not for me. See below.
> > > 
> > > What setup were you testing on Bharata?
> > 
> > I was on commit a94572f5799dd of userfault21 branch in Andrea's tree
> > git://git.kernel.org/pub/scm/linux/kernel/git/andrea/aa.git
> > 
> > #uname -a
> > Linux 4.1.0-rc8+ #1 SMP Tue Aug 11 11:33:50 IST 2015 ppc64le ppc64le ppc64le GNU/Linux
> > 
> > In fact I had successfully done postcopy migration of sPAPR guest with
> > this setup.
> 
> OK, do you mind testing mainline with the same setup to see if the selftest passes?

Ah, I just tried it on big endian and it works. So it seems to not work on
little endian for some reason, /probably/ a test case bug?

cheers



Re: [Qemu-devel] [PATCH 19/23] userfaultfd: activate syscall

2015-09-08 Thread Bharata B Rao
On Tue, Sep 08, 2015 at 09:59:47AM +0100, Dr. David Alan Gilbert wrote:
> * Bharata B Rao (bhar...@linux.vnet.ibm.com) wrote:
> > In fact I had successfully done postcopy migration of sPAPR guest with
> > this setup.
> 
> Interesting - I'd not got that far myself on power; I was hitting a problem
> loading htab ( htab_load() bad index 2113929216 (14848+0 entries) in htab stream (htab_shift=25) )
> 
> Did you have to make any changes to the qemu code to get that happy?

I should have mentioned that I tried only QEMU driven migration within
the same host using wp3-postcopy branch of your tree. I don't see the
above issue.

(qemu) info migrate
capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off compress: off x-postcopy-ram: on
Migration status: completed
total time: 39432 milliseconds
downtime: 162 milliseconds
setup: 14 milliseconds
transferred ram: 1297209 kbytes
throughput: 270.72 mbps
remaining ram: 0 kbytes
total ram: 4194560 kbytes
duplicate: 734015 pages
skipped: 0 pages
normal: 318469 pages
normal bytes: 1273876 kbytes
dirty sync count: 4

I will try migration between different hosts soon and check.

Regards,
Bharata.


Re: [Qemu-devel] [PATCH 19/23] userfaultfd: activate syscall

2015-09-08 Thread Michael Ellerman
On Tue, 2015-09-08 at 12:09 +0530, Bharata B Rao wrote:
> On Tue, Sep 08, 2015 at 04:08:06PM +1000, Michael Ellerman wrote:
> > On Wed, 2015-08-12 at 10:53 +0530, Bharata B Rao wrote:
> > > On Tue, Aug 11, 2015 at 03:48:26PM +0200, Andrea Arcangeli wrote:
> > > > Hello Bharata,
> > > > 
> > > > On Tue, Aug 11, 2015 at 03:37:29PM +0530, Bharata B Rao wrote:
> > > > > Maybe it is a bit late to bring this up, but I needed the following fix to the userfault21 branch of your git tree to compile on powerpc.
> > > > 
> > > > Not late, just in time. I increased the number of syscalls in earlier versions; it must have gotten lost during a rebase with rejects, sorry.
> > > > 
> > > > I applied it to my tree and it can be applied to -mm and linux-next, thanks!
> > > > 
> > > > The syscalls for arm32 are also ready and on their way to the arm tree, and the testsuite worked fine there. ppc should also work fine; if you could confirm that, it'd be interesting. Just beware that I got a typo in the testcase:
> > > 
> > > The testsuite passes on powerpc.
> > > 
> > > 
> > > running userfaultfd
> > > 
> > > nr_pages: 2040, nr_pages_per_cpu: 170
> > > bounces: 31, mode: rnd racing ver poll, userfaults: 80 43 23 23 15 16 12 1 2 96 13 128
> > > bounces: 30, mode: racing ver poll, userfaults: 35 54 62 49 47 48 2 8 0 78 1 0
> > > bounces: 29, mode: rnd ver poll, userfaults: 114 153 70 106 78 57 143 92 114 96 1 0
> > > bounces: 28, mode: ver poll, userfaults: 96 81 5 45 83 19 98 28 1 145 23 2
> > > bounces: 27, mode: rnd racing poll, userfaults: 54 65 60 54 45 49 1 2 1 2 71 20
> > > bounces: 26, mode: racing poll, userfaults: 90 83 35 29 37 35 30 42 3 4 49 6
> > > bounces: 25, mode: rnd poll, userfaults: 52 50 178 112 51 41 23 42 18 99 59 0
> > > bounces: 24, mode: poll, userfaults: 136 101 83 260 84 29 16 88 1 6 160 57
> > > bounces: 23, mode: rnd racing ver, userfaults: 141 197 158 183 39 49 3 52 8 3 6 0
> > > bounces: 22, mode: racing ver, userfaults: 242 266 244 180 162 32 87 43 31 40 34 0
> > > bounces: 21, mode: rnd ver, userfaults: 636 158 175 24 253 104 48 8 0 0 0 0
> > > bounces: 20, mode: ver, userfaults: 531 204 225 117 129 107 11 143 76 31 1 0
> > > bounces: 19, mode: rnd racing, userfaults: 303 169 225 145 59 219 37 0 0 0 0 0
> > > bounces: 18, mode: racing, userfaults: 374 372 37 144 126 90 25 12 15 17 0 0
> > > bounces: 17, mode: rnd, userfaults: 313 412 134 108 80 99 7 56 85 0 0 0
> > > bounces: 16, mode:, userfaults: 431 58 87 167 120 113 98 60 14 8 48 0
> > > bounces: 15, mode: rnd racing ver poll, userfaults: 41 40 25 28 37 24 0 0 0 0 180 75
> > > bounces: 14, mode: racing ver poll, userfaults: 43 53 30 28 25 15 19 0 0 0 0 30
> > > bounces: 13, mode: rnd ver poll, userfaults: 136 91 114 91 92 79 114 77 75 68 1 2
> > > bounces: 12, mode: ver poll, userfaults: 92 120 114 76 153 75 132 157 83 81 10 1
> > > bounces: 11, mode: rnd racing poll, userfaults: 50 72 69 52 53 48 46 59 57 51 37 1
> > > bounces: 10, mode: racing poll, userfaults: 33 49 38 68 35 63 57 49 49 47 25 10
> > > bounces: 9, mode: rnd poll, userfaults: 167 150 67 123 39 75 1 2 9 125 1 1
> > > bounces: 8, mode: poll, userfaults: 147 102 20 87 5 27 118 14 104 40 21 28
> > > bounces: 7, mode: rnd racing ver, userfaults: 305 254 208 74 59 96 36 14 11 7 4 5
> > > bounces: 6, mode: racing ver, userfaults: 290 114 191 94 162 114 34 6 6 32 23 2
> > > bounces: 5, mode: rnd ver, userfaults: 370 381 22 273 21 106 17 55 0 0 0 0
> > > bounces: 4, mode: ver, userfaults: 328 279 179 191 74 86 95 15 13 10 0 0
> > > bounces: 3, mode: rnd racing, userfaults: 222 215 164 70 5 20 179 0 34 3 0 0
> > > bounces: 2, mode: racing, userfaults: 316 385 112 160 225 5 30 49 42 2 4 0
> > > bounces: 1, mode: rnd, userfaults: 273 139 253 176 163 71 85 2 0 0 0 0
> > > bounces: 0, mode:, userfaults: 165 212 633 13 24 66 24 27 15 0 10 1
> > > [PASS]
> > 
> > Hmm, not for me. See below.
> > 
> > What setup were you testing on Bharata?
> 
> I was on commit a94572f5799dd of userfault21 branch in Andrea's tree
> git://git.kernel.org/pub/scm/linux/kernel/git/andrea/aa.git
> 
> #uname -a
> Linux 4.1.0-rc8+ #1 SMP Tue Aug 11 11:33:50 IST 2015 ppc64le ppc64le ppc64le GNU/Linux
> 
> In fact I had successfully done postcopy migration of sPAPR guest with
> this setup.

OK, do you mind testing mainline with the same setup to see if the selftest passes?

cheers




Re: [Qemu-devel] [PATCH 19/23] userfaultfd: activate syscall

2015-09-07 Thread Bharata B Rao
On Tue, Sep 08, 2015 at 04:08:06PM +1000, Michael Ellerman wrote:
> On Wed, 2015-08-12 at 10:53 +0530, Bharata B Rao wrote:
> > On Tue, Aug 11, 2015 at 03:48:26PM +0200, Andrea Arcangeli wrote:
> > > Hello Bharata,
> > > 
> > > On Tue, Aug 11, 2015 at 03:37:29PM +0530, Bharata B Rao wrote:
> > > > Maybe it is a bit late to bring this up, but I needed the following fix to the userfault21 branch of your git tree to compile on powerpc.
> > > 
> > > Not late, just in time. I increased the number of syscalls in earlier versions; it must have gotten lost during a rebase with rejects, sorry.
> > > 
> > > I applied it to my tree and it can be applied to -mm and linux-next, thanks!
> > > 
> > > The syscalls for arm32 are also ready and on their way to the arm tree, and the testsuite worked fine there. ppc should also work fine; if you could confirm that, it'd be interesting. Just beware that I got a typo in the testcase:
> > 
> > The testsuite passes on powerpc.
> > 
> > 
> > running userfaultfd
> > 
> > nr_pages: 2040, nr_pages_per_cpu: 170
> > bounces: 31, mode: rnd racing ver poll, userfaults: 80 43 23 23 15 16 12 1 2 96 13 128
> > bounces: 30, mode: racing ver poll, userfaults: 35 54 62 49 47 48 2 8 0 78 1 0
> > bounces: 29, mode: rnd ver poll, userfaults: 114 153 70 106 78 57 143 92 114 96 1 0
> > bounces: 28, mode: ver poll, userfaults: 96 81 5 45 83 19 98 28 1 145 23 2
> > bounces: 27, mode: rnd racing poll, userfaults: 54 65 60 54 45 49 1 2 1 2 71 20
> > bounces: 26, mode: racing poll, userfaults: 90 83 35 29 37 35 30 42 3 4 49 6
> > bounces: 25, mode: rnd poll, userfaults: 52 50 178 112 51 41 23 42 18 99 59 0
> > bounces: 24, mode: poll, userfaults: 136 101 83 260 84 29 16 88 1 6 160 57
> > bounces: 23, mode: rnd racing ver, userfaults: 141 197 158 183 39 49 3 52 8 3 6 0
> > bounces: 22, mode: racing ver, userfaults: 242 266 244 180 162 32 87 43 31 40 34 0
> > bounces: 21, mode: rnd ver, userfaults: 636 158 175 24 253 104 48 8 0 0 0 0
> > bounces: 20, mode: ver, userfaults: 531 204 225 117 129 107 11 143 76 31 1 0
> > bounces: 19, mode: rnd racing, userfaults: 303 169 225 145 59 219 37 0 0 0 0 0
> > bounces: 18, mode: racing, userfaults: 374 372 37 144 126 90 25 12 15 17 0 0
> > bounces: 17, mode: rnd, userfaults: 313 412 134 108 80 99 7 56 85 0 0 0
> > bounces: 16, mode:, userfaults: 431 58 87 167 120 113 98 60 14 8 48 0
> > bounces: 15, mode: rnd racing ver poll, userfaults: 41 40 25 28 37 24 0 0 0 0 180 75
> > bounces: 14, mode: racing ver poll, userfaults: 43 53 30 28 25 15 19 0 0 0 0 30
> > bounces: 13, mode: rnd ver poll, userfaults: 136 91 114 91 92 79 114 77 75 68 1 2
> > bounces: 12, mode: ver poll, userfaults: 92 120 114 76 153 75 132 157 83 81 10 1
> > bounces: 11, mode: rnd racing poll, userfaults: 50 72 69 52 53 48 46 59 57 51 37 1
> > bounces: 10, mode: racing poll, userfaults: 33 49 38 68 35 63 57 49 49 47 25 10
> > bounces: 9, mode: rnd poll, userfaults: 167 150 67 123 39 75 1 2 9 125 1 1
> > bounces: 8, mode: poll, userfaults: 147 102 20 87 5 27 118 14 104 40 21 28
> > bounces: 7, mode: rnd racing ver, userfaults: 305 254 208 74 59 96 36 14 11 7 4 5
> > bounces: 6, mode: racing ver, userfaults: 290 114 191 94 162 114 34 6 6 32 23 2
> > bounces: 5, mode: rnd ver, userfaults: 370 381 22 273 21 106 17 55 0 0 0 0
> > bounces: 4, mode: ver, userfaults: 328 279 179 191 74 86 95 15 13 10 0 0
> > bounces: 3, mode: rnd racing, userfaults: 222 215 164 70 5 20 179 0 34 3 0 0
> > bounces: 2, mode: racing, userfaults: 316 385 112 160 225 5 30 49 42 2 4 0
> > bounces: 1, mode: rnd, userfaults: 273 139 253 176 163 71 85 2 0 0 0 0
> > bounces: 0, mode:, userfaults: 165 212 633 13 24 66 24 27 15 0 10 1
> > [PASS]
> 
> Hmm, not for me. See below.
> 
> What setup were you testing on Bharata?

I was on commit a94572f5799dd of userfault21 branch in Andrea's tree
git://git.kernel.org/pub/scm/linux/kernel/git/andrea/aa.git

#uname -a
Linux 4.1.0-rc8+ #1 SMP Tue Aug 11 11:33:50 IST 2015 ppc64le ppc64le ppc64le GNU/Linux

In fact I had successfully done postcopy migration of sPAPR guest with
this setup.

> 
> Mine is:
> 
> $ uname -a
> Linux lebuntu 4.2.0-09705-g3a166acc1432 #2 SMP Tue Sep 8 15:18:00 AEST 2015 ppc64le ppc64le ppc64le GNU/Linux
> 
> Which is 7d9071a09502 plus a couple of powerpc patches.
> 
> $ zgrep USERFAULTFD /proc/config.gz
> CONFIG_USERFAULTFD=y
> 
> $ sudo ./userfaultfd 128 32
> nr_pages: 2048, nr_pages_per_cpu: 128
> bounces: 31, mode: rnd racing ver poll, error mutex 2 2
> error mutex 2 10

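For reference, the test being run in these messages is the userfaultfd selftest that ships in the kernel tree. A typical way to build and run it (a sketch, assuming a kernel source checkout of that era; the arguments match Michael's invocation above - as I read the test, the first is the size of the test area in MiB and the second the number of bounces):

# from the top of a kernel source tree
$ cd tools/testing/selftests/vm
$ make
$ sudo ./userfaultfd 128 32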

Re: [Qemu-devel] [PATCH 19/23] userfaultfd: activate syscall

2015-09-07 Thread Michael Ellerman
On Wed, 2015-08-12 at 10:53 +0530, Bharata B Rao wrote:
> On Tue, Aug 11, 2015 at 03:48:26PM +0200, Andrea Arcangeli wrote:
> > Hello Bharata,
> > 
> > On Tue, Aug 11, 2015 at 03:37:29PM +0530, Bharata B Rao wrote:
> > > Maybe it is a bit late to bring this up, but I needed the following fix to the userfault21 branch of your git tree to compile on powerpc.
> > 
> > Not late, just in time. I increased the number of syscalls in earlier versions; it must have gotten lost during a rebase with rejects, sorry.
> > 
> > I applied it to my tree and it can be applied to -mm and linux-next, thanks!
> > 
> > The syscalls for arm32 are also ready and on their way to the arm tree, and the testsuite worked fine there. ppc should also work fine; if you could confirm that, it'd be interesting. Just beware that I got a typo in the testcase:
> 
> The testsuite passes on powerpc.
> 
> 
> running userfaultfd
> 
> nr_pages: 2040, nr_pages_per_cpu: 170
> bounces: 31, mode: rnd racing ver poll, userfaults: 80 43 23 23 15 16 12 1 2 96 13 128
> bounces: 30, mode: racing ver poll, userfaults: 35 54 62 49 47 48 2 8 0 78 1 0
> bounces: 29, mode: rnd ver poll, userfaults: 114 153 70 106 78 57 143 92 114 96 1 0
> bounces: 28, mode: ver poll, userfaults: 96 81 5 45 83 19 98 28 1 145 23 2
> bounces: 27, mode: rnd racing poll, userfaults: 54 65 60 54 45 49 1 2 1 2 71 20
> bounces: 26, mode: racing poll, userfaults: 90 83 35 29 37 35 30 42 3 4 49 6
> bounces: 25, mode: rnd poll, userfaults: 52 50 178 112 51 41 23 42 18 99 59 0
> bounces: 24, mode: poll, userfaults: 136 101 83 260 84 29 16 88 1 6 160 57
> bounces: 23, mode: rnd racing ver, userfaults: 141 197 158 183 39 49 3 52 8 3 6 0
> bounces: 22, mode: racing ver, userfaults: 242 266 244 180 162 32 87 43 31 40 34 0
> bounces: 21, mode: rnd ver, userfaults: 636 158 175 24 253 104 48 8 0 0 0 0
> bounces: 20, mode: ver, userfaults: 531 204 225 117 129 107 11 143 76 31 1 0
> bounces: 19, mode: rnd racing, userfaults: 303 169 225 145 59 219 37 0 0 0 0 0
> bounces: 18, mode: racing, userfaults: 374 372 37 144 126 90 25 12 15 17 0 0
> bounces: 17, mode: rnd, userfaults: 313 412 134 108 80 99 7 56 85 0 0 0
> bounces: 16, mode:, userfaults: 431 58 87 167 120 113 98 60 14 8 48 0
> bounces: 15, mode: rnd racing ver poll, userfaults: 41 40 25 28 37 24 0 0 0 0 180 75
> bounces: 14, mode: racing ver poll, userfaults: 43 53 30 28 25 15 19 0 0 0 0 30
> bounces: 13, mode: rnd ver poll, userfaults: 136 91 114 91 92 79 114 77 75 68 1 2
> bounces: 12, mode: ver poll, userfaults: 92 120 114 76 153 75 132 157 83 81 10 1
> bounces: 11, mode: rnd racing poll, userfaults: 50 72 69 52 53 48 46 59 57 51 37 1
> bounces: 10, mode: racing poll, userfaults: 33 49 38 68 35 63 57 49 49 47 25 10
> bounces: 9, mode: rnd poll, userfaults: 167 150 67 123 39 75 1 2 9 125 1 1
> bounces: 8, mode: poll, userfaults: 147 102 20 87 5 27 118 14 104 40 21 28
> bounces: 7, mode: rnd racing ver, userfaults: 305 254 208 74 59 96 36 14 11 7 4 5
> bounces: 6, mode: racing ver, userfaults: 290 114 191 94 162 114 34 6 6 32 23 2
> bounces: 5, mode: rnd ver, userfaults: 370 381 22 273 21 106 17 55 0 0 0 0
> bounces: 4, mode: ver, userfaults: 328 279 179 191 74 86 95 15 13 10 0 0
> bounces: 3, mode: rnd racing, userfaults: 222 215 164 70 5 20 179 0 34 3 0 0
> bounces: 2, mode: racing, userfaults: 316 385 112 160 225 5 30 49 42 2 4 0
> bounces: 1, mode: rnd, userfaults: 273 139 253 176 163 71 85 2 0 0 0 0
> bounces: 0, mode:, userfaults: 165 212 633 13 24 66 24 27 15 0 10 1
> [PASS]

Hmm, not for me. See below.

What setup were you testing on Bharata?

Mine is:

$ uname -a
Linux lebuntu 4.2.0-09705-g3a166acc1432 #2 SMP Tue Sep 8 15:18:00 AEST 2015 ppc64le ppc64le ppc64le GNU/Linux

Which is 7d9071a09502 plus a couple of powerpc patches.

$ zgrep USERFAULTFD /proc/config.gz
CONFIG_USERFAULTFD=y

$ sudo ./userfaultfd 128 32
nr_pages: 2048, nr_pages_per_cpu: 128
bounces: 31, mode: rnd racing ver poll, error mutex 2 2
error mutex 2 10
error mutex 2 15
error mutex 2 21
error mutex 2 22
error mutex 2 27
error mutex 2 36
error mutex 2 39
error mutex 2 40
error mutex 2 41
error mutex 2 43
error mutex 2 75
error mutex 2 79
error mutex 2 83
error mutex 2 100
error mutex 2 108
error mutex 2 110
error mutex 2 114
error mutex 2 119
error mutex 2 120
error mutex 2 135
error mutex 2 137
error mutex 2 141
error mutex 2 142
error mutex 2 144
error mutex 2 145
error mutex 2 150
error mutex 2 151
error mutex 2 159
error mutex 2 161
error mutex 2 169
error mutex 2 172
error mutex 2 174
error mutex 2 175
error mutex 2 176
error mutex 2 178
error mutex 2 188
error mutex 2 194
error mutex 2 208
error mutex 2 210
error mutex 2 212
error mutex 2 220
error mutex 2 223
error mutex 2 224
error mutex 2 226
error mutex 2 236
error mutex 2 249
error mutex 2 252
error mutex 2 255
error mutex 2 256
error mutex 2 267
error mutex 2 277
error mutex 2 284
error mutex 2 295
error mutex 2 302
e