Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-26 Thread Kenneth Lee
On Tue, Nov 20, 2018 at 10:30:55AM +0800, Kenneth Lee wrote:
> On Mon, Nov 19, 2018 at 12:48:01PM +0200, Leon Romanovsky wrote:
> > On Mon, Nov 19, 2018 at 05:19:10PM +0800, Kenneth Lee wrote:
> > > On Mon, Nov 19, 2018 at 05:14:05PM +0800, Kenneth Lee wrote:
> > > > On Thu, Nov 15, 2018 at 04:54:55PM +0200, Leon Romanovsky wrote:

Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-23 Thread Kenneth Lee
On Fri, Nov 23, 2018 at 11:05:04AM -0700, Jason Gunthorpe wrote:
> On Fri, Nov 23, 2018 at 04:02:42PM +0800, Kenneth Lee wrote:
> 
> > It is already part of Jean's patchset. And that's why I built my solution on
> > VFIO in the first place. But I think the concept of SVA and PASID is not
> > compatible with the original VFIO concept space. You would not share your
> > whole address space with a device in a virtual machine manager, would you?
> 
> Why not? That seems to fit VFIO's space just fine to me.. You might
> need a new upcall to create a full MM registration, but that doesn't
> seem unsuited.

Because the VM manager (such as qemu) does not want to share its whole address
space with the device. That would be a security problem.

> 
> Part of the point here is you should try to make sensible revisions to
> existing subsystems before just inventing a new thing...
> 
> VFIO is deeply connected to the IOMMU, so enabling a more general
> IOMMU-based approach seems perfectly fine to me..
> 
> > > Once the VFIO driver knows about this as a generic capability then the
> > > device it exposes to userspace would use CPU addresses instead of DMA
> > > addresses.
> > > 
> > > The question is if your driver needs much more than the device
> > > agnostic generic services VFIO provides.
> > > 
> > > I'm not sure what you have in mind with resource management.. It is
> > > hard to revoke resources from userspace, unless you are doing
> > > kernel syscalls, but then why do all this?
> > 
> > Say, I have 1024 queues in my accelerator. I can get one by opening the
> > device and attaching it to the fd. If the process exits by any means, the
> > queue is returned with the release of the fd. But if it is an mdev, it
> > will still be there and someone should tell the allocator it is available
> > again. This is not easy to design in user space.
> 
> ?? why wouldn't the mdev track the queues assigned using the existing
> open/close/ioctl callbacks?
> 
> That is the basic flow I would expect:
> 
>  open(/dev/vfio)
>  ioctl(unity map entire process MM to mdev with IOMMU)
> 
>  // Create a HW queue and link the PASID in the HW to this HW queue
>  struct hw queue[..];
>  ioctl(create HW queue)
> 
>  // Get BAR doorbell memory for the queue
>  bar = mmap()
> 
>  // Submit work to the queue using CPU addresses
>  queue[0] = ...
>  writel(bar[..], &queue);
> 
>  // Queue, SVA, etc is cleaned up when the VFIO closes
>  close()

This is not the way that you can use mdev. To use mdev, you have to:

1. unbind the kernel driver from the device, and rebind it to the vfio driver
2. for 0 to 1024: echo uuid > /sys/.../the_dev/mdev/create to create all the mdevs
3. a virtual iommu_group will be created in /dev/vfio/* for every mdev

Now you can do this in your application (even without considering the PASID):

container = open(/dev/vfio);
ioctl(container, setting);
group = open(/dev/vfio/my_group_for_particular_mdev);
ioctl(container, attach_group, group);
device = ioctl(group, get_device);
mmap(device);
ioctl(container, set_dma_operation);
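
For concreteness, a minimal sketch of that sequence against the stock VFIO
type-1 userspace ABI might look like the following (the group number, mdev
UUID, and mapping size are hypothetical, and error handling is elided):

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int main(void)
{
        int container, group, device;
        struct vfio_group_status status = { .argsz = sizeof(status) };
        struct vfio_iommu_type1_dma_map map = { .argsz = sizeof(map) };

        container = open("/dev/vfio/vfio", O_RDWR);

        /* group number comes from the mdev's virtual iommu_group (hypothetical) */
        group = open("/dev/vfio/42", O_RDWR);
        ioctl(group, VFIO_GROUP_GET_STATUS, &status);
        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
        ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

        /* the mdev is addressed by the UUID used at create time (hypothetical) */
        device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD,
                       "83b8f4f2-509f-382f-3c1e-e6bfe0fa1001");

        /* map 1MB of anonymous memory for DMA at IOVA 0 */
        map.vaddr = (uintptr_t)mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE,
                                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        map.iova = 0;
        map.size = 1 << 20;
        map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
        ioctl(container, VFIO_IOMMU_MAP_DMA, &map);

        (void)device;
        return 0;
}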

Then you have to make a design decision: how to find an available mdev for use,
and how to return it.

We considered creating only one mdev and allocating a queue when the device is
opened. But the VFIO maintainer, Alex, did not agree, saying it broke the
original idea of VFIO.

-Kenneth
> 
> Presumably the kernel has to handle the PASID and related for security
> reasons, so they shouldn't go to userspace?
> 
> If there is something missing in vfio to do this, it looks pretty
> small to me..
> 
> Jason

-- 
-K

Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-23 Thread Jason Gunthorpe
On Fri, Nov 23, 2018 at 04:02:42PM +0800, Kenneth Lee wrote:

> It is already part of Jean's patchset. And that's why I built my solution on
> VFIO in the first place. But I think the concept of SVA and PASID is not
> compatible with the original VFIO concept space. You would not share your
> whole address space with a device in a virtual machine manager, would you?

Why not? That seems to fit VFIO's space just fine to me.. You might
need a new upcall to create a full MM registration, but that doesn't
seem unsuited.

Part of the point here is you should try to make sensible revisions to
existing subsystems before just inventing a new thing...

VFIO is deeply connected to the IOMMU, so enabling a more general IOMMU-based
approach seems perfectly fine to me..

> > Once the VFIO driver knows about this as a generic capability then the
> > device it exposes to userspace would use CPU addresses instead of DMA
> > addresses.
> > 
> > The question is if your driver needs much more than the device
> > agnostic generic services VFIO provides.
> > 
> > I'm not sure what you have in mind with resource management.. It is
> > hard to revoke resources from userspace, unless you are doing
> > kernel syscalls, but then why do all this?
> 
> Say, I have 1024 queues in my accelerator. I can get one by opening the device
> and attaching it to the fd. If the process exits by any means, the queue is
> returned with the release of the fd. But if it is an mdev, it will still be
> there and someone should tell the allocator it is available again. This is
> not easy to design in user space.

?? why wouldn't the mdev track the queues assigned using the existing
open/close/ioctl callbacks?

That is the basic flow I would expect:

 open(/dev/vfio)
 ioctl(unity map entire process MM to mdev with IOMMU)

 // Create a HW queue and link the PASID in the HW to this HW queue
 struct hw queue[..];
 ioctl(create HW queue)

 // Get BAR doorbell memory for the queue
 bar = mmap()

 // Submit work to the queue using CPU addresses
 queue[0] = ...
 writel(bar[..], &queue);

 // Queue, SVA, etc is cleaned up when the VFIO closes
 close()
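
As a hedged sketch of the tail end of that flow once the BAR is mmap'ed: the
descriptor layout, the VALID bit, and the doorbell offset below are invented
for illustration, not taken from any real device:

#include <stdint.h>

/* hypothetical descriptor format; a real device defines its own */
struct desc {
        uint64_t src_va;   /* plain process VA, usable because of SVA/PASID */
        uint64_t len;
        uint64_t flags;    /* bit 0: hypothetical VALID bit */
};

static void submit(volatile uint32_t *bar, struct desc *queue,
                   unsigned int tail, void *buf, uint64_t len)
{
        queue[tail].src_va = (uintptr_t)buf;
        queue[tail].len = len;
        queue[tail].flags = 1;

        /* make the descriptor visible before ringing the doorbell;
         * a real driver needs the architecture's proper write barrier */
        __sync_synchronize();

        /* hypothetical doorbell register at offset 0 of the BAR */
        bar[0] = tail;
}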

Presumably the kernel has to handle the PASID and related for security
reasons, so they shouldn't go to userspace?

If there is something missing in vfio to do this, it looks pretty
small to me..

Jason


Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-23 Thread Kenneth Lee
On Wed, Nov 21, 2018 at 07:58:40PM -0700, Jason Gunthorpe wrote:
> On Wed, Nov 21, 2018 at 02:08:05PM +0800, Kenneth Lee wrote:
> 
> > > But considering Jean's SVA stuff seems based on mmu notifiers, I have
> > > a hard time believing that it has any different behavior from RDMA's
> > > ODP, and if it does have different behavior, then it is probably just
> > > a bug in the ODP implementation.
> > 
> > As Jean has explained, his solution is based on page table sharing. I think
> > ODP should also consider this new feature.
> 
> Shared page tables would require the HW to walk the page table format
> of the CPU directly, not sure how that would be possible for ODP?
> 
> Presumably the implementation for ARM relies on the IOMMU hardware
> doing this?

Yes, that is the idea. And since Jean is merging the AMD and Intel solutions
together, I assume they can do the same. This is also the reason I want to solve
my problem on top of IOMMU directly. But anyway, let me try to see if I can
merge the logic with ODP.

> 
> > > > > If all your driver needs is to mmap some PCI bar space, route
> > > > > interrupts and do DMA mapping then mediated VFIO is probably a good
> > > > > choice. 
> > > > 
> > > > Yes. That is what is done in our RFCv1/v2. But we accepted Jerome's
> > > > opinion and tried not to add complexity to the mm subsystem.
> > > 
> > > Why would a mediated VFIO driver touch the mm subsystem? Sounds like
> > > you don't have a VFIO driver if it needs to do stuff like that...
> > 
> > VFIO has no ODP-like solution, and if we want to solve the fork problem, we
> > have to make some changes to the iommu and the fork procedure. Further, VFIO
> > takes every queue as an independent device. This creates a lot of trouble
> > for resource management. For example, you will need a manager process to
> > withdraw the unused devices, and you need to let the user process know the
> > PASID of the queue, and so on.
> 
> Well, I would think you'd add SVA support to the VFIO driver as a
> generic capability - it seems pretty useful for any VFIO user as it
> avoids all the kernel upcalls to do memory pinning and DMA address
> translation.

It is already part of Jean's patchset. And that's why I built my solution on
VFIO in the first place. But I think the concept of SVA and PASID is not
compatible with the original VFIO concept space. You would not share your whole
address space with a device in a virtual machine manager, would you? And if you
can manage to have a separate mdev for your virtual machine, why bother to set
a PASID on it? The answer to those problems, I think, will be Intel's Scalable
IO Virtualization. For an accelerator, the requirement is simple: get a handle
to the device, attach the process's mm to the handle by sharing the process's
page table with its iommu, indexed by PASID, and start the communication...
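
As a purely hypothetical user-side rendering of that requirement (none of
these names are a real ABI, and the eventual uacce interface may well differ):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
        /* hypothetical device node and ioctl, for illustration only */
        int fd = open("/dev/hypothetical-acc", O_RDWR);

        /* the kernel binds current->mm to the device: it shares the process
         * page table with the IOMMU, indexed by a newly allocated PASID */
        ioctl(fd, 0 /* ACC_BIND_MM, hypothetical */);

        /* map the queue/doorbell region, then communicate with plain
         * process pointers; everything is released when fd is closed */
        void *q = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        (void)q;
        return 0;
}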

> 
> Once the VFIO driver knows about this as a generic capability then the
> device it exposes to userspace would use CPU addresses instead of DMA
> addresses.
> 
> The question is if your driver needs much more than the device
> agnostic generic services VFIO provides.
> 
> I'm not sure what you have in mind with resource management.. It is
> hard to revoke resources from userspace, unless you are doing
> kernel syscalls, but then why do all this?

Say, I have 1024 queues in my accelerator. I can get one by opening the device
and attaching it to the fd. If the process exits by any means, the queue is
returned with the release of the fd. But if it is an mdev, it will still be
there and someone should tell the allocator it is available again. This is not
easy to design in user space.

> 
> Jason

-- 


Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-21 Thread Jason Gunthorpe
On Wed, Nov 21, 2018 at 02:08:05PM +0800, Kenneth Lee wrote:

> > But considering Jean's SVA stuff seems based on mmu notifiers, I have
> > a hard time believing that it has any different behavior from RDMA's
> > ODP, and if it does have different behavior, then it is probably just
> > a bug in the ODP implementation.
> 
> As Jean has explained, his solution is based on page table sharing. I think
> ODP should also consider this new feature.

Shared page tables would require the HW to walk the page table format
of the CPU directly, not sure how that would be possible for ODP?

Presumably the implementation for ARM relies on the IOMMU hardware
doing this?

> > > > If all your driver needs is to mmap some PCI bar space, route
> > > > interrupts and do DMA mapping then mediated VFIO is probably a good
> > > > choice. 
> > > 
> > > Yes. That is what is done in our RFCv1/v2. But we accepted Jerome's
> > > opinion and tried not to add complexity to the mm subsystem.
> > 
> > Why would a mediated VFIO driver touch the mm subsystem? Sounds like
> > you don't have a VFIO driver if it needs to do stuff like that...
> 
> VFIO has no ODP-like solution, and if we want to solve the fork problem, we
> have to make some changes to the iommu and the fork procedure. Further, VFIO
> takes every queue as an independent device. This creates a lot of trouble for
> resource management. For example, you will need a manager process to withdraw
> the unused devices, and you need to let the user process know the PASID of
> the queue, and so on.

Well, I would think you'd add SVA support to the VFIO driver as a
generic capability - it seems pretty useful for any VFIO user as it
avoids all the kernel upcalls to do memory pinning and DMA address
translation.

Once the VFIO driver knows about this as a generic capability then the
device it exposes to userspace would use CPU addresses instead of DMA
addresses.

The question is if your driver needs much more than the device
agnostic generic services VFIO provides.

I'm not sure what you have in mind with resource management.. It is
hard to revoke resources from userspace, unless you are doing
kernel syscalls, but then why do all this?

Jason


Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-20 Thread Kenneth Lee
On Mon, Nov 19, 2018 at 08:29:39PM -0700, Jason Gunthorpe wrote:
> On Tue, Nov 20, 2018 at 11:07:02AM +0800, Kenneth Lee wrote:
> > On Mon, Nov 19, 2018 at 11:49:54AM -0700, Jason Gunthorpe wrote:
> > > On Mon, Nov 19, 2018 at 05:14:05PM +0800, Kenneth Lee wrote:
> > >  
> > > > If the hardware cannot share page tables with the CPU, we then need to
> > > > have some way to change the device page table. This is what happens in
> > > > ODP: it invalidates the page table in the device upon the mmu_notifier
> > > > callback. But this cannot solve the COW problem: if the user process A
> > > > shares a page P with the device, and A forks a new process B and
> > > > continues to write to the page, then by COW, process B will keep the
> > > > page P, while A will get a new page P'. But you have no way to let the
> > > > device know it should use P' rather than P.
> > > 
> > > Is this true? I thought mmu_notifiers covered all these cases.
> > > 
> > > The mmu_notifier for A should fire if B causes the physical address of
> > > A's pages to change via COW.
> > > 
> > > And this causes the device page tables to re-synchronize.
> > 
> > I don't see such code. The current do_cow_fault() implementation has
> > nothing to do with mmu_notifier.
> 
> Well, that sure sounds like it would be a bug in mmu_notifiers..

Yes, it can be taken that way :) But it is going to be a tough bug.

> 
> But considering Jean's SVA stuff seems based on mmu notifiers, I have
> a hard time believing that it has any different behavior from RDMA's
> ODP, and if it does have different behavior, then it is probably just
> a bug in the ODP implementation.

As Jean has explained, his solution is based on page table sharing. I think ODP
should also consider this new feature.

> 
> > > > In WarpDrive/uacce, we make this simple. If you support an IOMMU and it
> > > > supports SVM/SVA, everything will be fine, just like ODP implicit mode.
> > > > And you don't need to write any code for that, because it has been done
> > > > by the IOMMU framework. If it
> > > 
> > > Looks like the IOMMU code uses mmu_notifier, so it is identical to
> > > IB's ODP. The only difference is that IB tends to have the IOMMU page
> > > table in the device, not in the CPU.
> > > 
> > > The only case I know of that is different is the new-fangled CAPI
> > > stuff where the IOMMU can directly use the CPU's page table and the
> > > IOMMU page table (in device or CPU) is eliminated.

Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-20 Thread Kenneth Lee
On Tue, Nov 20, 2018 at 07:17:44AM +0200, Leon Romanovsky wrote:
> On Tue, Nov 20, 2018 at 11:07:02AM +0800, Kenneth Lee wrote:
> > On Mon, Nov 19, 2018 at 11:49:54AM -0700, Jason Gunthorpe wrote:
> > >
> > > On Mon, Nov 19, 2018 at 05:14:05PM +0800, Kenneth Lee wrote:
> > >
> > > > If the hardware cannot share page tables with the CPU, we then need to
> > > > have some way to change the device page table. This is what happens in
> > > > ODP: it invalidates the page table in the device upon the mmu_notifier
> > > > callback. But this cannot solve the COW problem: if the user process A
> > > > shares a page P with the device, and A forks a new process B and
> > > > continues to write to the page, then by COW, process B will keep the
> > > > page P, while A will get a new page P'. But you have no way to let the
> > > > device know it should use P' rather than P.
> > >
> > > Is this true? I thought mmu_notifiers covered all these cases.
> > >
> > > The mmu_notifier for A should fire if B causes the physical address of
> > > A's pages to change via COW.
> > >
> > > And this causes the device page tables to re-synchronize.
> >
> > I don't see such code. The current do_cow_fault() implementation has
> > nothing to do with mmu_notifier.
> >
> > >
> > > > In WarpDrive/uacce, we make this simple. If you support an IOMMU and it
> > > > supports SVM/SVA, everything will be fine, just like ODP implicit mode.
> > > > And you don't need to write any code for that, because it has been done
> > > > by the IOMMU framework. If it
> > >
> > > Looks like the IOMMU code uses mmu_notifier, so it is identical to
> > > IB's ODP. The only difference is that IB tends to have the IOMMU page
> > > table in the device, not in the CPU.
> > >
> > > The only case I know of that is different is the new-fangled CAPI
> > > stuff where the IOMMU can directly use the CPU's page table and the
> > > IOMMU page table (in device or CPU) is eliminated.
> > >
> >
> > Yes. We are not focusing on the current implementation. As mentioned in the
> > cover letter, we are expecting Jean-Philippe's SVA patches:
> > git://linux-arm.org/linux-jpb.
> >
> > > Anyhow, I don't think a single instance of hardware should justify an
> > > entire new subsystem. Subsystems are hard to make and without multiple
> > > hardware examples there is no way to expect that it would cover any
> > > future use cases.

Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-20 Thread Jean-Philippe Brucker
On 20/11/2018 09:16, Jonathan Cameron wrote:
> +CC Jean-Philippe and iommu list.

Thanks for the Cc, sorry I don't have enough bandwidth to follow this
thread at the moment.

>>>>> In WarpDrive/uacce, we make this simple. If you support an IOMMU and it
>>>>> supports SVM/SVA, everything will be fine, just like ODP implicit mode.
>>>>> And you don't need to write any code for that, because it has been done
>>>>> by the IOMMU framework. If it
>>>>
>>>> Looks like the IOMMU code uses mmu_notifier, so it is identical to
>>>> IB's ODP. The only difference is that IB tends to have the IOMMU page
>>>> table in the device, not in the CPU.
>>>>
>>>> The only case I know of that is different is the new-fangled CAPI
>>>> stuff where the IOMMU can directly use the CPU's page table and the
>>>> IOMMU page table (in device or CPU) is eliminated.
>>>
>>> Yes. We are not focusing on the current implementation. As mentioned in
>>> the cover letter, we are expecting Jean-Philippe's SVA patches:
>>> git://linux-arm.org/linux-jpb.
>>
>> This SVA stuff does not look comparable to CAPI as it still requires
>> maintaining separate IOMMU page tables.

With SVA, we use the same page tables in the IOMMU and CPU. It's the
same pgd pointer; there is no mirroring of mappings. We bind the process
page tables with the device using a PASID (Process Address Space ID).

After fork(), the child's mm is different from the parent's one, and is
not automatically bound to the device. The device driver will have to
issue a new bind() request, and the child mm will be bound with a
different PASID.

There could be a problem if the child inherits the parent's device
handle. Then depending on the device, the child could be able to program
DMA and possibly access the parent's address space. The parent needs to
be aware of that when using the bind() API, and close the device fd in
the child after fork().

We use MMU notifiers for some address space changes:

* The IOTLB needs to be invalidated after any unmap() to the process
address space. On Arm systems the SMMU IOTLBs can be invalidated by the
CPU TLBI instructions, but we still need to invalidate TLBs private to
devices that are arch-agnostic (Address Translation Cache in PCI ATS).

* When the process mm exits, we need to remove the associated PASID
configuration in the IOMMU and invalidate the TLBs.
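
In driver terms, a rough sketch of the bind step described above; the function
name and signature follow the sva/api proposal and should be treated as
assumptions rather than a settled API:

#include <linux/iommu.h>
#include <linux/sched.h>

struct acc_queue {
        int pasid;
};

/* On open(), bind the calling process's mm to the device and record the
 * PASID so it can be programmed into the hardware queue. */
static int acc_open_queue(struct device *dev, struct acc_queue *q)
{
        int ret;

        /* shares current->mm's pgd with the IOMMU under a new PASID
         * (signature assumed from the SVA RFC) */
        ret = iommu_sva_bind_device(dev, current->mm, &q->pasid, 0, q);
        if (ret)
                return ret;

        /* note: after fork(), the child's mm is NOT bound; it would need
         * its own bind() and would receive a different PASID */
        return 0;
}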

Thanks,
Jean


Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-20 Thread Jonathan Cameron
+CC Jean-Philippe and iommu list.


On Mon, 19 Nov 2018 20:29:39 -0700
Jason Gunthorpe  wrote:

> On Tue, Nov 20, 2018 at 11:07:02AM +0800, Kenneth Lee wrote:
> > On Mon, Nov 19, 2018 at 11:49:54AM -0700, Jason Gunthorpe wrote:  
> > > 
> > > On Mon, Nov 19, 2018 at 05:14:05PM +0800, Kenneth Lee wrote:
> > >
> > > > If the hardware cannot share page tables with the CPU, we then need to
> > > > have some way to change the device page table. This is what happens in
> > > > ODP: it invalidates the page table in the device upon the mmu_notifier
> > > > callback. But this cannot solve the COW problem: if the user process A
> > > > shares a page P with the device, and A forks a new process B and
> > > > continues to write to the page, then by COW, process B will keep the
> > > > page P, while A will get a new page P'. But you have no way to let the
> > > > device know it should use P' rather than P.
> > > 
> > > Is this true? I thought mmu_notifiers covered all these cases.
> > > 
> > > The mmu_notifier for A should fire if B causes the physical address of
> > > A's pages to change via COW.
> > > 
> > > And this causes the device page tables to re-synchronize.  
> > 
> > I don't see such code. The current do_cow_fault() implementation has
> > nothing to do with mmu_notifier.
> 
> Well, that sure sounds like it would be a bug in mmu_notifiers..
> 
> But considering Jean's SVA stuff seems based on mmu notifiers, I have
> a hard time believing that it has any different behavior from RDMA's
> ODP, and if it does have different behavior, then it is probably just
> a bug in the ODP implementation.
> 
> > > > In WarpDrive/uacce, we make this simple. If you support an IOMMU and it
> > > > supports SVM/SVA, everything will be fine, just like ODP implicit mode.
> > > > And you don't need to write any code for that, because it has been done
> > > > by the IOMMU framework. If it
> > > 
> > > Looks like the IOMMU code uses mmu_notifier, so it is identical to
> > > IB's ODP. The only difference is that IB tends to have the IOMMU page
> > > table in the device, not in the CPU.
> > > 
> > > The only case I know of that is different is the new-fangled CAPI
> > > stuff where the IOMMU can directly use the CPU's page table and the
> > > IOMMU page table (in device or CPU) is eliminated.  
> >
> > Yes. We are not focusing on the current implementation. As mentioned in the
> > cover letter, we are expecting Jean-Philippe's SVA patches:
> > git://linux-arm.org/linux-jpb.
> 
> This SVA stuff does not look comparable to CAPI as it still requires
> maintaining separate IOMMU page tables.
> 
> Also, those patches from Jean have a lot of references to
> mmu_notifiers (ie look at iommu_mmu_notifier).
> 
> Are you really sure it is actually any different at all?
> 
> > > Anyhow, I don't think a single instance of hardware should justify an
> > > entire new subsystem. Subsystems are hard to make and without multiple
> > > hardware examples there is no way to expect that it would cover any
> > > future use cases.  
> > 
> > Yes. That's our first expectation. We can keep it with our driver. But
> > because there is no user-driver support for any accelerator in the mainline
> > kernel (even the well-known QuickAssist has to be maintained out of tree),
> > we are trying to see if people are interested in working together to solve
> > the problem.

Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Leon Romanovsky
On Tue, Nov 20, 2018 at 11:07:02AM +0800, Kenneth Lee wrote:
> On Mon, Nov 19, 2018 at 11:49:54AM -0700, Jason Gunthorpe wrote:
> >
> > On Mon, Nov 19, 2018 at 05:14:05PM +0800, Kenneth Lee wrote:
> >
> > > If the hardware cannot share page tables with the CPU, we then need to
> > > have some way to change the device page table. This is what happens in
> > > ODP: it invalidates the page table in the device upon the mmu_notifier
> > > callback. But this cannot solve the COW problem: if the user process A
> > > shares a page P with the device, and A forks a new process B and
> > > continues to write to the page, then by COW, process B will keep the
> > > page P, while A will get a new page P'. But you have no way to let the
> > > device know it should use P' rather than P.
> >
> > Is this true? I thought mmu_notifiers covered all these cases.
> >
> > The mmu_notifier for A should fire if B causes the physical address of
> > A's pages to change via COW.
> >
> > And this causes the device page tables to re-synchronize.
>
> I don't see such code. The current do_cow_fault() implementation has nothing
> to do with mmu_notifier.
>
> >
> > > In WarpDrive/uacce, we make this simple. If you support an IOMMU and it
> > > supports SVM/SVA, everything will be fine, just like ODP implicit mode.
> > > And you don't need to write any code for that, because it has been done
> > > by the IOMMU framework. If it
> >
> > Looks like the IOMMU code uses mmu_notifier, so it is identical to
> > IB's ODP. The only difference is that IB tends to have the IOMMU page
> > table in the device, not in the CPU.
> >
> > The only case I know of that is different is the new-fangled CAPI
> > stuff where the IOMMU can directly use the CPU's page table and the
> > IOMMU page table (in device or CPU) is eliminated.
> >
>
> Yes. We are not focusing on the current implementation. As mentioned in the
> cover letter, we are expecting Jean-Philippe's SVA patches:
> git://linux-arm.org/linux-jpb.
>
> > Anyhow, I don't think a single instance of hardware should justify an
> > entire new subsystem. Subsystems are hard to make and without multiple
> > hardware examples there is no way to expect that it would cover any
> > future use cases.
>
> Yes. That's our first expectation. We can keep it with our driver. But because
> there is no user-driver support for any accelerator in the mainline kernel
> (even the well-known QuickAssist has to be maintained out of tree), we are
> trying to see if people are interested in working together to solve the
> problem.
>
> >
> > If all your driver needs is to mmap some PCI bar space, route
> > interrupts and do DMA mapping then mediated VFIO is probably a good
> > choice.
>
> Yes. That is what is done in our RFCv1/v2. But we accepted Jerome's opinion
> and tried not to add complexity to the mm subsystem.
>
> >
> > If it needs to do a bunch of other stuff, not related to PCI bar
> > space, interrupts and DMA mapping (ie special code for compression,
> > crypto, AI, whatever) then you should probably do what Jerome said and
> > make a drivers/char/hisilicon_foo_bar.c that exposes just what your
> > hardware does.
>
> Yes. If no other accelerator driver writer is interested. That is the
> expectation:)
>
> But we would really like to have a public solution here. Consider this
> scenario:
>
> You create some connections (queues) to the NIC, the RSA engine, and the AI
> engine. Then you get data directly from the NIC and pass the pointer to the
> RSA engine for decryption.

Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Jason Gunthorpe
On Tue, Nov 20, 2018 at 11:07:02AM +0800, Kenneth Lee wrote:
> On Mon, Nov 19, 2018 at 11:49:54AM -0700, Jason Gunthorpe wrote:
> > 
> > On Mon, Nov 19, 2018 at 05:14:05PM +0800, Kenneth Lee wrote:
> >  
> > > If the hardware cannot share page tables with the CPU, we then need to
> > > have some way to change the device page table. This is what happens in
> > > ODP: it invalidates the page table in the device upon the mmu_notifier
> > > callback. But this cannot solve the COW problem: if the user process A
> > > shares a page P with the device, and A forks a new process B and
> > > continues to write to the page, then by COW, process B will keep the
> > > page P, while A will get a new page P'. But you have no way to let the
> > > device know it should use P' rather than P.
> > 
> > Is this true? I thought mmu_notifiers covered all these cases.
> > 
> > The mmu_notifier for A should fire if B causes the physical address of
> > A's pages to change via COW.
> > 
> > And this causes the device page tables to re-synchronize.
> 
> I don't see such code. The current do_cow_fault() implementation has nothing
> to do with mmu_notifier.

Well, that sure sounds like it would be a bug in mmu_notifiers..

But considering Jean's SVA stuff seems based on mmu notifiers, I have
a hard time believing that it has any different behavior from RDMA's
ODP, and if it does have different behavior, then it is probably just
a bug in the ODP implementation.

> > > In WarpDrive/uacce, we make this simple. If you support an IOMMU and it
> > > supports SVM/SVA, everything will be fine, just like ODP implicit mode.
> > > And you don't need to write any code for that, because it has been done
> > > by the IOMMU framework. If it
> > 
> > Looks like the IOMMU code uses mmu_notifier, so it is identical to
> > IB's ODP. The only difference is that IB tends to have the IOMMU page
> > table in the device, not in the CPU.
> > 
> > The only case I know of that is different is the new-fangled CAPI
> > stuff where the IOMMU can directly use the CPU's page table and the
> > IOMMU page table (in device or CPU) is eliminated.
>
> Yes. We are not focusing on the current implementation. As mentioned in the
> cover letter, we are expecting Jean-Philippe's SVA patches:
> git://linux-arm.org/linux-jpb.

This SVA stuff does not look comparable to CAPI as it still requires
maintaining separate IOMMU page tables.

Also, those patches from Jean have a lot of references to
mmu_notifiers (ie look at iommu_mmu_notifier).

Are you really sure it is actually any different at all?

> > Anyhow, I don't think a single instance of hardware should justify an
> > entire new subsystem. Subsystems are hard to make and without multiple
> > hardware examples there is no way to expect that it would cover any
> > future use cases.
> 
> Yes. That's our first expectation. We can keep it with our driver. But because
> there is no user-driver support for any accelerator in the mainline kernel
> (even the well-known QuickAssist has to be maintained out of tree), we are
> trying to see if people are interested in working together to solve the
> problem.

Well, you should come with patches ack'ed by these other groups.

> > If all your driver needs is to mmap some PCI bar space, route
> > interrupts and do DMA mapping then mediated VFIO is probably a good
> > choice. 
> 
> Yes. That is what is done in our RFCv1/v2. But we accepted Jerome's opinion
> and tried not to add complexity to the mm subsystem.

Why would a mediated VFIO driver touch the mm subsystem? Sounds like
you don't have a VFIO driver if it needs to do stuff like that...

Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Kenneth Lee
On Mon, Nov 19, 2018 at 11:49:54AM -0700, Jason Gunthorpe wrote:
> 
> On Mon, Nov 19, 2018 at 05:14:05PM +0800, Kenneth Lee wrote:
>  
> > If the hardware cannot share page tables with the CPU, we then need to
> > have some way to change the device page table. This is what happens in
> > ODP: it invalidates the page table in the device upon the mmu_notifier
> > callback. But this cannot solve the COW problem: if the user process A
> > shares a page P with the device, and A forks a new process B and
> > continues to write to the page, then by COW, process B will keep the
> > page P, while A will get a new page P'. But you have no way to let the
> > device know it should use P' rather than P.
> 
> Is this true? I thought mmu_notifiers covered all these cases.
> 
> The mmu_notifier for A should fire if B causes the physical address of
> A's pages to change via COW.
> 
> And this causes the device page tables to re-synchronize.

I don't see such code. The current do_cow_fault() implementation has nothing to
do with mmu_notifier.
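
For reference, this is roughly what the notifier hook being discussed looks
like from a driver's point of view (a minimal sketch against the 4.19-era
mmu_notifier API; registration error handling and teardown are elided):

#include <linux/mmu_notifier.h>
#include <linux/sched.h>

static void acc_invalidate_range(struct mmu_notifier *mn,
                                 struct mm_struct *mm,
                                 unsigned long start, unsigned long end)
{
        /* tell the device/IOMMU to drop translations for [start, end);
         * this is the hook that would have to fire on COW for the device
         * to re-synchronize, as suggested above */
}

static const struct mmu_notifier_ops acc_mn_ops = {
        .invalidate_range = acc_invalidate_range,
};

static struct mmu_notifier acc_mn = { .ops = &acc_mn_ops };

static int acc_watch_current_mm(void)
{
        return mmu_notifier_register(&acc_mn, current->mm);
}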

> 
> > In WarpDrive/uacce, we make this simple. If you support an IOMMU and it
> > supports SVM/SVA, everything will be fine, just like ODP implicit mode. And
> > you don't need to write any code for that, because it has been done by the
> > IOMMU framework. If it
> 
> Looks like the IOMMU code uses mmu_notifier, so it is identical to
> IB's ODP. The only difference is that IB tends to have the IOMMU page
> table in the device, not in the CPU.
> 
> The only case I know of that is different is the new-fangled CAPI
> stuff where the IOMMU can directly use the CPU's page table and the
> IOMMU page table (in device or CPU) is eliminated.
> 

Yes. We are not focusing on the current implementation. As mentioned in the
cover letter, we are expecting Jean-Philippe's SVA patches:
git://linux-arm.org/linux-jpb.

> Anyhow, I don't think a single instance of hardware should justify an
> entire new subsystem. Subsystems are hard to make and without multiple
> hardware examples there is no way to expect that it would cover any
> future use cases.

Yes. That's our first expectation. We can keep it with our driver. But because
there is no user-driver support for any accelerator in the mainline kernel
(even the well-known QuickAssist has to be maintained out of tree), we are
trying to see if people are interested in working together to solve the
problem.

> 
> If all your driver needs is to mmap some PCI bar space, route
> interrupts and do DMA mapping then mediated VFIO is probably a good
> choice. 

Yes. That is what is done in our RFCv1/v2. But we accepted Jerome's opinion and
tried not to add complexity to the mm subsystem.

> 
> If it needs to do a bunch of other stuff, not related to PCI bar
> space, interrupts and DMA mapping (ie special code for compression,
> crypto, AI, whatever) then you should probably do what Jerome said and
> make a drivers/char/hisilicon_foo_bar.c that exposes just what your
> hardware does.

Yes, if no other accelerator driver writer is interested, that is the
expectation :)

But we would really like to have a public solution here. Consider this scenario:

You create some connections (queues) to the NIC, the RSA engine, and the AI
engine. Then you get data directly from the NIC and pass the pointer to the
RSA engine for decryption. The CPU then finishes some data processing and
passes the result through to the AI engine for CNN calculation... This will
need a place to maintain the same address space by some means.

It is not complex, but it is helpful.

> 
> If you have networking involved in here then consider RDMA,
> particularly if this functionality is already part of the same
> hardware that the hns infiniband driver is servicing.
> 
> 'computational MRs' are a reasonable approach to a side-car offload of
> already existing RDMA support.

OK. Thanks. I will spend some time on it. But personally, I really don't like
RDMA's complexity. I cannot even try a single function without some expensive
hardware and a complex connection setup in the lab. This is not a very
open-source-friendly way.

> 
> Jason



Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Kenneth Lee
On Mon, Nov 19, 2018 at 12:48:01PM +0200, Leon Romanovsky wrote:
> On Mon, Nov 19, 2018 at 05:19:10PM +0800, Kenneth Lee wrote:
> > On Mon, Nov 19, 2018 at 05:14:05PM +0800, Kenneth Lee wrote:
> > > On Thu, Nov 15, 2018 at 04:54:55PM +0200, Leon Romanovsky wrote:
> > > > On Thu, Nov 15, 2018 at 04:51:09PM +0800, Kenneth Lee wrote:
> > > > > On Wed, Nov 14, 2018 at 06:00:17PM +0200, Leon Romanovsky wrote:

Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Jason Gunthorpe
On Mon, Nov 19, 2018 at 04:33:20PM -0500, Jerome Glisse wrote:
> On Mon, Nov 19, 2018 at 02:26:38PM -0700, Jason Gunthorpe wrote:
> > On Mon, Nov 19, 2018 at 03:26:15PM -0500, Jerome Glisse wrote:
> > > On Mon, Nov 19, 2018 at 01:11:56PM -0700, Jason Gunthorpe wrote:
> > > > On Mon, Nov 19, 2018 at 02:46:32PM -0500, Jerome Glisse wrote:
> > > > 
> > > > > > ?? How can O_DIRECT be fine but RDMA not? They use exactly the same
> > > > > > get_user_pages flow, right? Can we do what O_DIRECT does in RDMA and
> > > > > > be fine too?
> > > > > > 
> > > > > > AFAIK the only difference is the length of the race window. You'd
> > > > > > have to fork and fault during the shorter time O_DIRECT has
> > > > > > get_user_pages open.
> > > > > 
> > > > > Well in O_DIRECT case there is only one page table, the CPU
> > > > > page table and it gets updated during fork() so there is an
> > > > > ordering there and the race window is small.
> > > > 
> > > > Not really, in O_DIRECT case there is another 'page table', we just
> > > > call it a DMA scatter/gather list and it is sent directly to the block
> > > > device's DMA HW. The sgl plays exactly the same role as the various HW
> > > > page list data structures that underlie RDMA MRs.
> > > > 
> > > > It is not a page table that matters here, it is if the DMA address of
> > > > the page is active for DMA on HW.
> > > > 
> > > > Like you say, the only difference is that the race is hopefully small
> > > > with O_DIRECT (though that is not really small, NVMeof for instance
> > > > has windows as large as connection timeouts, if you try hard enough)
> > > > 
> > > > So we probably can trigger this trouble with O_DIRECT and fork(), and
> > > > I would call it a bug :(
> > > 
> > > I cannot think of any scenario that would be a bug with O_DIRECT.
> > > Do you have one in mind? When you fork() and do another syscall that
> > > affects the memory of your process in another thread, you should
> > > expect inconsistent results. The kernel is not here to provide a fully
> > > safe environment to the user; the user can shoot itself in the foot and
> > > that's fine, as long as it only affects the process itself and no one
> > > else. We should not be in the business of making everything baby
> > > proof :)
> > 
> > Sure, I setup AIO with O_DIRECT and launch a read.
> > 
> > Then I fork and dirty the READ target memory using the CPU in the
> > child.
> > 
> > As you described in this case the fork will retain the physical page
> > that is undergoing O_DIRECT DMA, and the parent gets a new copy'd page.
> > 
> > The DMA completes, and the child gets the DMA'd to page. The parent
> > gets an unchanged copy'd page.
> > 
> > The parent gets the AIO completion, but can't see the data.
> > 
> > I'd call that a bug with O_DIRECT. The only correct outcome is that
> > the parent will always see the O_DIRECT data. Fork should not cause
> > the *parent* to malfunction. I agree the child cannot make any
> > prediction what memory it will see.
> > 
> > I assume the same flow is possible using threads and read()..
> > 
> > It is really no different than the RDMA bug with fork.
> > 
> 
> Yes and that's expected behavior :) If you fork() and have anything
> still in flight at the time of fork that can change your process address
> space (including data in it), then all bets are off.
> 
> At least this is my reading of the fork() syscall.

Not mine.. I can't think of anything else that would have this
behavior.

All traditional syscalls, will properly dirty the pages of the
parent. ie if I call read() in a thread and do fork in another thread,
then not seeing the data after read() completes is clearly a bug. All
other syscalls are the same.

It is bonkers that opening the file with O_DIRECT would change this
basic behavior. I'm calling it a bug :)

Jason
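
The scenario above can be written down concretely. A sketch with POSIX AIO
follows; whether it actually misbehaves depends on timing and kernel version,
so treat it as an illustration of the race, not a reliable reproducer (the
file path is hypothetical, and error handling is elided):

#define _GNU_SOURCE	/* for O_DIRECT */
#include <aio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
        char *buf;
        struct aiocb cb;
        const struct aiocb *list[1];

        /* O_DIRECT requires an aligned buffer */
        posix_memalign((void **)&buf, 4096, 4096);
        memset(buf, 0, 4096);

        int fd = open("/tmp/testfile", O_RDONLY | O_DIRECT);

        memset(&cb, 0, sizeof(cb));
        cb.aio_fildes = fd;
        cb.aio_buf = buf;
        cb.aio_nbytes = 4096;
        cb.aio_offset = 0;
        aio_read(&cb);                  /* DMA target: buf */

        if (fork() == 0) {
                buf[0] = 'x';           /* child dirties the page -> COW */
                _exit(0);
        }

        list[0] = &cb;
        aio_suspend(list, 1, NULL);     /* wait for the read to complete */

        /* if COW broke the DMA target, the parent may not see the data */
        printf("parent sees: %02x\n", (unsigned char)buf[0]);
        wait(NULL);
        return 0;
}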


Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Jerome Glisse
On Mon, Nov 19, 2018 at 02:26:38PM -0700, Jason Gunthorpe wrote:
> On Mon, Nov 19, 2018 at 03:26:15PM -0500, Jerome Glisse wrote:
> > On Mon, Nov 19, 2018 at 01:11:56PM -0700, Jason Gunthorpe wrote:
> > > On Mon, Nov 19, 2018 at 02:46:32PM -0500, Jerome Glisse wrote:
> > > 
> > > > > ?? How can O_DIRECT be fine but RDMA not? They use exactly the same
> > > > > get_user_pages flow, right? Can we do what O_DIRECT does in RDMA and
> > > > > be fine too?
> > > > > 
> > > > > AFAIK the only difference is the length of the race window. You'd have
> > > > > to fork and fault during the shorter time O_DIRECT has get_user_pages
> > > > > open.
> > > > 
> > > > Well in O_DIRECT case there is only one page table, the CPU
> > > > page table and it gets updated during fork() so there is an
> > > > ordering there and the race window is small.
> > > 
> > > Not really, in O_DIRECT case there is another 'page table', we just
> > > call it a DMA scatter/gather list and it is sent directly to the block
> > > device's DMA HW. The sgl plays exactly the same role as the various HW
> > > page list data structures that underlie RDMA MRs.
> > > 
> > > It is not a page table that matters here, it is if the DMA address of
> > > the page is active for DMA on HW.
> > > 
> > > Like you say, the only difference is that the race is hopefully small
> > > with O_DIRECT (though that is not really small, NVMeof for instance
> > > has windows as large as connection timeouts, if you try hard enough)
> > > 
> > > So we probably can trigger this trouble with O_DIRECT and fork(), and
> > > I would call it a bug :(
> > 
> > I cannot think of any scenario that would be a bug with O_DIRECT.
> > Do you have one in mind? When you fork() and do another syscall that
> > affects the memory of your process in another thread, you should
> > expect inconsistent results. The kernel is not here to provide a
> > fully safe environment to the user; the user can shoot itself in the
> > foot and that's fine as long as it only affects the process itself
> > and no one else. We should not be in the business of making
> > everything baby-proof :)
> 
> Sure, I set up AIO with O_DIRECT and launch a read.
> 
> Then I fork and dirty the READ target memory using the CPU in the
> child.
> 
> As you described, in this case the fork will retain the physical page
> that is undergoing O_DIRECT DMA, and the parent gets a new copied page.
> 
> The DMA completes, and the child gets the DMA'd-to page. The parent
> gets an unchanged copied page.
> 
> The parent gets the AIO completion, but can't see the data.
> 
> I'd call that a bug with O_DIRECT. The only correct outcome is that
> the parent will always see the O_DIRECT data. Fork should not cause
> the *parent* to malfunction. I agree the child cannot make any
> prediction about what memory it will see.
> 
> I assume the same flow is possible using threads and read()..
> 
> It is really no different than the RDMA bug with fork.
> 

Yes, and that's expected behavior :) If you fork() and have anything
still in flight at the time of fork that can change your process
address space (including data in it), then all bets are off.

At least this is my reading of the fork() syscall.

Cheers,
Jérôme


Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Jason Gunthorpe
On Mon, Nov 19, 2018 at 03:26:15PM -0500, Jerome Glisse wrote:
> On Mon, Nov 19, 2018 at 01:11:56PM -0700, Jason Gunthorpe wrote:
> > On Mon, Nov 19, 2018 at 02:46:32PM -0500, Jerome Glisse wrote:
> > 
> > > > ?? How can O_DIRECT be fine but RDMA not? They use exactly the same
> > > > get_user_pages flow, right? Can we do what O_DIRECT does in RDMA and
> > > > be fine too?
> > > > 
> > > > AFAIK the only difference is the length of the race window. You'd have
> > > > to fork and fault during the shorter time O_DIRECT has get_user_pages
> > > > open.
> > > 
> > > Well, in the O_DIRECT case there is only one page table, the CPU
> > > page table, and it gets updated during fork(), so there is an
> > > ordering there and the race window is small.
> > 
> > Not really, in the O_DIRECT case there is another 'page table': we just
> > call it a DMA scatter/gather list, and it is sent directly to the block
> > device's DMA HW. The sgl plays exactly the same role as the various HW
> > page list data structures that underlie RDMA MRs.
> > 
> > It is not the page table that matters here, it is whether the DMA
> > address of the page is active for DMA on HW.
> > 
> > Like you say, the only difference is that the race is hopefully small
> > with O_DIRECT (though it is not really small; NVMeof, for instance, has
> > windows as large as connection timeouts, if you try hard enough).
> > 
> > So we probably can trigger this trouble with O_DIRECT and fork(), and
> > I would call it a bug :(
> 
> I cannot think of any scenario that would be a bug with O_DIRECT.
> Do you have one in mind? When you fork() and do another syscall that
> affects the memory of your process in another thread, you should
> expect inconsistent results. The kernel is not here to provide a
> fully safe environment to the user; the user can shoot itself in the
> foot and that's fine as long as it only affects the process itself
> and no one else. We should not be in the business of making
> everything baby-proof :)

Sure, I set up AIO with O_DIRECT and launch a read.

Then I fork and dirty the READ target memory using the CPU in the
child.

As you described, in this case the fork will retain the physical page
that is undergoing O_DIRECT DMA, and the parent gets a new copied page.

The DMA completes, and the child gets the DMA'd-to page. The parent
gets an unchanged copied page.

The parent gets the AIO completion, but can't see the data.

I'd call that a bug with O_DIRECT. The only correct outcome is that
the parent will always see the O_DIRECT data. Fork should not cause
the *parent* to malfunction. I agree the child cannot make any
prediction about what memory it will see.

I assume the same flow is possible using threads and read()..

It is really no different than the RDMA bug with fork.

Jason


Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Jerome Glisse
On Mon, Nov 19, 2018 at 01:11:56PM -0700, Jason Gunthorpe wrote:
> On Mon, Nov 19, 2018 at 02:46:32PM -0500, Jerome Glisse wrote:
> 
> > > ?? How can O_DIRECT be fine but RDMA not? They use exactly the same
> > > get_user_pages flow, right? Can we do what O_DIRECT does in RDMA and
> > > be fine too?
> > > 
> > > AFAIK the only difference is the length of the race window. You'd have
> > > to fork and fault during the shorter time O_DIRECT has get_user_pages
> > > open.
> > 
> > Well, in the O_DIRECT case there is only one page table, the CPU
> > page table, and it gets updated during fork(), so there is an
> > ordering there and the race window is small.
> 
> Not really, in the O_DIRECT case there is another 'page table': we just
> call it a DMA scatter/gather list, and it is sent directly to the block
> device's DMA HW. The sgl plays exactly the same role as the various HW
> page list data structures that underlie RDMA MRs.
> 
> It is not the page table that matters here, it is whether the DMA
> address of the page is active for DMA on HW.
> 
> Like you say, the only difference is that the race is hopefully small
> with O_DIRECT (though it is not really small; NVMeof, for instance, has
> windows as large as connection timeouts, if you try hard enough).
> 
> So we probably can trigger this trouble with O_DIRECT and fork(), and
> I would call it a bug :(

I cannot think of any scenario that would be a bug with O_DIRECT.
Do you have one in mind? When you fork() and do another syscall that
affects the memory of your process in another thread, you should
expect inconsistent results. The kernel is not here to provide a
fully safe environment to the user; the user can shoot itself in the
foot and that's fine as long as it only affects the process itself
and no one else. We should not be in the business of making
everything baby-proof :)

> 
> > > Why? Keep track in each mm of whether there are any active
> > > get_user_pages FOLL_WRITE pages in the mm; if yes, then sweep the
> > > VMAs and fix the issue for the FOLL_WRITE pages.
> > 
> > This has a cost and you don't want to pay it for O_DIRECT. I am pretty
> > sure that any such patch to modify the fork() code path would be
> > rejected. At least I would not like it and would vote against it.
> 
> I was thinking the incremental cost on top of what John is already
> doing would be very small in the common case and only be triggered in
> cases that matter (which apps should avoid anyhow).

What John is addressing has nothing to do with fork(); it has to do with
GUP and filesystem pages. More specifically, after page_mkclean() all
filesystems expect that the page content is stable (i.e. no one writes
to the page); with GUP and hardware (DIRECT_IO too) this is not
necessarily the case.

So John is trying to fix that, not trying to make fork() baby-proof
AFAICT :)

I would rather keep saying that you should expect weird things with RDMA
and VFIO when doing fork() than try to work around this in the kernel.

Better behavior through hardware is what we should aim for (CAPI, ODP,
...).

Jérôme


Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Jason Gunthorpe
On Mon, Nov 19, 2018 at 02:46:32PM -0500, Jerome Glisse wrote:

> > ?? How can O_DIRECT be fine but RDMA not? They use exactly the same
> > get_user_pages flow, right? Can we do what O_DIRECT does in RDMA and
> > be fine too?
> > 
> > AFAIK the only difference is the length of the race window. You'd have
> > to fork and fault during the shorter time O_DIRECT has get_user_pages
> > open.
> 
> Well, in the O_DIRECT case there is only one page table, the CPU
> page table, and it gets updated during fork(), so there is an
> ordering there and the race window is small.

Not really, in the O_DIRECT case there is another 'page table': we just
call it a DMA scatter/gather list, and it is sent directly to the block
device's DMA HW. The sgl plays exactly the same role as the various HW
page list data structures that underlie RDMA MRs.

It is not the page table that matters here, it is whether the DMA
address of the page is active for DMA on HW.
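
As a rough sketch of that shared flow (4.19-era kernel APIs, error
handling omitted; dev, uaddr, pages and sgt are assumed to come from the
caller), both O_DIRECT and umem boil down to something like:

/* Pin user pages with GUP, build an sgl, map it for DMA. Once the
 * DMA addresses are handed to the device, later CPU page table
 * changes (fork COW included) are invisible to the HW. */
static int pin_and_map(struct device *dev, unsigned long uaddr,
		       int npages, struct page **pages,
		       struct sg_table *sgt)
{
	/* write=1: the device may write; the physical pages are now
	 * fixed for HW no matter what COW later does on the CPU side */
	int pinned = get_user_pages_fast(uaddr, npages, 1, pages);

	if (pinned < 0)
		return pinned;

	/* the sgl is the 'other page table' described above */
	sg_alloc_table_from_pages(sgt, pages, pinned, 0,
				  (size_t)pinned << PAGE_SHIFT, GFP_KERNEL);
	return dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
}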

Like you say, the only difference is that the race is hopefully small
with O_DIRECT (though it is not really small; NVMeof, for instance, has
windows as large as connection timeouts, if you try hard enough).

So we probably can trigger this trouble with O_DIRECT and fork(), and
I would call it a bug :(

> > Why? Keep track in each mm of whether there are any active
> > get_user_pages FOLL_WRITE pages in the mm; if yes, then sweep the
> > VMAs and fix the issue for the FOLL_WRITE pages.
> 
> This has a cost and you don't want to pay it for O_DIRECT. I am pretty
> sure that any such patch to modify the fork() code path would be
> rejected. At least I would not like it and would vote against it.

I was thinking the incremental cost on top of what John is already
doing would be very small in the common case and only be triggered in
cases that matter (which apps should avoid anyhow).

Jason


Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Jerome Glisse
On Mon, Nov 19, 2018 at 12:27:02PM -0700, Jason Gunthorpe wrote:
> On Mon, Nov 19, 2018 at 02:17:21PM -0500, Jerome Glisse wrote:
> > On Mon, Nov 19, 2018 at 11:53:33AM -0700, Jason Gunthorpe wrote:
> > > On Mon, Nov 19, 2018 at 01:42:16PM -0500, Jerome Glisse wrote:
> > > > On Mon, Nov 19, 2018 at 11:27:52AM -0700, Jason Gunthorpe wrote:
> > > > > On Mon, Nov 19, 2018 at 11:48:54AM -0500, Jerome Glisse wrote:
> > > > > 
> > > > > > Just to comment on this, any InfiniBand driver which uses umem
> > > > > > and does not have ODP (here ODP for me means listening to mmu
> > > > > > notifiers, so all InfiniBand drivers except mlx5) will be
> > > > > > affected by the same issue AFAICT.
> > > > > > 
> > > > > > AFAICT there is no special thing happening after fork() inside
> > > > > > any of those drivers. So if the parent creates a umem MR before
> > > > > > fork() and programs hardware with it, then after fork() the
> > > > > > parent might start using a new page for the umem range while
> > > > > > the old memory is used by the child. The reverse is also true
> > > > > > (parent using old memory and child new memory); bottom line,
> > > > > > you cannot predict which memory the child or the parent will
> > > > > > use for the range after fork().
> > > > > > 
> > > > > > So no matter what you consider the child or the parent, what
> > > > > > the hw will use for the MR is unlikely to match what the CPU
> > > > > > uses for the same virtual address. In other words:
> > > > > > 
> > > > > > Before fork:
> > > > > > CPU parent: virtual addr ptr1 -> physical address = 0xCAFE
> > > > > > HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > > > > > 
> > > > > > Case 1:
> > > > > > CPU parent: virtual addr ptr1 -> physical address = 0xCAFE
> > > > > > CPU child:  virtual addr ptr1 -> physical address = 0xDEAD
> > > > > > HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > > > > > 
> > > > > > Case 2:
> > > > > > CPU parent: virtual addr ptr1 -> physical address = 0xBEEF
> > > > > > CPU child:  virtual addr ptr1 -> physical address = 0xCAFE
> > > > > > HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > > > > 
> > > > > IIRC this is solved in IB by automatically calling
> > > > > madvise(MADV_DONTFORK) before creating the MR.
> > > > > 
> > > > > MADV_DONTFORK
> > > > >   .. This is useful to prevent copy-on-write semantics from
> > > > >   changing the physical location of a page if the parent writes
> > > > >   to it after a fork(2) ..
> > > > 
> > > > This would work around the issue, but it is not transparent, i.e. a
> > > > range marked with DONTFORK no longer behaves as expected from the
> > > > application's point of view.
> > > 
> > > Do you know what the difference is? The man page really gives no
> > > hint..
> > > 
> > > Does it sometimes unmap the pages during fork?
> > 
> > It is handled in kernel/fork.c, look for DONTCOPY; basically it just
> > leaves an empty page table in the child process, so the child will
> > have to fault in new pages. This also means that the child will get 0
> > as the initial value for all memory addresses under DONTCOPY/DONTFORK,
> > which breaks the application's expectation of what fork() does.
> 
> Hum, I wonder why this API was selected then..

Because there is nothing else? :)

> 
> > > I actually wonder if the kernel is a bit broken here, we have the same
> > > problem with O_DIRECT and other stuff, right?
> > 
> > No it is not, O_DIRECT is fine. The only corner case I can think
> > of with O_DIRECT is one thread launching an O_DIRECT that writes
> > to private anonymous memory (other O_DIRECT cases do not matter)
> > while another thread calls fork(); then what the child gets can be
> > undefined, i.e. either it gets the data from before the O_DIRECT
> > finished or it gets the result of the O_DIRECT. But this is really
> > what you should expect when doing such things without synchronization.
> > 
> > So O_DIRECT is fine.
> 
> ?? How can O_DIRECT be fine but RDMA not? They use exactly the same
> get_user_pages flow, right? Can we do what O_DIRECT does in RDMA and
> be fine too?
> 
> AFAIK the only difference is the length of the race window. You'd have
> to fork and fault during the shorter time O_DIRECT has get_user_pages
> open.

Well, in the O_DIRECT case there is only one page table, the CPU
page table, and it gets updated during fork(), so there is an
ordering there and the race window is small.

Moreover, programmers know that they can get in trouble if they
do things like fork() and don't synchronize their threads
with each other. So while some weird things can happen with
O_DIRECT, it is unlikely (very small race window) and if
it happens it is well within the expected behavior.

For hardware, the race window is the same as the process
lifetime, so it can be days, months, years ... Once the
hardware has programmed its page tables it will never
see any update (again, mlx5 ODP is the exception here).

This is where weird behavior ("issues") can arise. Because

Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Jason Gunthorpe
On Mon, Nov 19, 2018 at 02:17:21PM -0500, Jerome Glisse wrote:
> On Mon, Nov 19, 2018 at 11:53:33AM -0700, Jason Gunthorpe wrote:
> > On Mon, Nov 19, 2018 at 01:42:16PM -0500, Jerome Glisse wrote:
> > > On Mon, Nov 19, 2018 at 11:27:52AM -0700, Jason Gunthorpe wrote:
> > > > On Mon, Nov 19, 2018 at 11:48:54AM -0500, Jerome Glisse wrote:
> > > > 
> > > > > Just to comment on this, any InfiniBand driver which uses umem
> > > > > and does not have ODP (here ODP for me means listening to mmu
> > > > > notifiers, so all InfiniBand drivers except mlx5) will be affected
> > > > > by the same issue AFAICT.
> > > > > 
> > > > > AFAICT there is no special thing happening after fork() inside any
> > > > > of those drivers. So if the parent creates a umem MR before fork()
> > > > > and programs hardware with it, then after fork() the parent might
> > > > > start using a new page for the umem range while the old memory is
> > > > > used by the child. The reverse is also true (parent using old
> > > > > memory and child new memory); bottom line, you cannot predict
> > > > > which memory the child or the parent will use for the range after
> > > > > fork().
> > > > > 
> > > > > So no matter what you consider the child or the parent, what the
> > > > > hw will use for the MR is unlikely to match what the CPU uses for
> > > > > the same virtual address. In other words:
> > > > > 
> > > > > Before fork:
> > > > > CPU parent: virtual addr ptr1 -> physical address = 0xCAFE
> > > > > HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > > > > 
> > > > > Case 1:
> > > > > CPU parent: virtual addr ptr1 -> physical address = 0xCAFE
> > > > > CPU child:  virtual addr ptr1 -> physical address = 0xDEAD
> > > > > HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > > > > 
> > > > > Case 2:
> > > > > CPU parent: virtual addr ptr1 -> physical address = 0xBEEF
> > > > > CPU child:  virtual addr ptr1 -> physical address = 0xCAFE
> > > > > HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > > > 
> > > > IIRC this is solved in IB by automatically calling
> > > > madvise(MADV_DONTFORK) before creating the MR.
> > > > 
> > > > MADV_DONTFORK
> > > >   .. This is useful to prevent copy-on-write semantics from changing the
> > > >   physical location of a page if the parent writes to it after a
> > > >   fork(2) ..
> > > 
> > > This would work around the issue, but it is not transparent, i.e. a
> > > range marked with DONTFORK no longer behaves as expected from the
> > > application's point of view.
> > 
> > Do you know what the difference is? The man page really gives no
> > hint..
> > 
> > Does it sometimes unmap the pages during fork?
> 
> It is handled in kernel/fork.c, look for DONTCOPY; basically it just
> leaves an empty page table in the child process, so the child will
> have to fault in new pages. This also means that the child will get 0
> as the initial value for all memory addresses under DONTCOPY/DONTFORK,
> which breaks the application's expectation of what fork() does.

Hum, I wonder why this API was selected then..

> > I actually wonder if the kernel is a bit broken here, we have the same
> > problem with O_DIRECT and other stuff, right?
> 
> No it is not, O_DIRECT is fine. The only corner case I can think
> of with O_DIRECT is one thread launching an O_DIRECT that writes
> to private anonymous memory (other O_DIRECT cases do not matter)
> while another thread calls fork(); then what the child gets can be
> undefined, i.e. either it gets the data from before the O_DIRECT
> finished or it gets the result of the O_DIRECT. But this is really
> what you should expect when doing such things without synchronization.
> 
> So O_DIRECT is fine.

?? How can O_DIRECT be fine but RDMA not? They use exactly the same
get_user_pages flow, right? Can we do what O_DIRECT does in RDMA and
be fine too?

AFAIK the only difference is the length of the race window. You'd have
to fork and fault during the shorter time O_DIRECT has get_user_pages
open.

> > Really, if I have a get_user_pages FOLL_WRITE on a page and we fork,
> > then shouldn't the COW immediately be broken during the fork?
> > 
> > The kernel can't guarantee that an ongoing DMA will not write to those
> > pages, and it breaks the fork semantics to write to both processes.
> 
> Fixing that would incur a high cost: we would need to grow struct page
> and to copy potentially gigabytes of memory during fork() ... this
> would be a serious performance regression for many folks just to work
> around an abuse by a device driver. So I don't think anything on that
> front would be welcome.

Why? Keep track in each mm of whether there are any active
get_user_pages FOLL_WRITE pages in the mm; if yes, then sweep the
VMAs and fix the issue for the FOLL_WRITE pages.

John is already working on being able to detect pages under GUP, so it
seems like a small step..

Since nearly all cases of fork don't have a GUP FOLL_WRITE active
there would be no performance hit.
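
To make the shape of that idea concrete, a purely hypothetical sketch
(neither mm_gup_write_count nor break_cow_for_gup_pages() exists in the
kernel; this is only pseudo-code for the proposal):

/* Hypothetical fork()-time fixup: free in the common case, sweep
 * only when the mm has active FOLL_WRITE pins. */
static void fork_fixup_gup_pages(struct mm_struct *src_mm)
{
	struct vm_area_struct *vma;

	/* common case: no GUP FOLL_WRITE pages, fork() pays nothing */
	if (!atomic_read(&src_mm->mm_gup_write_count))
		return;

	/* rare case: eagerly break COW for pinned pages so in-flight
	 * DMA keeps landing in the parent's view of memory */
	for (vma = src_mm->mmap; vma; vma = vma->vm_next)
		break_cow_for_gup_pages(vma);
}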

> umem without proper ODP and VFIO are the only bad users I know of (for
> VFIO you can argue

Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Jerome Glisse
On Mon, Nov 19, 2018 at 07:19:04PM +, Christopher Lameter wrote:
> On Mon, 19 Nov 2018, Jerome Glisse wrote:
> 
> > > IIRC this is solved in IB by automatically calling
> > > madvise(MADV_DONTFORK) before creating the MR.
> > >
> > > MADV_DONTFORK
> > >   .. This is useful to prevent copy-on-write semantics from changing the
> > >   physical location of a page if the parent writes to it after a
> > >   fork(2) ..
> >
> > This would work around the issue but this is not transparent ie
> > range marked with DONTFORK no longer behave as expected from the
> > application point of view.
> 
> Why would anyone expect a range registered via an MR to behave as
> normal? Device I/O is going on into that range. Memory is already
> special.
> 
> 
> > Also it relies on userspace doing the right thing (which is not
> > something I usually trust :)).
> 
> This has been established practice for 15 or so years in a couple of
> use cases. Again, user space already has to be doing special things in
> order to handle RDMA in that area.

Yes, RDMA has an existing historical track record, and thus people
should now be aware of its limitations. What I am fighting against
is new additions to the kernel that pretend to do SVA (shared virtual
address) while their hardware is not really doing SVA. SVA with an
IOMMU and ATS/PASID is fine, and SVA in software with a device driver
that abides by mmu notifiers is fine. Anything else is not.

So I am just worrying about new users, making sure they understand
what is happening and do not sell their users something false.

Cheers,
Jérôme


Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Christopher Lameter
On Mon, 19 Nov 2018, Jerome Glisse wrote:

> > IIRC this is solved in IB by automatically calling
> > madvise(MADV_DONTFORK) before creating the MR.
> >
> > MADV_DONTFORK
> >   .. This is useful to prevent copy-on-write semantics from changing the
> >   physical location of a page if the parent writes to it after a
> >   fork(2) ..
>
> This would work around the issue, but it is not transparent, i.e. a
> range marked with DONTFORK no longer behaves as expected from the
> application's point of view.

Why would anyone expect a range registered via an MR to behave as
normal? Device I/O is going on into that range. Memory is already
special.


> Also it relies on userspace doing the right thing (which is not
> something I usually trust :)).

This has been established practice for 15 or so years in a couple of
use cases. Again, user space already has to be doing special things in
order to handle RDMA in that area.




Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Jerome Glisse
On Mon, Nov 19, 2018 at 11:53:33AM -0700, Jason Gunthorpe wrote:
> On Mon, Nov 19, 2018 at 01:42:16PM -0500, Jerome Glisse wrote:
> > On Mon, Nov 19, 2018 at 11:27:52AM -0700, Jason Gunthorpe wrote:
> > > On Mon, Nov 19, 2018 at 11:48:54AM -0500, Jerome Glisse wrote:
> > > 
> > > > Just to comment on this, any InfiniBand driver which uses umem and
> > > > does not have ODP (here ODP for me means listening to mmu notifiers,
> > > > so all InfiniBand drivers except mlx5) will be affected by the same
> > > > issue AFAICT.
> > > > 
> > > > AFAICT there is no special thing happening after fork() inside any
> > > > of those drivers. So if the parent creates a umem MR before fork()
> > > > and programs hardware with it, then after fork() the parent might
> > > > start using a new page for the umem range while the old memory is
> > > > used by the child. The reverse is also true (parent using old memory
> > > > and child new memory); bottom line, you cannot predict which memory
> > > > the child or the parent will use for the range after fork().
> > > > 
> > > > So no matter what you consider the child or the parent, what the hw
> > > > will use for the MR is unlikely to match what the CPU uses for the
> > > > same virtual address. In other words:
> > > > 
> > > > Before fork:
> > > > CPU parent: virtual addr ptr1 -> physical address = 0xCAFE
> > > > HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > > > 
> > > > Case 1:
> > > > CPU parent: virtual addr ptr1 -> physical address = 0xCAFE
> > > > CPU child:  virtual addr ptr1 -> physical address = 0xDEAD
> > > > HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > > > 
> > > > Case 2:
> > > > CPU parent: virtual addr ptr1 -> physical address = 0xBEEF
> > > > CPU child:  virtual addr ptr1 -> physical address = 0xCAFE
> > > > HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > > 
> > > IIRC this is solved in IB by automatically calling
> > > madvise(MADV_DONTFORK) before creating the MR.
> > > 
> > > MADV_DONTFORK
> > >   .. This is useful to prevent copy-on-write semantics from changing the
> > >   physical location of a page if the parent writes to it after a
> > >   fork(2) ..
> > 
> > This would work around the issue, but it is not transparent, i.e. a
> > range marked with DONTFORK no longer behaves as expected from the
> > application's point of view.
> 
> Do you know what the difference is? The man page really gives no
> hint..
> 
> Does it sometimes unmap the pages during fork?

It is handled in kernel/fork.c, look for DONTCOPY; basically it just
leaves an empty page table in the child process, so the child will
have to fault in new pages. This also means that the child will get 0
as the initial value for all memory addresses under DONTCOPY/DONTFORK,
which breaks the application's expectation of what fork() does.

> 
> I actually wonder if the kernel is a bit broken here, we have the same
> problem with O_DIRECT and other stuff, right?

No it is not, O_DIRECT is fine. The only corner case I can think
of with O_DIRECT is one thread launching an O_DIRECT that writes
to private anonymous memory (other O_DIRECT cases do not matter)
while another thread calls fork(); then what the child gets can be
undefined, i.e. either it gets the data from before the O_DIRECT
finished or it gets the result of the O_DIRECT. But this is really
what you should expect when doing such things without synchronization.

So O_DIRECT is fine.

> 
> Really, if I have a get_user_pages FOLL_WRITE on a page and we fork,
> then shouldn't the COW immediately be broken during the fork?
> 
> The kernel can't guarantee that an ongoing DMA will not write to those
> pages, and it breaks the fork semantics to write to both processes.

Fixing that would incur a high cost: we would need to grow struct page
and to copy potentially gigabytes of memory during fork() ... this
would be a serious performance regression for many folks just to work
around an abuse by a device driver. So I don't think anything on that
front would be welcome.

umem without proper ODP and VFIO are the only bad users I know of (for
VFIO you can argue that it is part of the API contract and thus that
it is not an abuse, but it is not spelled out loud in the
documentation). I have been trying to push back on people trying to
push things that would make the same mistake, or at least making sure
they understand what is happening.

What really needs to happen is people fixing their hardware and doing
the right thing (good software engineer versus evil hardware engineer ;))

Cheers,
Jérôme


Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Leon Romanovsky
On Mon, Nov 19, 2018 at 01:42:16PM -0500, Jerome Glisse wrote:
> On Mon, Nov 19, 2018 at 11:27:52AM -0700, Jason Gunthorpe wrote:
> > On Mon, Nov 19, 2018 at 11:48:54AM -0500, Jerome Glisse wrote:
> >
> > > Just to comment on this, any InfiniBand driver which uses umem and
> > > does not have ODP (here ODP for me means listening to mmu notifiers,
> > > so all InfiniBand drivers except mlx5) will be affected by the same
> > > issue AFAICT.
> > >
> > > AFAICT there is no special thing happening after fork() inside any
> > > of those drivers. So if the parent creates a umem MR before fork()
> > > and programs hardware with it, then after fork() the parent might
> > > start using a new page for the umem range while the old memory is
> > > used by the child. The reverse is also true (parent using old memory
> > > and child new memory); bottom line, you cannot predict which memory
> > > the child or the parent will use for the range after fork().
> > >
> > > So no matter what you consider the child or the parent, what the hw
> > > will use for the MR is unlikely to match what the CPU uses for the
> > > same virtual address. In other words:
> > >
> > > Before fork:
> > > CPU parent: virtual addr ptr1 -> physical address = 0xCAFE
> > > HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > >
> > > Case 1:
> > > CPU parent: virtual addr ptr1 -> physical address = 0xCAFE
> > > CPU child:  virtual addr ptr1 -> physical address = 0xDEAD
> > > HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > >
> > > Case 2:
> > > CPU parent: virtual addr ptr1 -> physical address = 0xBEEF
> > > CPU child:  virtual addr ptr1 -> physical address = 0xCAFE
> > > HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> >
> > IIRC this is solved in IB by automatically calling
> > madvise(MADV_DONTFORK) before creating the MR.
> >
> > MADV_DONTFORK
> >   .. This is useful to prevent copy-on-write semantics from changing the
> >   physical location of a page if the parent writes to it after a
> >   fork(2) ..
>
> This would work around the issue, but it is not transparent, i.e. a
> range marked with DONTFORK no longer behaves as expected from the
> application's point of view.
>
> Also it relies on userspace doing the right thing (which is not
> something I usually trust :)).

The good thing is that we didn't see anyone who succeeded in running
the IB stack without our user space, which does the right thing under
the hood :).

>
> Cheers,
> Jérôme




Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Jason Gunthorpe
On Mon, Nov 19, 2018 at 01:42:16PM -0500, Jerome Glisse wrote:
> On Mon, Nov 19, 2018 at 11:27:52AM -0700, Jason Gunthorpe wrote:
> > On Mon, Nov 19, 2018 at 11:48:54AM -0500, Jerome Glisse wrote:
> > 
> > > Just to comment on this, any InfiniBand driver which uses umem and
> > > does not have ODP (here ODP for me means listening to mmu notifiers,
> > > so all InfiniBand drivers except mlx5) will be affected by the same
> > > issue AFAICT.
> > > 
> > > AFAICT there is no special thing happening after fork() inside any
> > > of those drivers. So if the parent creates a umem MR before fork()
> > > and programs hardware with it, then after fork() the parent might
> > > start using a new page for the umem range while the old memory is
> > > used by the child. The reverse is also true (parent using old memory
> > > and child new memory); bottom line, you cannot predict which memory
> > > the child or the parent will use for the range after fork().
> > > 
> > > So no matter what you consider the child or the parent, what the hw
> > > will use for the MR is unlikely to match what the CPU uses for the
> > > same virtual address. In other words:
> > > 
> > > Before fork:
> > > CPU parent: virtual addr ptr1 -> physical address = 0xCAFE
> > > HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > > 
> > > Case 1:
> > > CPU parent: virtual addr ptr1 -> physical address = 0xCAFE
> > > CPU child:  virtual addr ptr1 -> physical address = 0xDEAD
> > > HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > > 
> > > Case 2:
> > > CPU parent: virtual addr ptr1 -> physical address = 0xBEEF
> > > CPU child:  virtual addr ptr1 -> physical address = 0xCAFE
> > > HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > 
> > IIRC this is solved in IB by automatically calling
> > madvise(MADV_DONTFORK) before creating the MR.
> > 
> > MADV_DONTFORK
> >   .. This is useful to prevent copy-on-write semantics from changing the
> >   physical location of a page if the parent writes to it after a
> >   fork(2) ..
> 
> This would work around the issue, but it is not transparent, i.e. a
> range marked with DONTFORK no longer behaves as expected from the
> application's point of view.

Do you know what the difference is? The man page really gives no
hint..

Does it sometimes unmap the pages during fork?

I actually wonder if the kernel is a bit broken here, we have the same
problem with O_DIRECT and other stuff, right?

Really, if I have a get_user_pages FOLL_WRITE on a page and we fork,
then shouldn't the COW immediately be broken during the fork?

The kernel can't guarantee that an ongoing DMA will not write to those
pages, and it breaks the fork semantics to write to both processes.

> Also it relies on userspace doing the right thing (which is not
> something I usually trust :)).

Well, if they do it wrong they get to keep all the broken bits :)

Jason


Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Jason Gunthorpe
On Mon, Nov 19, 2018 at 05:14:05PM +0800, Kenneth Lee wrote:
 
> If the hardware cannot share a page table with the CPU, we then need
> to have some way to change the device page table. This is what happens
> in ODP: it invalidates the page table in the device upon mmu_notifier
> callback. But this cannot solve the COW problem: if a user process A
> shares a page P with the device, and A forks a new process B and
> continues to write to the page, then by COW process B will keep the
> page P, while A will get a new page P'. But you have no way to let the
> device know it should use P' rather than P.

Is this true? I thought mmu_notifiers covered all these cases.

The mmu_notifier for A should fire if B causes the physical address of
A's pages to change via COW.

And this causes the device page tables to re-synchronize.
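
As a rough sketch of that mechanism (the callback signature follows the
4.19-era API and has changed across kernel versions; my_mirror and
hw_invalidate_range() are hypothetical driver pieces):

/* ODP-style mirroring: register an mmu_notifier on the mm and tear
 * down the device mapping whenever the CPU side changes. */
#include <linux/kernel.h>
#include <linux/mmu_notifier.h>

struct my_mirror {
	struct mmu_notifier mn;
	/* device page table handle, etc. */
};

/* hypothetical HW hook: drop the device's translations for a range */
static void hw_invalidate_range(struct my_mirror *m,
				unsigned long start, unsigned long end)
{
	/* program the device to stop using [start, end) and re-fault */
}

static int my_invalidate_range_start(struct mmu_notifier *mn,
				     struct mm_struct *mm,
				     unsigned long start,
				     unsigned long end, bool blockable)
{
	struct my_mirror *m = container_of(mn, struct my_mirror, mn);

	/* the COW in the fork scenario above lands here: the device
	 * stops using the old page and re-faults the new one */
	hw_invalidate_range(m, start, end);
	return 0;
}

static const struct mmu_notifier_ops my_mn_ops = {
	.invalidate_range_start	= my_invalidate_range_start,
};

static int my_mirror_register(struct my_mirror *m, struct mm_struct *mm)
{
	m->mn.ops = &my_mn_ops;
	return mmu_notifier_register(&m->mn, mm);
}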

> In WarpDrive/uacce, we make this simple. If you support an IOMMU and
> it supports SVM/SVA, everything will be fine, just like ODP implicit
> mode, and you don't need to write any code for that, because it has
> been done by the IOMMU framework. If it

Looks like the IOMMU code uses mmu_notifier, so it is identical to
IB's ODP. The only difference is that IB tends to have the IOMMU page
table in the device, not in the CPU.

The only case I know of that is different is the new-fangled CAPI
stuff where the IOMMU can directly use the CPU's page table and the
IOMMU page table (in device or CPU) is eliminated.

Anyhow, I don't think a single instance of hardware should justify an
entire new subsystem. Subsystems are hard to make and without multiple
hardware examples there is no way to expect that it would cover any
future use cases.

If all your driver needs is to mmap some PCI bar space, route
interrupts and do DMA mapping then mediated VFIO is probably a good
choice. 

If it needs to do a bunch of other stuff not related to PCI bar
space, interrupts and DMA mapping (i.e. special code for compression,
crypto, AI, whatever) then you should probably do what Jerome said and
make a drivers/char/hisillicon_foo_bar.c that exposes just what your
hardware does.

If you have networking involved in here then consider RDMA,
particularly if this functionality is already part of the same
hardware that the hns infiniband driver is servicing.

'computational MRs' are a reasonable approach to a side-car offload of
already existing RDMA support.

Jason


Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Jerome Glisse
On Mon, Nov 19, 2018 at 11:27:52AM -0700, Jason Gunthorpe wrote:
> On Mon, Nov 19, 2018 at 11:48:54AM -0500, Jerome Glisse wrote:
> 
> > Just to comment on this, any InfiniBand driver which uses umem and
> > does not have ODP (here ODP for me means listening to mmu notifiers,
> > so all InfiniBand drivers except mlx5) will be affected by the same
> > issue AFAICT.
> > 
> > AFAICT there is no special thing happening after fork() inside any
> > of those drivers. So if the parent creates a umem MR before fork()
> > and programs hardware with it, then after fork() the parent might
> > start using a new page for the umem range while the old memory is
> > used by the child. The reverse is also true (parent using old memory
> > and child new memory); bottom line, you cannot predict which memory
> > the child or the parent will use for the range after fork().
> > 
> > So no matter what you consider the child or the parent, what the hw
> > will use for the MR is unlikely to match what the CPU uses for the
> > same virtual address. In other words:
> > 
> > Before fork:
> > CPU parent: virtual addr ptr1 -> physical address = 0xCAFE
> > HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > 
> > Case 1:
> > CPU parent: virtual addr ptr1 -> physical address = 0xCAFE
> > CPU child:  virtual addr ptr1 -> physical address = 0xDEAD
> > HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> > 
> > Case 2:
> > CPU parent: virtual addr ptr1 -> physical address = 0xBEEF
> > CPU child:  virtual addr ptr1 -> physical address = 0xCAFE
> > HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> 
> IIRC this is solved in IB by automatically calling
> madvise(MADV_DONTFORK) before creating the MR.
> 
> MADV_DONTFORK
>   .. This is useful to prevent copy-on-write semantics from changing the
>   physical location of a page if the parent writes to it after a
>   fork(2) ..

This would work around the issue, but it is not transparent, i.e. a
range marked with DONTFORK no longer behaves as expected from the
application's point of view.

Also it relies on userspace doing the right thing (which is not
something I usually trust :)).

Cheers,
Jérôme


Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Jason Gunthorpe
On Mon, Nov 19, 2018 at 11:48:54AM -0500, Jerome Glisse wrote:

> Just to comment on this, any InfiniBand driver which uses umem and
> does not have ODP (here ODP for me means listening to mmu notifiers,
> so all InfiniBand drivers except mlx5) will be affected by the same
> issue AFAICT.
> 
> AFAICT there is no special thing happening after fork() inside any
> of those drivers. So if the parent creates a umem MR before fork()
> and programs hardware with it, then after fork() the parent might
> start using a new page for the umem range while the old memory is
> used by the child. The reverse is also true (parent using old memory
> and child new memory); bottom line, you cannot predict which memory
> the child or the parent will use for the range after fork().
> 
> So no matter what you consider the child or the parent, what the hw
> will use for the MR is unlikely to match what the CPU uses for the
> same virtual address. In other words:
> 
> Before fork:
> CPU parent: virtual addr ptr1 -> physical address = 0xCAFE
> HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> 
> Case 1:
> CPU parent: virtual addr ptr1 -> physical address = 0xCAFE
> CPU child:  virtual addr ptr1 -> physical address = 0xDEAD
> HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE
> 
> Case 2:
> CPU parent: virtual addr ptr1 -> physical address = 0xBEEF
> CPU child:  virtual addr ptr1 -> physical address = 0xCAFE
> HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE

IIRC this is solved in IB by automatically calling
madvise(MADV_DONTFORK) before creating the MR.

MADV_DONTFORK
  .. This is useful to prevent copy-on-write semantics from changing the
  physical location of a page if the parent writes to it after a
  fork(2) ..
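
For illustration, roughly what this amounts to in userspace (a sketch;
per the discussion above, rdma-core performs the equivalent madvise()
internally, and the buffer allocation here is only illustrative):

/* Allocate a DMA buffer that fork() will leave alone: the child does
 * not inherit the range, so a post-fork write in the parent can no
 * longer move the pinned physical page. */
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

void *alloc_dma_buffer(size_t len)
{
	void *buf;

	if (posix_memalign(&buf, sysconf(_SC_PAGESIZE), len))
		return NULL;

	if (madvise(buf, len, MADV_DONTFORK)) {
		free(buf);
		return NULL;
	}
	return buf;	/* now safe to hand to ibv_reg_mr() and the HW */
}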

Jason


Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Jerome Glisse
On Mon, Nov 19, 2018 at 12:48:01PM +0200, Leon Romanovsky wrote:
> On Mon, Nov 19, 2018 at 05:19:10PM +0800, Kenneth Lee wrote:
> > On Mon, Nov 19, 2018 at 05:14:05PM +0800, Kenneth Lee wrote:
> > > On Thu, Nov 15, 2018 at 04:54:55PM +0200, Leon Romanovsky wrote:
> > > > On Thu, Nov 15, 2018 at 04:51:09PM +0800, Kenneth Lee wrote:
> > > > > On Wed, Nov 14, 2018 at 06:00:17PM +0200, Leon Romanovsky wrote:
> > > > > > On Wed, Nov 14, 2018 at 10:58:09AM +0800, Kenneth Lee wrote:
> > > > > > > > On Mon, Nov 12, 2018 at 03:58:02PM +0800, Kenneth Lee wrote:

[...]

> > > > memory exposed to user is properly protected from a security
> > > > point of view.
> > > > 3. "stop using the page for a while for the copying" - I don't
> > > > fully understand this claim; maybe this article will help you to
> > > > describe it better: https://lwn.net/Articles/753027/
> > >
> > > This topic was being discussed in RFCv2. The key problem here is that:
> > >
> > > The device needs to hold the memory for its own computation, but the
> > > CPU/software wants to stop it for a while, for synchronizing with the
> > > disk or for COW.
> > >
> > > If the hardware supports SVM/SVA (Shared Virtual Memory/Address), it
> > > is easy: the device shares the page table with the CPU, and the
> > > device will raise a page fault when the CPU downgrades the PTE to
> > > read-only.
> > >
> > > If the hardware cannot share a page table with the CPU, we then need
> > > to have some way to change the device page table. This is what
> > > happens in ODP: it invalidates the page table in the device upon
> > > mmu_notifier callback. But this cannot solve the COW problem: if a
> > > user process A shares a page P with the device, and A forks a new
> > > process B and continues to write to the page, then by COW process B
> > > will keep the page P, while A will get a new page P'. But you have
> > > no way to let the device know it should use P' rather than P.
> 
> I didn't hear about such an issue, and we have supported fork for a
> long time.
> 

Just to comment on this, any InfiniBand driver which uses umem and
does not have ODP (here ODP for me means listening to mmu notifiers,
so all InfiniBand drivers except mlx5) will be affected by the same
issue AFAICT.

AFAICT there is no special thing happening after fork() inside any
of those drivers. So if the parent creates a umem MR before fork()
and programs hardware with it, then after fork() the parent might
start using a new page for the umem range while the old memory is
used by the child. The reverse is also true (parent using old memory
and child new memory); bottom line, you cannot predict which memory
the child or the parent will use for the range after fork().

So no matter what you consider the child or the parent, what the hw
will use for the MR is unlikely to match what the CPU uses for the
same virtual address. In other words:

Before fork:
CPU parent: virtual addr ptr1 -> physical address = 0xCAFE
HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE

Case 1:
CPU parent: virtual addr ptr1 -> physical address = 0xCAFE
CPU child:  virtual addr ptr1 -> physical address = 0xDEAD
HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE

Case 2:
CPU parent: virtual addr ptr1 -> physical address = 0xBEEF
CPU child:  virtual addr ptr1 -> physical address = 0xCAFE
HARDWARE:   virtual addr ptr1 -> physical address = 0xCAFE


This applies to every single page and is not predictable. It only
applies to private memory (mmap() with MAP_PRIVATE).
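
A small userspace demonstration of the table above (assumes root, since
/proc/self/pagemap hides PFNs from unprivileged readers on recent
kernels; 4096 stands in for the page size):

/* After fork(), the first writer gets moved to a fresh physical
 * page; any pre-fork GUP/DMA keeps targeting the original one. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

static uint64_t pfn_of(void *addr)
{
	int fd = open("/proc/self/pagemap", O_RDONLY);
	uint64_t entry;

	/* one 64-bit entry per virtual page; bits 0-54 hold the PFN */
	pread(fd, &entry, sizeof(entry),
	      ((uintptr_t)addr / 4096) * sizeof(entry));
	close(fd);
	return entry & ((1ULL << 55) - 1);
}

int main(void)
{
	char *ptr1 = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	ptr1[0] = 1;	/* fault the page in: this is '0xCAFE' above */
	printf("before fork: pfn=%llx\n", (unsigned long long)pfn_of(ptr1));

	if (fork() == 0) {	/* Case 1 above: the child writes first */
		ptr1[0] = 2;	/* COW moves the child to '0xDEAD' */
		printf("child:       pfn=%llx\n",
		       (unsigned long long)pfn_of(ptr1));
		_exit(0);
	}
	wait(NULL);
	/* the parent keeps the original page, which is also the page
	 * any pre-fork GUP/DMA still targets; had the parent written
	 * first instead, that would be Case 2 */
	printf("parent:      pfn=%llx\n", (unsigned long long)pfn_of(ptr1));
	return 0;
}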

I am not familiar enough with the RDMA user space API contract to know
if this is an issue or not.

Note that this cannot be fixed; no one should have done umem without
ODP in the first place. For this to work properly you need sane
hardware like mlx5.

Cheers,
Jérôme


Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Leon Romanovsky
On Mon, Nov 19, 2018 at 05:19:10PM +0800, Kenneth Lee wrote:
> On Mon, Nov 19, 2018 at 05:14:05PM +0800, Kenneth Lee wrote:
> > Date: Mon, 19 Nov 2018 17:14:05 +0800
> > From: Kenneth Lee 
> > To: Leon Romanovsky 
> > CC: Tim Sell , linux-...@vger.kernel.org,
> >  Alexander Shishkin , Zaibo Xu
> >  , zhangfei@foxmail.com, linux...@huawei.com,
> >  haojian.zhu...@linaro.org, Christoph Lameter , Hao Fang
> >  , Gavin Schenk , RDMA mailing
> >  list , Vinod Koul , Jason
> >  Gunthorpe , Doug Ledford , Uwe
> >  Kleine-König , David Kershner
> >  , Kenneth Lee , Johan
> >  Hovold , Cyrille Pitchen
> >  , Sagar Dharia
> >  , Jens Axboe ,
> >  guodong...@linaro.org, linux-netdev , Randy Dunlap
> >  , linux-ker...@vger.kernel.org, Zhou Wang
> >  , linux-crypto@vger.kernel.org, Philippe
> >  Ombredanne , Sanyog Kale ,
> >  "David S. Miller" ,
> >  linux-accelerat...@lists.ozlabs.org
> > Subject: Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce
> > User-Agent: Mutt/1.5.21 (2010-09-15)
> > Message-ID: <20181119091405.GE157308@Turing-Arch-b>
> >
> > On Thu, Nov 15, 2018 at 04:54:55PM +0200, Leon Romanovsky wrote:
> > > Date: Thu, 15 Nov 2018 16:54:55 +0200
> > > From: Leon Romanovsky 
> > > To: Kenneth Lee 
> > > CC: Kenneth Lee , Tim Sell ,
> > >  linux-...@vger.kernel.org, Alexander Shishkin
> > >  , Zaibo Xu ,
> > >  zhangfei@foxmail.com, linux...@huawei.com, haojian.zhu...@linaro.org,
> > >  Christoph Lameter , Hao Fang , 
> > > Gavin
> > >  Schenk , RDMA mailing list
> > >  , Zhou Wang , Jason
> > >  Gunthorpe , Doug Ledford , Uwe
> > >  Kleine-König , David Kershner
> > >  , Johan Hovold , Cyrille
> > >  Pitchen , Sagar Dharia
> > >  , Jens Axboe ,
> > >  guodong...@linaro.org, linux-netdev , Randy 
> > > Dunlap
> > >  , linux-ker...@vger.kernel.org, Vinod Koul
> > >  , linux-crypto@vger.kernel.org, Philippe Ombredanne
> > >  , Sanyog Kale , "David S.
> > >  Miller" , linux-accelerat...@lists.ozlabs.org
> > > Subject: Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce
> > > User-Agent: Mutt/1.10.1 (2018-07-13)
> > > Message-ID: <20181115145455.gn3...@mtr-leonro.mtl.com>
> > >
> > > On Thu, Nov 15, 2018 at 04:51:09PM +0800, Kenneth Lee wrote:
> > > > On Wed, Nov 14, 2018 at 06:00:17PM +0200, Leon Romanovsky wrote:
> > > > > Date: Wed, 14 Nov 2018 18:00:17 +0200
> > > > > From: Leon Romanovsky 
> > > > > To: Kenneth Lee 
> > > > > CC: Tim Sell , linux-...@vger.kernel.org,
> > > > >  Alexander Shishkin , Zaibo Xu
> > > > >  , zhangfei@foxmail.com, linux...@huawei.com,
> > > > >  haojian.zhu...@linaro.org, Christoph Lameter , Hao 
> > > > > Fang
> > > > >  , Gavin Schenk , RDMA 
> > > > > mailing
> > > > >  list , Zhou Wang 
> > > > > ,
> > > > >  Jason Gunthorpe , Doug Ledford , 
> > > > > Uwe
> > > > >  Kleine-König , David Kershner
> > > > >  , Johan Hovold , Cyrille
> > > > >  Pitchen , Sagar Dharia
> > > > >  , Jens Axboe ,
> > > > >  guodong...@linaro.org, linux-netdev , Randy 
> > > > > Dunlap
> > > > >  , linux-ker...@vger.kernel.org, Vinod Koul
> > > > >  , linux-crypto@vger.kernel.org, Philippe Ombredanne
> > > > >  , Sanyog Kale , 
> > > > > Kenneth Lee
> > > > >  , "David S. Miller" ,
> > > > >  linux-accelerat...@lists.ozlabs.org
> > > > > Subject: Re: [RFCv3 PATCH 1/6] uacce: Add documents for 
> > > > > WarpDrive/uacce
> > > > > User-Agent: Mutt/1.10.1 (2018-07-13)
> > > > > Message-ID: <20181114160017.gi3...@mtr-leonro.mtl.com>
> > > > >
> > > > > On Wed, Nov 14, 2018 at 10:58:09AM +0800, Kenneth Lee wrote:
> > > > > >
> > > > > > On 2018/11/13 8:23 AM, Leon Romanovsky wrote:
> > > > > > > On Mon, Nov 12, 2018 at 03:58:02PM +0800, Kenneth Lee wrote:
> > > > > > > > From: Kenneth Lee 
> > > > > > > >
> > > > > > > > WarpDrive is a general accelerator framework for the user
> > > > > > > > application to access the hardware without going through the ker

Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Kenneth Lee
On Mon, Nov 19, 2018 at 05:14:05PM +0800, Kenneth Lee wrote:
> Date: Mon, 19 Nov 2018 17:14:05 +0800
> From: Kenneth Lee 
> To: Leon Romanovsky 
> CC: Tim Sell , linux-...@vger.kernel.org,
>  Alexander Shishkin , Zaibo Xu
>  , zhangfei@foxmail.com, linux...@huawei.com,
>  haojian.zhu...@linaro.org, Christoph Lameter , Hao Fang
>  , Gavin Schenk , RDMA mailing
>  list , Vinod Koul , Jason
>  Gunthorpe , Doug Ledford , Uwe
>  Kleine-König , David Kershner
>  , Kenneth Lee , Johan
>  Hovold , Cyrille Pitchen
>  , Sagar Dharia
>  , Jens Axboe ,
>  guodong...@linaro.org, linux-netdev , Randy Dunlap
>  , linux-ker...@vger.kernel.org, Zhou Wang
>  , linux-crypto@vger.kernel.org, Philippe
>  Ombredanne , Sanyog Kale ,
>  "David S. Miller" ,
>  linux-accelerat...@lists.ozlabs.org
> Subject: Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce
> User-Agent: Mutt/1.5.21 (2010-09-15)
> Message-ID: <20181119091405.GE157308@Turing-Arch-b>
> 
> On Thu, Nov 15, 2018 at 04:54:55PM +0200, Leon Romanovsky wrote:
> > Date: Thu, 15 Nov 2018 16:54:55 +0200
> > From: Leon Romanovsky 
> > To: Kenneth Lee 
> > CC: Kenneth Lee , Tim Sell ,
> >  linux-...@vger.kernel.org, Alexander Shishkin
> >  , Zaibo Xu ,
> >  zhangfei@foxmail.com, linux...@huawei.com, haojian.zhu...@linaro.org,
> >  Christoph Lameter , Hao Fang , Gavin
> >  Schenk , RDMA mailing list
> >  , Zhou Wang , Jason
> >  Gunthorpe , Doug Ledford , Uwe
> >  Kleine-König , David Kershner
> >  , Johan Hovold , Cyrille
> >  Pitchen , Sagar Dharia
> >  , Jens Axboe ,
> >  guodong...@linaro.org, linux-netdev , Randy Dunlap
> >  , linux-ker...@vger.kernel.org, Vinod Koul
> >  , linux-crypto@vger.kernel.org, Philippe Ombredanne
> >  , Sanyog Kale , "David S.
> >  Miller" , linux-accelerat...@lists.ozlabs.org
> > Subject: Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce
> > User-Agent: Mutt/1.10.1 (2018-07-13)
> > Message-ID: <20181115145455.gn3...@mtr-leonro.mtl.com>
> > 
> > On Thu, Nov 15, 2018 at 04:51:09PM +0800, Kenneth Lee wrote:
> > > On Wed, Nov 14, 2018 at 06:00:17PM +0200, Leon Romanovsky wrote:
> > > > Date: Wed, 14 Nov 2018 18:00:17 +0200
> > > > From: Leon Romanovsky 
> > > > To: Kenneth Lee 
> > > > CC: Tim Sell , linux-...@vger.kernel.org,
> > > >  Alexander Shishkin , Zaibo Xu
> > > >  , zhangfei@foxmail.com, linux...@huawei.com,
> > > >  haojian.zhu...@linaro.org, Christoph Lameter , Hao Fang
> > > >  , Gavin Schenk , RDMA 
> > > > mailing
> > > >  list , Zhou Wang ,
> > > >  Jason Gunthorpe , Doug Ledford , 
> > > > Uwe
> > > >  Kleine-König , David Kershner
> > > >  , Johan Hovold , Cyrille
> > > >  Pitchen , Sagar Dharia
> > > >  , Jens Axboe ,
> > > >  guodong...@linaro.org, linux-netdev , Randy 
> > > > Dunlap
> > > >  , linux-ker...@vger.kernel.org, Vinod Koul
> > > >  , linux-crypto@vger.kernel.org, Philippe Ombredanne
> > > >  , Sanyog Kale , Kenneth 
> > > > Lee
> > > >  , "David S. Miller" ,
> > > >  linux-accelerat...@lists.ozlabs.org
> > > > Subject: Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce
> > > > User-Agent: Mutt/1.10.1 (2018-07-13)
> > > > Message-ID: <20181114160017.gi3...@mtr-leonro.mtl.com>
> > > >
> > > > On Wed, Nov 14, 2018 at 10:58:09AM +0800, Kenneth Lee wrote:
> > > > >
> > > > > On 2018/11/13 8:23 AM, Leon Romanovsky wrote:
> > > > > > On Mon, Nov 12, 2018 at 03:58:02PM +0800, Kenneth Lee wrote:
> > > > > > > From: Kenneth Lee 
> > > > > > >
> > > > > > > WarpDrive is a general accelerator framework for the user
> > > > > > > application to access the hardware without going through the
> > > > > > > kernel in the data path.
> > > > > > >
> > > > > > > The kernel component that provides kernel facilities to
> > > > > > > drivers for exposing the user interface is called uacce. It
> > > > > > > is a short name for "Unified/User-space-access-intended
> > > > > > > Accelerator Framework".
> > > > > > >
> > > > > > > This patch adds a document to explain how it works.
> > > > > > + RDMA and netdev folks
> > > > > >
> > > > > 

Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-19 Thread Kenneth Lee
On Thu, Nov 15, 2018 at 04:54:55PM +0200, Leon Romanovsky wrote:
> Date: Thu, 15 Nov 2018 16:54:55 +0200
> From: Leon Romanovsky 
> To: Kenneth Lee 
> CC: Kenneth Lee , Tim Sell ,
>  linux-...@vger.kernel.org, Alexander Shishkin
>  , Zaibo Xu ,
>  zhangfei@foxmail.com, linux...@huawei.com, haojian.zhu...@linaro.org,
>  Christoph Lameter , Hao Fang , Gavin
>  Schenk , RDMA mailing list
>  , Zhou Wang , Jason
>  Gunthorpe , Doug Ledford , Uwe
>  Kleine-König , David Kershner
>  , Johan Hovold , Cyrille
>  Pitchen , Sagar Dharia
>  , Jens Axboe ,
>  guodong...@linaro.org, linux-netdev , Randy Dunlap
>  , linux-ker...@vger.kernel.org, Vinod Koul
>  , linux-crypto@vger.kernel.org, Philippe Ombredanne
>  , Sanyog Kale , "David S.
>  Miller" , linux-accelerat...@lists.ozlabs.org
> Subject: Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce
> User-Agent: Mutt/1.10.1 (2018-07-13)
> Message-ID: <20181115145455.gn3...@mtr-leonro.mtl.com>
> 
> On Thu, Nov 15, 2018 at 04:51:09PM +0800, Kenneth Lee wrote:
> > On Wed, Nov 14, 2018 at 06:00:17PM +0200, Leon Romanovsky wrote:
> > > Date: Wed, 14 Nov 2018 18:00:17 +0200
> > > From: Leon Romanovsky 
> > > To: Kenneth Lee 
> > > CC: Tim Sell , linux-...@vger.kernel.org,
> > >  Alexander Shishkin , Zaibo Xu
> > >  , zhangfei@foxmail.com, linux...@huawei.com,
> > >  haojian.zhu...@linaro.org, Christoph Lameter , Hao Fang
> > >  , Gavin Schenk , RDMA 
> > > mailing
> > >  list , Zhou Wang ,
> > >  Jason Gunthorpe , Doug Ledford , Uwe
> > >  Kleine-König , David Kershner
> > >  , Johan Hovold , Cyrille
> > >  Pitchen , Sagar Dharia
> > >  , Jens Axboe ,
> > >  guodong...@linaro.org, linux-netdev , Randy 
> > > Dunlap
> > >  , linux-ker...@vger.kernel.org, Vinod Koul
> > >  , linux-crypto@vger.kernel.org, Philippe Ombredanne
> > >  , Sanyog Kale , Kenneth 
> > > Lee
> > >  , "David S. Miller" ,
> > >  linux-accelerat...@lists.ozlabs.org
> > > Subject: Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce
> > > User-Agent: Mutt/1.10.1 (2018-07-13)
> > > Message-ID: <20181114160017.gi3...@mtr-leonro.mtl.com>
> > >
> > > On Wed, Nov 14, 2018 at 10:58:09AM +0800, Kenneth Lee wrote:
> > > >
> > > > On 2018/11/13 8:23 AM, Leon Romanovsky wrote:
> > > > > On Mon, Nov 12, 2018 at 03:58:02PM +0800, Kenneth Lee wrote:
> > > > > > From: Kenneth Lee 
> > > > > >
> > > > > > WarpDrive is a general accelerator framework for the user
> > > > > > application to access the hardware without going through the
> > > > > > kernel in the data path.
> > > > > >
> > > > > > The kernel component that provides kernel facilities to
> > > > > > drivers for exposing the user interface is called uacce. It
> > > > > > is a short name for "Unified/User-space-access-intended
> > > > > > Accelerator Framework".
> > > > > >
> > > > > > This patch adds a document to explain how it works.
> > > > > + RDMA and netdev folks
> > > > >
> > > > > Sorry to be late in the game; I don't see the other patches,
> > > > > but from the description below it seems like you are reinventing
> > > > > the RDMA verbs model. I have a hard time seeing the differences
> > > > > between the proposed framework and what is already implemented
> > > > > in drivers/infiniband/* for the kernel space and in
> > > > > https://github.com/linux-rdma/rdma-core/ for the user space parts.
> > > >
> > > > Thanks Leon,
> > > >
> > > > Yes, we tried to solve a similar problem to RDMA's. We also
> > > > learned a lot from the existing RDMA code. But we had to make a
> > > > new framework because we cannot register accelerators such as AI
> > > > operations, encryption or compression to the RDMA framework :)
> > >
> > > Assuming that you did everything right and still failed to use the
> > > RDMA framework, you were supposed to fix it, not to reinvent an
> > > exactly identical new one. That is how we develop the kernel: by
> > > reusing existing code.
> >
> > Yes, but we don't force other systems such as NICs or GPUs into RDMA,
> > do we?
> 
> You don't introduce new NIC or GPU, but proposing a

Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-15 Thread Leon Romanovsky
On Thu, Nov 15, 2018 at 04:51:09PM +0800, Kenneth Lee wrote:
> On Wed, Nov 14, 2018 at 06:00:17PM +0200, Leon Romanovsky wrote:
> > Date: Wed, 14 Nov 2018 18:00:17 +0200
> > From: Leon Romanovsky 
> > To: Kenneth Lee 
> > CC: Tim Sell , linux-...@vger.kernel.org,
> >  Alexander Shishkin , Zaibo Xu
> >  , zhangfei@foxmail.com, linux...@huawei.com,
> >  haojian.zhu...@linaro.org, Christoph Lameter , Hao Fang
> >  , Gavin Schenk , RDMA mailing
> >  list , Zhou Wang ,
> >  Jason Gunthorpe , Doug Ledford , Uwe
> >  Kleine-König , David Kershner
> >  , Johan Hovold , Cyrille
> >  Pitchen , Sagar Dharia
> >  , Jens Axboe ,
> >  guodong...@linaro.org, linux-netdev , Randy Dunlap
> >  , linux-ker...@vger.kernel.org, Vinod Koul
> >  , linux-crypto@vger.kernel.org, Philippe Ombredanne
> >  , Sanyog Kale , Kenneth Lee
> >  , "David S. Miller" ,
> >  linux-accelerat...@lists.ozlabs.org
> > Subject: Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce
> > User-Agent: Mutt/1.10.1 (2018-07-13)
> > Message-ID: <20181114160017.gi3...@mtr-leonro.mtl.com>
> >
> > On Wed, Nov 14, 2018 at 10:58:09AM +0800, Kenneth Lee wrote:
> > >
> > > On 2018/11/13 8:23 AM, Leon Romanovsky wrote:
> > > > On Mon, Nov 12, 2018 at 03:58:02PM +0800, Kenneth Lee wrote:
> > > > > From: Kenneth Lee 
> > > > >
> > > > > WarpDrive is a general accelerator framework for the user
> > > > > application to access the hardware without going through the
> > > > > kernel in the data path.
> > > > >
> > > > > The kernel component that provides kernel facilities to drivers
> > > > > for exposing the user interface is called uacce. It is a short
> > > > > name for "Unified/User-space-access-intended Accelerator
> > > > > Framework".
> > > > >
> > > > > This patch adds a document to explain how it works.
> > > > + RDMA and netdev folks
> > > >
> > > > Sorry to be late in the game; I don't see the other patches, but
> > > > from the description below it seems like you are reinventing the
> > > > RDMA verbs model. I have a hard time seeing the differences between
> > > > the proposed framework and what is already implemented in
> > > > drivers/infiniband/* for the kernel space and in
> > > > https://github.com/linux-rdma/rdma-core/ for the user space parts.
> > >
> > > Thanks Leon,
> > >
> > > Yes, we tried to solve a problem similar to RDMA's. We also learned a lot from
> > > the existing RDMA code. But we had to make a new framework because we cannot
> > > register accelerators such as AI operations, encryption or compression to the
> > > RDMA framework:)
> >
> > Assuming that you did everything right and still failed to use the RDMA
> > framework, you were supposed to fix it, not to reinvent an identical new
> > one. That is how we develop the kernel: by reusing existing code.
>
> Yes, but we don't force other systems such as NICs or GPUs into RDMA, do we?

You don't introduce a new NIC or GPU, but you are proposing another interface
to directly access HW memory and bypass the kernel for the data path. This is
the whole idea of RDMA, and this is why it is already present in the kernel.

The various hardware devices supported in our stack allow a ton of crazy
stuff, including GPU interconnections and NIC functionality.

>
> I assume you would not agree to register a zip accelerator to infiniband? :)

"infiniband" name in the "drivers/infiniband/" is legacy one and the
current code supports IB, RoCE, iWARP and OmniPath as a transport layers.
For a lone time, we wanted to rename that folder to be "drivers/rdma",
but didn't find enough brave men/women to do it, due to backport mess
for such move.

The addition of a zip accelerator to RDMA is possible and depends on how you
model such new functionality - a new driver, or maybe a new ULP.

>
> Further, I don't think it is wise to break an existing system (RDMA) to
> fulfill a totally new scenario. The better choice is to let them run in
> parallel for some time and try to merge them accordingly.

Awesome, so please run your code out-of-tree for now and once you are ready
for submission let's try to merge it.

>
> >
> > >
> > > Another problem we tried to address is the way to pin memory for DMA
> > > operations. The RDMA way of pinning memory cannot avoid pages being lost
> > > due to copy-on-write operations while the memory is in use by the device.
> > > This may not be important to the RDMA library, but it is important to an
> > > accelerator.

Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-15 Thread Kenneth Lee
On Wed, Nov 14, 2018 at 06:00:17PM +0200, Leon Romanovsky wrote:
> Date: Wed, 14 Nov 2018 18:00:17 +0200
> From: Leon Romanovsky 
> To: Kenneth Lee 
> CC: Tim Sell , linux-...@vger.kernel.org,
>  Alexander Shishkin , Zaibo Xu
>  , zhangfei@foxmail.com, linux...@huawei.com,
>  haojian.zhu...@linaro.org, Christoph Lameter , Hao Fang
>  , Gavin Schenk , RDMA mailing
>  list , Zhou Wang ,
>  Jason Gunthorpe , Doug Ledford , Uwe
>  Kleine-König , David Kershner
>  , Johan Hovold , Cyrille
>  Pitchen , Sagar Dharia
>  , Jens Axboe ,
>  guodong...@linaro.org, linux-netdev , Randy Dunlap
>  , linux-ker...@vger.kernel.org, Vinod Koul
>  , linux-crypto@vger.kernel.org, Philippe Ombredanne
>  , Sanyog Kale , Kenneth Lee
>  , "David S. Miller" ,
>  linux-accelerat...@lists.ozlabs.org
> Subject: Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce
> User-Agent: Mutt/1.10.1 (2018-07-13)
> Message-ID: <20181114160017.gi3...@mtr-leonro.mtl.com>
> 
> On Wed, Nov 14, 2018 at 10:58:09AM +0800, Kenneth Lee wrote:
> >
> > On 2018/11/13 8:23 AM, Leon Romanovsky wrote:
> > > On Mon, Nov 12, 2018 at 03:58:02PM +0800, Kenneth Lee wrote:
> > > > From: Kenneth Lee 
> > > >
> > > > WarpDrive is a general accelerator framework for the user application to
> > > > access the hardware without going through the kernel in the data path.
> > > >
> > > > The kernel component that provides the kernel facilities for drivers to
> > > > expose the user interface is called uacce, short for
> > > > "Unified/User-space-access-intended Accelerator Framework".
> > > >
> > > > This patch adds a document explaining how it works.
> > > + RDMA and netdev folks
> > >
> > > Sorry to be late to the game; I don't see the other patches, but from
> > > the description below it seems like you are reinventing the RDMA verbs
> > > model. I have a hard time seeing the differences between the proposed
> > > framework and what is already implemented in drivers/infiniband/* for the
> > > kernel space and in https://github.com/linux-rdma/rdma-core/ for the user
> > > space parts.
> >
> > Thanks Leon,
> >
> > Yes, we tried to solve a problem similar to RDMA's. We also learned a lot from
> > the existing RDMA code. But we had to make a new framework because we cannot
> > register accelerators such as AI operations, encryption or compression to the
> > RDMA framework:)
> 
> Assuming that you did everything right and still failed to use the RDMA
> framework, you were supposed to fix it, not to reinvent an identical new
> one. That is how we develop the kernel: by reusing existing code.

Yes, but we don't force other systems such as NICs or GPUs into RDMA, do we?

I assume you would not agree to register a zip accelerator to infiniband? :)

Further, I don't think it is wise to break an existing system (RDMA) to
fulfill a totally new scenario. The better choice is to let them run in
parallel for some time and try to merge them accordingly.

> 
> >
> > Another problem we tried to address is the way to pin memory for DMA
> > operations. The RDMA way of pinning memory cannot avoid pages being lost due
> > to copy-on-write operations while the memory is in use by the device. This
> > may not be important to the RDMA library, but it is important to an
> > accelerator.
> 
> Such support has existed in drivers/infiniband/ since late 2014;
> it is called ODP (on-demand paging).

I reviewed ODP and I think it is a solution bound to infiniband. It is part of
the MR semantics and requires an infiniband-specific hook
(ucontext->invalidate_range()). And the hook requires the device to be able to
stop using the page for a while during the copying. That is OK for infiniband
(actually, only mlx5 uses it). I don't think most accelerators can support
this mode. But WarpDrive works fully on top of the IOMMU interface, so it has
no such limitation.
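
To make the copy-on-write hazard concrete, here is a small illustration (my
own sketch, not code from the patchset): it reads the physical frame number
(PFN) backing a buffer from /proc/self/pagemap (root is needed to see real
PFNs) and shows the buffer silently moving to a new physical page after a
fork() plus a write. A device that pinned the original page would keep doing
DMA to a page the parent no longer maps:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static uint64_t pfn_of(const void *addr)
    {
            uint64_t entry = 0;
            long psz = sysconf(_SC_PAGESIZE);
            int fd = open("/proc/self/pagemap", O_RDONLY);

            if (fd < 0)
                    return 0;
            /* one 64-bit entry per virtual page; bits 0-54 hold the PFN */
            pread(fd, &entry, sizeof(entry),
                  ((uintptr_t)addr / psz) * sizeof(entry));
            close(fd);
            return entry & ((1ULL << 55) - 1);
    }

    int main(void)
    {
            char c = 0;
            int pipefd[2];
            char *buf = aligned_alloc(4096, 4096);

            pipe(pipefd);
            memset(buf, 0, 4096);           /* fault the page in */
            printf("pfn before: %llx\n", (unsigned long long)pfn_of(buf));
            if (fork() == 0) {              /* the page is now shared, COW */
                    read(pipefd[0], &c, 1); /* keep the old page referenced */
                    _exit(0);
            }
            buf[0] = 1;     /* write fault: the parent gets a fresh copy */
            printf("pfn after:  %llx\n", (unsigned long long)pfn_of(buf));
            write(pipefd[1], &c, 1);
            wait(NULL);
            return 0;
    }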

> 
> >
> > Hope this helps the understanding.
> 
> Yes, it helped me a lot.
> Now I'm even more convinced than before that this whole patchset shouldn't
> exist in the first place.

Then maybe you can tell me how I can expose my accelerator to user space?

> 
> To be clear, NAK.
> 
> Thanks
> 
> >
> > Cheers
> >
> > >
> > > Hard NAK from RDMA side.
> > >
> > > Thanks
> > >
> > > > Signed-off-by: Kenneth Lee 
> > > > ---
> > > >   Documentation/warpdrive/warpdrive.rst   | 260 +++
> > > >   Documentation/warpdrive/wd-arch.svg | 764 
> > > >   Documentation/warpdrive/wd.svg  | 526 ++
> > > >   Documentation/warpdrive/wd_q_addr_space.svg | 359 +
> > > >   4 files changed, 1909 insertions(+)

Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-14 Thread Leon Romanovsky
On Wed, Nov 14, 2018 at 10:58:09AM +0800, Kenneth Lee wrote:
>
> On 2018/11/13 8:23 AM, Leon Romanovsky wrote:
> > On Mon, Nov 12, 2018 at 03:58:02PM +0800, Kenneth Lee wrote:
> > > From: Kenneth Lee 
> > >
> > > WarpDrive is a general accelerator framework for the user application to
> > > access the hardware without going through the kernel in the data path.
> > >
> > > The kernel component that provides the kernel facilities for drivers to
> > > expose the user interface is called uacce, short for
> > > "Unified/User-space-access-intended Accelerator Framework".
> > >
> > > This patch adds a document explaining how it works.
> > + RDMA and netdev folks
> >
> > Sorry to be late to the game; I don't see the other patches, but from
> > the description below it seems like you are reinventing the RDMA verbs
> > model. I have a hard time seeing the differences between the proposed
> > framework and what is already implemented in drivers/infiniband/* for the
> > kernel space and in https://github.com/linux-rdma/rdma-core/ for the user
> > space parts.
>
> Thanks Leon,
>
> Yes, we tried to solve a problem similar to RDMA's. We also learned a lot from
> the existing RDMA code. But we had to make a new framework because we cannot
> register accelerators such as AI operations, encryption or compression to the
> RDMA framework:)

Assuming that you did everything right and still failed to use the RDMA
framework, you were supposed to fix it, not to reinvent an identical new one.
That is how we develop the kernel: by reusing existing code.

>
> Another problem we tried to address is the way to pin memory for DMA
> operations. The RDMA way of pinning memory cannot avoid pages being lost due
> to copy-on-write operations while the memory is in use by the device. This
> may not be important to the RDMA library, but it is important to an
> accelerator.

Such support has existed in drivers/infiniband/ since late 2014;
it is called ODP (on-demand paging).

>
> Hope this helps the understanding.

Yes, it helped me a lot.
Now I'm even more convinced than before that this whole patchset shouldn't
exist in the first place.

To be clear, NAK.

Thanks

>
> Cheers
>
> >
> > Hard NAK from RDMA side.
> >
> > Thanks
> >
> > > Signed-off-by: Kenneth Lee 
> > > ---
> > >   Documentation/warpdrive/warpdrive.rst   | 260 +++
> > >   Documentation/warpdrive/wd-arch.svg | 764 
> > >   Documentation/warpdrive/wd.svg  | 526 ++
> > >   Documentation/warpdrive/wd_q_addr_space.svg | 359 +
> > >   4 files changed, 1909 insertions(+)
> > >   create mode 100644 Documentation/warpdrive/warpdrive.rst
> > >   create mode 100644 Documentation/warpdrive/wd-arch.svg
> > >   create mode 100644 Documentation/warpdrive/wd.svg
> > >   create mode 100644 Documentation/warpdrive/wd_q_addr_space.svg
> > >
> > > diff --git a/Documentation/warpdrive/warpdrive.rst b/Documentation/warpdrive/warpdrive.rst
> > > new file mode 100644
> > > index ..ef84d3a2d462
> > > --- /dev/null
> > > +++ b/Documentation/warpdrive/warpdrive.rst
> > > @@ -0,0 +1,260 @@
> > > +Introduction of WarpDrive
> > > +=
> > > +
> > > +*WarpDrive* is a general accelerator framework for the user application to
> > > +access the hardware without going through the kernel in the data path.
> > > +
> > > +It can be used as a quick channel for accelerators, network adaptors or
> > > +other hardware for applications in user space.
> > > +
> > > +This may make some implementations simpler. E.g. you can reuse most of the
> > > +*netdev* driver in the kernel and just share some ring buffers with the user
> > > +space driver for *DPDK* [4] or *ODP* [5]. Or you can combine the RSA
> > > +accelerator with the *netdev* in user space as an https reverse proxy, etc.
> > > +
> > > +*WarpDrive* treats the hardware accelerator as a heterogeneous processor which
> > > +can take over particular loads from the CPU:
> > > +
> > > +.. image:: wd.svg
> > > +:alt: WarpDrive Concept
> > > +
> > > +The virtual concept of a queue is used to manage the requests sent to the
> > > +accelerator. The application sends requests to the queue by writing to some
> > > +particular address, while the hardware takes the requests directly from that
> > > +address and sends feedback accordingly.
> > > +
> > > +The format of the queue may differ from hardware to hardware. But the
> > > +application need not make any system calls for the communication.
> > > +
> > > +*WarpDrive* tries to create a shared virtual address space for all involved
> > > +accelerators. Within this space, the requests sent to a queue can refer to any
> > > +virtual address, which will be valid to the application and all involved
> > > +accelerators.
> > > +
> > > +The name *WarpDrive* is simply a cool and general name meaning the framework
> > > +makes the application faster. It includes a general user library, a kernel
> > > +management module and drivers for the hardware. In the kernel, the management
> > > +module is called *uacce*, meaning "Unified/User-space-access-intended
> > > +Accelerator Framework".

Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-13 Thread Kenneth Lee



On 2018/11/13 8:23 AM, Leon Romanovsky wrote:

On Mon, Nov 12, 2018 at 03:58:02PM +0800, Kenneth Lee wrote:

From: Kenneth Lee 

WarpDrive is a general accelerator framework for the user application to
access the hardware without going through the kernel in the data path.

The kernel component that provides the kernel facilities for drivers to
expose the user interface is called uacce, short for
"Unified/User-space-access-intended Accelerator Framework".

This patch adds a document explaining how it works.

+ RDMA and netdev folks

Sorry to be late to the game; I don't see the other patches, but from
the description below it seems like you are reinventing the RDMA verbs
model. I have a hard time seeing the differences between the proposed
framework and what is already implemented in drivers/infiniband/* for the
kernel space and in https://github.com/linux-rdma/rdma-core/ for the user
space parts.


Thanks Leon,

Yes, we tried to solve a problem similar to RDMA's. We also learned a lot
from the existing RDMA code. But we had to make a new framework because we
cannot register accelerators such as AI operations, encryption or
compression to the RDMA framework:)


Another problem we tried to address is the way to pin memory for DMA
operations. The RDMA way of pinning memory cannot avoid pages being lost
due to copy-on-write operations while the memory is in use by the device.
This may not be important to the RDMA library, but it is important to an
accelerator.


Hope this helps the understanding.

Cheers



Hard NAK from RDMA side.

Thanks


Signed-off-by: Kenneth Lee 
---
  Documentation/warpdrive/warpdrive.rst   | 260 +++
  Documentation/warpdrive/wd-arch.svg | 764 
  Documentation/warpdrive/wd.svg  | 526 ++
  Documentation/warpdrive/wd_q_addr_space.svg | 359 +
  4 files changed, 1909 insertions(+)
  create mode 100644 Documentation/warpdrive/warpdrive.rst
  create mode 100644 Documentation/warpdrive/wd-arch.svg
  create mode 100644 Documentation/warpdrive/wd.svg
  create mode 100644 Documentation/warpdrive/wd_q_addr_space.svg

diff --git a/Documentation/warpdrive/warpdrive.rst b/Documentation/warpdrive/warpdrive.rst
new file mode 100644
index ..ef84d3a2d462
--- /dev/null
+++ b/Documentation/warpdrive/warpdrive.rst
@@ -0,0 +1,260 @@
+Introduction of WarpDrive
+=
+
+*WarpDrive* is a general accelerator framework for the user application to
+access the hardware without going through the kernel in the data path.
+
+It can be used as a quick channel for accelerators, network adaptors or
+other hardware for applications in user space.
+
+This may make some implementations simpler. E.g. you can reuse most of the
+*netdev* driver in the kernel and just share some ring buffers with the user
+space driver for *DPDK* [4] or *ODP* [5]. Or you can combine the RSA
+accelerator with the *netdev* in user space as an https reverse proxy, etc.
+
+*WarpDrive* treats the hardware accelerator as a heterogeneous processor which
+can take over particular loads from the CPU:
+
+.. image:: wd.svg
+:alt: WarpDrive Concept
+
+The virtual concept of a queue is used to manage the requests sent to the
+accelerator. The application sends requests to the queue by writing to some
+particular address, while the hardware takes the requests directly from that
+address and sends feedback accordingly.
+
+The format of the queue may differ from hardware to hardware. But the
+application need not make any system calls for the communication.
+
+*WarpDrive* tries to create a shared virtual address space for all involved
+accelerators. Within this space, the requests sent to a queue can refer to any
+virtual address, which will be valid to the application and all involved
+accelerators.
+
+The name *WarpDrive* is simply a cool and general name meaning the framework
+makes the application faster. It includes a general user library, a kernel
+management module and drivers for the hardware. In the kernel, the management
+module is called *uacce*, meaning "Unified/User-space-access-intended
+Accelerator Framework".
+
+
+How does it work
+
+
+*WarpDrive* uses *mmap* and the *IOMMU* to play the trick.
+
+*Uacce* creates a chrdev for each device registered to it. A "queue" is
+created when the chrdev is opened. The application accesses the queue by
+mmapping different address regions of the queue file.
+
+The following figure demonstrates the queue file address space:
+
+.. image:: wd_q_addr_space.svg
+:alt: WarpDrive Queue Address Space
+
+The first region of the space, the device region, is used for the application
+to write requests to, or read answers from, the hardware.
+
+Normally, there can be three types of device regions: mmio regions and two
+kinds of memory regions. It is recommended to use common memory for the
+request/answer descriptors and the mmio space for device notification, such
+as doorbells. But of course, this is all up to the interface designer.
+
+There can be two types of device memory regions, kernel-only and user-shared.
+This will be explained in the "kernel APIs" section.

Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-12 Thread Leon Romanovsky
On Mon, Nov 12, 2018 at 03:58:02PM +0800, Kenneth Lee wrote:
> From: Kenneth Lee 
>
> WarpDrive is a general accelerator framework for the user application to
> access the hardware without going through the kernel in the data path.
>
> The kernel component that provides the kernel facilities for drivers to
> expose the user interface is called uacce, short for
> "Unified/User-space-access-intended Accelerator Framework".
>
> This patch adds a document explaining how it works.

+ RDMA and netdev folks

Sorry to be late to the game; I don't see the other patches, but from
the description below it seems like you are reinventing the RDMA verbs
model. I have a hard time seeing the differences between the proposed
framework and what is already implemented in drivers/infiniband/* for the
kernel space and in https://github.com/linux-rdma/rdma-core/ for the user
space parts.

Hard NAK from RDMA side.

Thanks

>
> Signed-off-by: Kenneth Lee 
> ---
>  Documentation/warpdrive/warpdrive.rst   | 260 +++
>  Documentation/warpdrive/wd-arch.svg | 764 
>  Documentation/warpdrive/wd.svg  | 526 ++
>  Documentation/warpdrive/wd_q_addr_space.svg | 359 +
>  4 files changed, 1909 insertions(+)
>  create mode 100644 Documentation/warpdrive/warpdrive.rst
>  create mode 100644 Documentation/warpdrive/wd-arch.svg
>  create mode 100644 Documentation/warpdrive/wd.svg
>  create mode 100644 Documentation/warpdrive/wd_q_addr_space.svg
>
> diff --git a/Documentation/warpdrive/warpdrive.rst b/Documentation/warpdrive/warpdrive.rst
> new file mode 100644
> index ..ef84d3a2d462
> --- /dev/null
> +++ b/Documentation/warpdrive/warpdrive.rst
> @@ -0,0 +1,260 @@
> +Introduction of WarpDrive
> +=
> +
> +*WarpDrive* is a general accelerator framework for the user application to
> +access the hardware without going through the kernel in the data path.
> +
> +It can be used as a quick channel for accelerators, network adaptors or
> +other hardware for applications in user space.
> +
> +This may make some implementations simpler. E.g. you can reuse most of the
> +*netdev* driver in the kernel and just share some ring buffers with the user
> +space driver for *DPDK* [4] or *ODP* [5]. Or you can combine the RSA
> +accelerator with the *netdev* in user space as an https reverse proxy, etc.
> +
> +*WarpDrive* treats the hardware accelerator as a heterogeneous processor which
> +can take over particular loads from the CPU:
> +
> +.. image:: wd.svg
> +:alt: WarpDrive Concept
> +
> +The virtual concept of a queue is used to manage the requests sent to the
> +accelerator. The application sends requests to the queue by writing to some
> +particular address, while the hardware takes the requests directly from that
> +address and sends feedback accordingly.
> +
> +The format of the queue may differ from hardware to hardware. But the
> +application need not make any system calls for the communication.
> +
> +*WarpDrive* tries to create a shared virtual address space for all involved
> +accelerators. Within this space, the requests sent to a queue can refer to any
> +virtual address, which will be valid to the application and all involved
> +accelerators.
> +
> +The name *WarpDrive* is simply a cool and general name meaning the framework
> +makes the application faster. It includes a general user library, a kernel
> +management module and drivers for the hardware. In the kernel, the management
> +module is called *uacce*, meaning "Unified/User-space-access-intended
> +Accelerator Framework".
> +
> +
> +How does it work
> +
> +
> +*WarpDrive* uses *mmap* and the *IOMMU* to play the trick.
> +
> +*Uacce* creates a chrdev for each device registered to it. A "queue" is
> +created when the chrdev is opened. The application accesses the queue by
> +mmapping different address regions of the queue file.
> +
> +The following figure demonstrates the queue file address space:
> +
> +.. image:: wd_q_addr_space.svg
> +:alt: WarpDrive Queue Address Space
> +
> +The first region of the space, the device region, is used for the application
> +to write requests to, or read answers from, the hardware.
> +
> +Normally, there can be three types of device regions: mmio regions and two
> +kinds of memory regions. It is recommended to use common memory for the
> +request/answer descriptors and the mmio space for device notification, such
> +as doorbells. But of course, this is all up to the interface designer.
> +
> +There can be two types of device memory regions, kernel-only and user-shared.
> +This will be explained in the "kernel APIs" section.
> +
> +The Static Shared Virtual Memory region is necessary only when the device
> +IOMMU does not support "Shared Virtual Memory". This will be explained after
> +the *IOMMU* idea.
> +
> +
> +Architecture
> +
> +
> +The full *WarpDrive* architecture is represented in the following class
> +diagram:
> +
> +.. image:: wd-arch.svg
> +:alt: WarpDrive Architecture
> +
> 

[RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

2018-11-12 Thread Kenneth Lee
From: Kenneth Lee 

WarpDrive is a general accelerator framework for the user application to
access the hardware without going through the kernel in the data path.

The kernel component that provides the kernel facilities for drivers to
expose the user interface is called uacce, short for
"Unified/User-space-access-intended Accelerator Framework".

This patch adds a document explaining how it works.

Signed-off-by: Kenneth Lee 
---
 Documentation/warpdrive/warpdrive.rst   | 260 +++
 Documentation/warpdrive/wd-arch.svg | 764 
 Documentation/warpdrive/wd.svg  | 526 ++
 Documentation/warpdrive/wd_q_addr_space.svg | 359 +
 4 files changed, 1909 insertions(+)
 create mode 100644 Documentation/warpdrive/warpdrive.rst
 create mode 100644 Documentation/warpdrive/wd-arch.svg
 create mode 100644 Documentation/warpdrive/wd.svg
 create mode 100644 Documentation/warpdrive/wd_q_addr_space.svg

diff --git a/Documentation/warpdrive/warpdrive.rst b/Documentation/warpdrive/warpdrive.rst
new file mode 100644
index ..ef84d3a2d462
--- /dev/null
+++ b/Documentation/warpdrive/warpdrive.rst
@@ -0,0 +1,260 @@
+Introduction of WarpDrive
+=
+
+*WarpDrive* is a general accelerator framework for the user application to
+access the hardware without going through the kernel in the data path.
+
+It can be used as a quick channel for accelerators, network adaptors or
+other hardware for applications in user space.
+
+This may make some implementations simpler. E.g. you can reuse most of the
+*netdev* driver in the kernel and just share some ring buffers with the user
+space driver for *DPDK* [4] or *ODP* [5]. Or you can combine the RSA
+accelerator with the *netdev* in user space as an https reverse proxy, etc.
+
+*WarpDrive* treats the hardware accelerator as a heterogeneous processor which
+can take over particular loads from the CPU:
+
+.. image:: wd.svg
+:alt: WarpDrive Concept
+
+The virtual concept of a queue is used to manage the requests sent to the
+accelerator. The application sends requests to the queue by writing to some
+particular address, while the hardware takes the requests directly from that
+address and sends feedback accordingly.
+
+The format of the queue may differ from hardware to hardware. But the
+application need not make any system calls for the communication.
+
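+As an illustration only (the descriptor layout, ring depth and doorbell
+semantics below are invented for this sketch; real queues are entirely
+device-specific), the data path can then be as small as one store to shared
+memory plus one store to mmio: ::
+
+#include <stdint.h>
+
+#define RING_SIZE 64                     /* hypothetical ring depth */
+
+struct desc {                            /* hypothetical descriptor layout */
+        uint32_t opcode;
+        uint32_t flags;
+        uint64_t src, dst, len;
+};
+
+/* ring lives in the queue's memory region, doorbell in its mmio region */
+static void submit(struct desc *ring, unsigned int *tail,
+                   volatile uint32_t *doorbell, const struct desc *req)
+{
+        ring[*tail % RING_SIZE] = *req;  /* descriptor into shared memory */
+        __sync_synchronize();            /* make it visible before the kick */
+        *doorbell = ++(*tail);           /* plain mmio store, no system call */
+}
+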
+*WarpDrive* tries to create a shared virtual address space for all involved
+accelerators. Within this space, the requests sent to a queue can refer to any
+virtual address, which will be valid to the application and all involved
+accelerators.
+
+The name *WarpDrive* is simply a cool and general name meaning the framework
+makes the application faster. It includes a general user library, a kernel
+management module and drivers for the hardware. In the kernel, the management
+module is called *uacce*, meaning "Unified/User-space-access-intended
+Accelerator Framework".
+
+
+How does it work
+
+
+*WarpDrive* uses *mmap* and the *IOMMU* to play the trick.
+
+*Uacce* creates a chrdev for each device registered to it. A "queue" is
+created when the chrdev is opened. The application accesses the queue by
+mmapping different address regions of the queue file.
+
+The following figure demonstrates the queue file address space:
+
+.. image:: wd_q_addr_space.svg
+:alt: WarpDrive Queue Address Space
+
+The first region of the space, the device region, is used for the application
+to write requests to, or read answers from, the hardware.
+
+Normally, there can be three types of device regions: mmio regions and two
+kinds of memory regions. It is recommended to use common memory for the
+request/answer descriptors and the mmio space for device notification, such
+as doorbells. But of course, this is all up to the interface designer.
+
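+As a rough sketch of the application side (the device node name and the mmap
+offset encoding below are assumptions for illustration, not part of this
+document), opening a queue and mapping its regions could look like: ::
+
+#include <fcntl.h>
+#include <stddef.h>
+#include <sys/mman.h>
+#include <unistd.h>
+
+#define Q_MMIO_OFF 0x0          /* hypothetical offset of the mmio region */
+#define Q_MEM_OFF  0x1000       /* hypothetical offset of the memory region */
+
+int open_queue(void **mmio, void **mem, size_t sz)
+{
+        /* each open() of the uacce chrdev creates one queue */
+        int fd = open("/dev/ua-zip-0", O_RDWR);   /* hypothetical node name */
+
+        if (fd < 0)
+                return -1;
+        /* the queue file exposes its regions at different file offsets */
+        *mmio = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_SHARED, fd,
+                     Q_MMIO_OFF);
+        *mem = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_SHARED, fd,
+                    Q_MEM_OFF);
+        if (*mmio == MAP_FAILED || *mem == MAP_FAILED) {
+                close(fd);
+                return -1;
+        }
+        return fd;
+}
+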
+There can be two types of device memory regions, kernel-only and user-shared.
+This will be explained in the "kernel APIs" section.
+
+The Static Shared Virtual Memory region is necessary only when the device IOMMU
+does not support "Shared Virtual Memory". This will be explained after the
+*IOMMU* idea.
+
+
+Architecture
+
+
+The full *WarpDrive* architecture is represented in the following class
+diagram:
+
+.. image:: wd-arch.svg
+:alt: WarpDrive Architecture
+
+
+The user API
+
+
+We adopt a polling-style interface in user space: ::
+
+int wd_request_queue(struct wd_queue *q);
+void wd_release_queue(struct wd_queue *q);
+
+int wd_send(struct wd_queue *q, void *req);
+int wd_recv(struct wd_queue *q, void **req);
+int wd_recv_sync(struct wd_queue *q, void **req);
+void wd_flush(struct wd_queue *q);
+
+wd_recv_sync() is a wrapper around its non-sync version. It traps into the
+kernel and waits until the queue becomes available.
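+
+For illustration, a minimal synchronous round trip over this API might look
+like the sketch below (the header name and the request layout are assumptions
+made for this sketch; only the functions above come from this document): ::
+
+#include <stddef.h>
+
+#include "wd.h"                 /* hypothetical user library header */
+
+struct my_req {                 /* invented request layout; real layouts are
+                                 * defined by the device and its driver */
+        void *src, *dst;
+        size_t len;
+};
+
+int do_one_job(void *src, void *dst, size_t len)
+{
+        struct wd_queue q;
+        struct my_req req = { .src = src, .dst = dst, .len = len };
+        void *resp;
+        int ret;
+
+        ret = wd_request_queue(&q);
+        if (ret)
+                return ret;
+        /* with SVA/SVM the plain pointers in req are directly valid to the
+         * accelerator: no copy and no explicit pinning are needed */
+        ret = wd_send(&q, &req);
+        if (!ret)
+                ret = wd_recv_sync(&q, &resp);   /* sleep until done */
+        wd_release_queue(&q);
+        return ret;
+}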
+
+If the queue does not support SVA/SVM, the following helper function
+can be used to create Static Shared Virtual Memory: ::
+
+void *wd_preserve_share_m