>> Yes, that's why I used 'significant'. One good thing is that given resources
>> it can easily be done in parallel with other development, and will give
>> additional insight of some form.
>
>Yup, well if someone wants to start working on an emulated RDMA device
>that actually simulates
> My first reflex when reading this thread was to think that this whole domain
> lends itself excellently to testing via Qemu. Could it be that doing this in
> the opposite direction might be a safer approach in the long run even though
> (significant) more work up-front?
While the idea of
On 25/04/17 12:30 AM, Knut Omang wrote:
> Yes, that's why I used 'significant'. One good thing is that given resources
> it can easily be done in parallel with other development, and will give
> additional insight of some form.
Yup, well if someone wants to start working on an emulated RDMA
On Mon, 2017-04-24 at 10:14 -0600, Logan Gunthorpe wrote:
>
> On 24/04/17 01:36 AM, Knut Omang wrote:
> > My first reflex when reading this thread was to think that this whole domain
> > lends itself excellently to testing via Qemu. Could it be that doing this in
> > the opposite direction
On 24/04/17 01:36 AM, Knut Omang wrote:
> My first reflex when reading this thread was to think that this whole domain
> lends itself excellently to testing via Qemu. Could it be that doing this in
> the opposite direction might be a safer approach in the long run even though
> (significant)
On Mon, 2017-04-17 at 08:31 +1000, Benjamin Herrenschmidt wrote:
> On Sun, 2017-04-16 at 10:34 -0600, Logan Gunthorpe wrote:
> >
> > On 16/04/17 09:53 AM, Dan Williams wrote:
> > > ZONE_DEVICE allows you to redirect via get_dev_pagemap() to retrieve
> > > context about the physical address in
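For context, the lookup Dan is describing follows this pattern (a sketch
against the 4.11-era memremap API; treating the result as p2p context is
the part under discussion in this thread):

	#include <linux/memremap.h>
	#include <linux/mm.h>

	/* Ask ZONE_DEVICE which dev_pagemap, if any, owns this pfn. */
	static bool pfn_is_device_memory(unsigned long pfn)
	{
		struct dev_pagemap *pgmap;

		pgmap = get_dev_pagemap(pfn, NULL); /* takes a reference on success */
		if (!pgmap)
			return false;	/* ordinary host memory */

		/* pgmap->dev and pgmap->res describe the owning device/region */
		put_dev_pagemap(pgmap);
		return true;
	}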
On Thu, Apr 20, 2017 at 4:07 PM, Stephen Bates wrote:
>>> Yes, this makes sense. I think we really just want to distinguish host
>>> memory or not in terms of the dev_pagemap type.
>>
>>> I would like to see mutually exclusive flags for host memory (or not) and
>>> persistence (or not).
>>>
>>
>> Yes, this makes sense. I think we really just want to distinguish host
>> memory or not in terms of the dev_pagemap type.
>
>> I would like to see mutually exclusive flags for host memory (or not) and
>> persistence (or not).
>>
>
> Why persistence? It has zero meaning to the mm.
I like the
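The flags Stephen is proposing would look something like this (purely
illustrative; neither flag exists in the tree, and Dan's objection is that
the persistence bit means nothing to the mm):

	/* Hypothetical dev_pagemap flags as proposed above -- not a real API. */
	#define DEV_PAGEMAP_HOST_MEM	(1 << 0) /* backed by host DRAM */
	#define DEV_PAGEMAP_PERSISTENT	(1 << 1) /* contents survive power loss */

	/* Each bit answers an independent yes/no question: an NVDIMM region
	 * would set both, a p2p-capable PCI BAR would set neither. */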
On Thu, Apr 20, 2017 at 1:43 PM, Stephen Bates wrote:
>
>> Yes, this makes sense. I think we really just want to distinguish host
>> memory or not in terms of the dev_pagemap type.
>
> I would like to see mutually exclusive flags for host memory (or not) and
> persistence (or not).
>
Why
> Yes, this makes sense. I think we really just want to distinguish host
> memory or not in terms of the dev_pagemap type.
I would like to see mutually exclusive flags for host memory (or not) and
persistence (or not).
Stephen
On Wed, Apr 19, 2017 at 3:55 PM, Logan Gunthorpe wrote:
>
>
> On 19/04/17 02:48 PM, Jason Gunthorpe wrote:
>> On Wed, Apr 19, 2017 at 01:41:49PM -0600, Logan Gunthorpe wrote:
>>
But.. it could point to a GPU and the GPU struct device could have a
proxy dma_ops like Dan pointed out.
>>>
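A minimal sketch of the proxy dma_ops idea, assuming a hypothetical
is_gpu_bar_page()/gpu_phys_to_bus() pair and that the bus's real ops have
been saved in real_ops:

	/* Pages in the GPU's BAR get translated locally; everything else is
	 * forwarded to the real dma_map_ops for the bus. */
	static const struct dma_map_ops *real_ops;

	static dma_addr_t gpu_proxy_map_page(struct device *dev, struct page *page,
					     unsigned long offset, size_t size,
					     enum dma_data_direction dir,
					     unsigned long attrs)
	{
		if (is_gpu_bar_page(page))	/* hypothetical test */
			return gpu_phys_to_bus(page_to_phys(page)) + offset;
		return real_ops->map_page(dev, page, offset, size, dir, attrs);
	}

	static const struct dma_map_ops gpu_proxy_dma_ops = {
		.map_page = gpu_proxy_map_page,
		/* ...remaining ops forwarded to real_ops... */
	};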
On 19/04/17 02:48 PM, Jason Gunthorpe wrote:
> On Wed, Apr 19, 2017 at 01:41:49PM -0600, Logan Gunthorpe wrote:
>
>>> But.. it could point to a GPU and the GPU struct device could have a
>>> proxy dma_ops like Dan pointed out.
>>
>> Seems a bit awkward to me that, for the intended use
On Wed, Apr 19, 2017 at 01:41:49PM -0600, Logan Gunthorpe wrote:
> > But.. it could point to a GPU and the GPU struct device could have a
> > proxy dma_ops like Dan pointed out.
>
> Seems a bit awkward to me that, for the intended use case, you
> have to proxy the dma_ops. I'd probably
On 19/04/17 01:31 PM, Jason Gunthorpe wrote:
> Try it with VT-D turned on. It shouldn't work or there is a notable
> security hole in your platform..
Ah, ok.
>>> const struct dma_map_ops *comp_ops = get_dma_ops(completer);
>>> const struct dma_map_ops *init_ops =
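The quoted fragment is cut off; it presumably continues in this direction
(a reconstruction, not the original patch):

	/* Do the initiator and completer devices resolve to the same ops?
	 * If so, one mapping pass can cover the p2p pages too. */
	static bool same_dma_ops(struct device *initiator, struct device *completer)
	{
		const struct dma_map_ops *comp_ops = get_dma_ops(completer);
		const struct dma_map_ops *init_ops = get_dma_ops(initiator);

		return comp_ops == init_ops;
	}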
On Wed, Apr 19, 2017 at 01:02:49PM -0600, Logan Gunthorpe wrote:
>
>
> On 19/04/17 12:32 PM, Jason Gunthorpe wrote:
> > On Wed, Apr 19, 2017 at 12:01:39PM -0600, Logan Gunthorpe wrote:
> > Not entirely, it would have to call through the whole process
> > including the arch_p2p_cross_segment()..
On 19/04/17 12:32 PM, Jason Gunthorpe wrote:
> On Wed, Apr 19, 2017 at 12:01:39PM -0600, Logan Gunthorpe wrote:
> Not entirely, it would have to call through the whole process
> including the arch_p2p_cross_segment()..
Hmm, yes. Though it's still not clear what, if anything,
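arch_p2p_cross_segment() is a hypothetical hook from this thread, not
merged code; the flow being debated is roughly:

	/* Sketch only: every function named here is a proposal from the
	 * thread, not an existing kernel API. */
	static dma_addr_t p2p_map_page(struct device *initiator,
				       struct pci_dev *completer, struct page *page)
	{
		/* same PCI segment: apply the bus offset directly */
		if (pci_p2p_same_segment(initiator, completer))
			return pci_p2p_bus_addr(completer, page);

		/* otherwise defer to the arch, which may program an iommu
		 * window or simply refuse the mapping */
		return arch_p2p_cross_segment(initiator, completer, page);
	}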
On Wed, Apr 19, 2017 at 11:41 AM, Logan Gunthorpe wrote:
>
>
> On 19/04/17 12:30 PM, Dan Williams wrote:
>> Letting other users do the container_of() arrangement means that
>> struct page_map needs to become public and move into struct
>> dev_pagemap directly.
>
> Ah, yes, I got a bit turned
On 19/04/17 12:30 PM, Dan Williams wrote:
> Letting other users do the container_of() arrangement means that
> struct page_map needs to become public and move into struct
> dev_pagemap directly.
Ah, yes, I got a bit turned around by that and failed to notice that
page_map and dev_pagemap are
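For context, page_map is at this point (4.11) a private wrapper in
kernel/memremap.c, roughly as below; folding its fields into dev_pagemap
lets users embed the struct and recover their container (the p2pmem_dev
driver struct here is hypothetical):

	/* kernel/memremap.c today -- private to the core: */
	struct page_map {
		struct resource res;
		struct percpu_ref *ref;
		struct dev_pagemap pgmap;
		struct vmem_altmap altmap;
	};

	/* With dev_pagemap public and embeddable, a driver could do the
	 * usual container_of() arrangement instead: */
	struct p2pmem_dev {
		struct device dev;
		struct dev_pagemap pgmap;
	};

	static struct p2pmem_dev *to_p2pmem(struct dev_pagemap *pgmap)
	{
		return container_of(pgmap, struct p2pmem_dev, pgmap);
	}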
On Wed, Apr 19, 2017 at 12:01:39PM -0600, Logan Gunthorpe wrote:
> I'm just spitballing here but if HMM wanted to use unaddressable memory
> as a DMA target, it could set that function to create a window in the gpu
> memory, then call the pci_p2p_same_segment and return the result as the
> dma
On Wed, Apr 19, 2017 at 11:19 AM, Logan Gunthorpe wrote:
>
>
> On 19/04/17 12:11 PM, Logan Gunthorpe wrote:
>>
>>
>> On 19/04/17 11:41 AM, Dan Williams wrote:
>>> No, not quite ;-). I still don't think we should require the non-HMM
>>> users to pass NULL for all the HMM arguments. What I like about
On 19/04/17 12:11 PM, Logan Gunthorpe wrote:
>
>
> On 19/04/17 11:41 AM, Dan Williams wrote:
>> No, not quite ;-). I still don't think we should require the non-HMM
>> users to pass NULL for all the HMM arguments. What I like about Logan's
>> proposal is to have a separate create and register steps
On 19/04/17 11:41 AM, Dan Williams wrote:
> No, not quite ;-). I still don't think we should require the non-HMM
> users to pass NULL for all the HMM arguments. What I like about Logan's
> proposal is to have separate create and register steps for dev_pagemap.
> That way call paths that don't care about
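The create/register split would give callers something like the following
(hypothetical API shape; dev_pagemap_create(), dev_pagemap_register() and
MEMORY_DEVICE_PCI are all inventions of this sketch):

	/* Plain users take the defaults; HMM- or p2p-style users fill in
	 * extra fields between the two calls. */
	struct dev_pagemap *pgmap;

	pgmap = dev_pagemap_create(dev, res);	/* hypothetical */
	if (IS_ERR(pgmap))
		return PTR_ERR(pgmap);

	pgmap->type = MEMORY_DEVICE_PCI;	/* optional, p2p users only */

	return dev_pagemap_register(pgmap);	/* hypothetical */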
On 19/04/17 11:14 AM, Jason Gunthorpe wrote:
> I don't see a use for the dma_map function pointer at this point..
Yes, it is kind of like designing for the future. I just find it a
little odd calling the pci functions in the iommu.
> It doesn't make a lot of sense for the completer of the DMA
On Wed, Apr 19, 2017 at 10:32 AM, Jerome Glisse wrote:
> On Wed, Apr 19, 2017 at 10:01:23AM -0700, Dan Williams wrote:
>> On Wed, Apr 19, 2017 at 9:48 AM, Logan Gunthorpe wrote:
>> >
>> >
>> > On 19/04/17 09:55 AM, Jason Gunthorpe wrote:
>> >> I was thinking only this one would be supported with
On Wed, Apr 19, 2017 at 10:01:23AM -0700, Dan Williams wrote:
> On Wed, Apr 19, 2017 at 9:48 AM, Logan Gunthorpe wrote:
> >
> >
> > On 19/04/17 09:55 AM, Jason Gunthorpe wrote:
> >> I was thinking only this one would be supported with a core code
> >> helper..
> >
> > Pivoting slightly: I was
On Wed, Apr 19, 2017 at 10:48:51AM -0600, Logan Gunthorpe wrote:
> The pci_enable_p2p_bar function would then just need to call
> devm_memremap_pages with the dma_map callback set to a function that
> does the segment check and the offset calculation.
I don't see a use for the dma_map function
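Against the 4.11 devm_memremap_pages() signature, the proposal reads
roughly like this (pci_enable_p2p_bar() itself and the dma_map callback
are the hypothetical pieces):

	/* Sketch of the proposed helper: expose a BAR as ZONE_DEVICE pages. */
	void *pci_enable_p2p_bar(struct pci_dev *pdev, int bar,
				 struct percpu_ref *ref)
	{
		struct resource *res = &pdev->resource[bar];
		void *addr;

		addr = devm_memremap_pages(&pdev->dev, res, ref, NULL);
		if (IS_ERR(addr))
			return addr;

		/* proposed extension, not a real field: hang a dma_map
		 * callback off the pagemap that does the same-segment
		 * check and the bus-offset calculation.
		 * pgmap->dma_map = pci_p2p_dma_map; */
		return addr;
	}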
On Wed, Apr 19, 2017 at 9:48 AM, Logan Gunthorpe wrote:
>
>
> On 19/04/17 09:55 AM, Jason Gunthorpe wrote:
>> I was thinking only this one would be supported with a core code
>> helper..
>
> Pivoting slightly: I was looking at how HMM uses ZONE_DEVICE. They add a
> type flag to the dev_pagemap
On 19/04/17 09:55 AM, Jason Gunthorpe wrote:
> I was thinking only this one would be supported with a core code
> helper..
Pivoting slightly: I was looking at how HMM uses ZONE_DEVICE. They add a
type flag to the dev_pagemap structure which would be very useful to us.
We could add another
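Sketched from the then-unmerged HMM series, the type flag looks about like
this; MEMORY_DEVICE_PCI is the "another" value being suggested here, not
something HMM defines:

	enum memory_type {
		MEMORY_DEVICE_HOST = 0,	/* CPU-addressable, host-like (pmem) */
		MEMORY_DEVICE_PRIVATE,	/* HMM: not addressable by the CPU */
		MEMORY_DEVICE_PCI,	/* hypothetical: p2p-capable BAR */
	};

	struct dev_pagemap {
		/* ...existing fields... */
		enum memory_type type;
	};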
On Wed, Apr 19, 2017 at 11:20:06AM +1000, Benjamin Herrenschmidt wrote:
> That helper wouldn't perform the actual iommu mapping. It would simply
> return something along the lines of:
>
> - "use that alternate bus address and don't map in the iommu"
I was thinking only this one would be
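Ben's helper is essentially a tri-state decision function; a sketch of its
shape (all names hypothetical):

	/* The helper only classifies; the caller performs any actual
	 * iommu work based on the answer. */
	enum p2p_map_decision {
		P2P_MAP_NONE,		/* p2p not possible between these two */
		P2P_MAP_BUS_ADDR,	/* use the alternate bus address, skip iommu */
		P2P_MAP_THRU_IOMMU,	/* map the BAR through the iommu as usual */
	};

	enum p2p_map_decision p2p_map_type(struct device *initiator,
					   struct device *completer,
					   pci_bus_addr_t *bus_addr);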
On Tue, 2017-04-18 at 16:24 -0600, Jason Gunthorpe wrote:
> Basically, all this list processing is a huge overhead compared to
> just putting a helper call in the existing sg iteration loop of the
> actual op. Particularly if the actual op is a no-op like no-mmu x86
> would use.
Yes, I'm leaning
On Tue, 2017-04-18 at 17:21 -0600, Jason Gunthorpe wrote:
> Splitting the sgl is different from iommu batching.
>
> As an example, an O_DIRECT write of 1 MB with a single 4K P2P page in
> the middle.
>
> The optimum behavior is to allocate a 1MB-4K iommu range and fill it
> with the CPU memory.
On Tue, 2017-04-18 at 15:22 -0600, Jason Gunthorpe wrote:
> On Tue, Apr 18, 2017 at 02:11:33PM -0700, Dan Williams wrote:
> > > I think this opens an even bigger can of worms..
> >
> > No, I don't think it does. You'd only shim when the target page is
> > backed by a device, not host memory, and
On Tue, 2017-04-18 at 15:03 -0600, Jason Gunthorpe wrote:
> I don't follow, when does get_dma_ops() return a p2p aware provider?
> It has no way to know if the DMA is going to involve p2p, get_dma_ops
> is called with the device initiating the DMA.
>
> So you'd always return the P2P shim on a
On Tue, 2017-04-18 at 14:48 -0600, Logan Gunthorpe wrote:
> > ...and that dma_map goes through get_dma_ops(), so I don't see the conflict?
>
> The main conflict is in dma_map_sg which only does get_dma_ops once but
> the sg may contain memory of different types.
We can handle that in our
On Tue, 2017-04-18 at 12:00 -0600, Jason Gunthorpe wrote:
> - All platforms can succeed if the PCI devices are under the same
> 'segment', but where segments begin is somewhat platform specific
> knowledge. (this is 'same switch' idea Logan has talked about)
We also need to be careful whether
On Tue, 2017-04-18 at 10:27 -0700, Dan Williams wrote:
> > FWIW, RDMA probably wouldn't want to use a p2mem device either, we
> > already have APIs that map BAR memory to user space, and would like to
> > keep using them. An 'enable P2P for bar' helper function sounds better
> > to me.
>
> ...and
On Tue, Apr 18, 2017 at 03:51:27PM -0700, Dan Williams wrote:
> > This really seems like much less trouble than trying to wrapper all
> > the arch's dma ops, and doesn't have the wonky restrictions.
>
> I don't think the root bus iommu drivers have any business knowing or
> caring about dma
On 18/04/17 04:24 PM, Jason Gunthorpe wrote:
> Try and write a stacked map_sg function like you describe and you will
> see how horrible it quickly becomes.
Yes, unfortunately, I have to agree with this statement completely.
> Since dma mapping is a performance path we must be careful not to
>
On Tue, Apr 18, 2017 at 3:56 PM, Logan Gunthorpe wrote:
>
>
> On 18/04/17 04:50 PM, Dan Williams wrote:
>> On Tue, Apr 18, 2017 at 3:48 PM, Logan Gunthorpe wrote:
>>>
>>>
>>> On 18/04/17 04:28 PM, Dan Williams wrote:
Unlike the pci bus address offset case which I think is fundamental to
On 18/04/17 04:50 PM, Dan Williams wrote:
> On Tue, Apr 18, 2017 at 3:48 PM, Logan Gunthorpe wrote:
>>
>>
>> On 18/04/17 04:28 PM, Dan Williams wrote:
>>> Unlike the pci bus address offset case which I think is fundamental to
>>> support since shipping archs do this today, I think it is ok to
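The "bus address offset" here is the gap between a BAR's CPU physical
address and its PCI bus address on some archs; with existing kernel APIs
the translation is roughly (a sketch; the page-in-BAR bookkeeping is
assumed, pci_bus_address() is real):

	/* Address peers on the same segment must use for a page in a BAR. */
	static pci_bus_addr_t p2p_page_bus_addr(struct pci_dev *pdev, int bar,
						struct page *page)
	{
		phys_addr_t phys = page_to_phys(page);
		resource_size_t off = phys - pci_resource_start(pdev, bar);

		return pci_bus_address(pdev, bar) + off;
	}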
On Tue, Apr 18, 2017 at 3:46 PM, Benjamin Herrenschmidt wrote:
> On Tue, 2017-04-18 at 10:27 -0700, Dan Williams wrote:
>> > FWIW, RDMA probably wouldn't want to use a p2mem device either, we
>> > already have APIs that map BAR memory to user space, and would like to
>> > keep using them. A
On Tue, Apr 18, 2017 at 3:42 PM, Jason Gunthorpe wrote:
> On Tue, Apr 18, 2017 at 03:28:17PM -0700, Dan Williams wrote:
>
>> Unlike the pci bus address offset case which I think is fundamental to
>> support since shipping archs do this today
>
> But we can support this by modifying those arch's
On Tue, Apr 18, 2017 at 3:48 PM, Logan Gunthorpe wrote:
>
>
> On 18/04/17 04:28 PM, Dan Williams wrote:
>> Unlike the pci bus address offset case which I think is fundamental to
>> support since shipping archs do this today, I think it is ok to say
>> p2p is restricted to a single sgl that gets
On 18/04/17 04:28 PM, Dan Williams wrote:
> Unlike the pci bus address offset case which I think is fundamental to
> support since shipping archs do this today, I think it is ok to say
> p2p is restricted to a single sgl that gets to talk to host memory or
> a single device. That said, what's
On Tue, Apr 18, 2017 at 03:28:17PM -0700, Dan Williams wrote:
> Unlike the pci bus address offset case which I think is fundamental to
> support since shipping archs do this today
But we can support this by modifying those arch's unique dma_ops
directly.
Eg as I explained, my
On Tue, Apr 18, 2017 at 3:15 PM, Logan Gunthorpe wrote:
>
>
> On 18/04/17 03:36 PM, Dan Williams wrote:
>> On Tue, Apr 18, 2017 at 2:22 PM, Jason Gunthorpe wrote:
>>> On Tue, Apr 18, 2017 at 02:11:33PM -0700, Dan Williams wrote:
> I think this opens an even bigger can of worms..
On Tue, Apr 18, 2017 at 03:31:58PM -0600, Logan Gunthorpe wrote:
> 1) It means that sg_has_p2p has to walk the entire sg and check every
> page. Then map_sg_p2p/map_sg has to walk it again and repeat the check
> then do some operation per page. If anyone is concerned about the
> dma_map
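The double walk Logan objects to would look like this (a sketch;
is_pci_p2p_page() is the hypothetical per-page test from this series):

	/* First pass: does the sg contain any p2p pages at all? */
	static bool sg_has_p2p(struct scatterlist *sgl, int nents)
	{
		struct scatterlist *sg;
		int i;

		for_each_sg(sgl, sg, nents, i)
			if (is_pci_p2p_page(sg_page(sg)))
				return true;
		return false;
	}
	/* ...after which map_sg_p2p()/map_sg() walks the same list again,
	 * which is the per-page overhead being discussed. */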
On 18/04/17 03:36 PM, Dan Williams wrote:
> On Tue, Apr 18, 2017 at 2:22 PM, Jason Gunthorpe wrote:
>> On Tue, Apr 18, 2017 at 02:11:33PM -0700, Dan Williams wrote:
I think this opens an even bigger can of worms..
>>>
>>> No, I don't think it does. You'd only shim when the target page is
On Tue, Apr 18, 2017 at 2:22 PM, Jason Gunthorpe wrote:
> On Tue, Apr 18, 2017 at 02:11:33PM -0700, Dan Williams wrote:
>> > I think this opens an even bigger can of worms..
>>
>> No, I don't think it does. You'd only shim when the target page is
>> backed by a device, not host memory, and you
On 18/04/17 03:03 PM, Jason Gunthorpe wrote:
> What about something more incremental like this instead:
> - dma_ops will set map_sg_p2p == map_sg when they are updated to
> support p2p, otherwise DMA on P2P pages will fail for those ops.
> - When all ops support p2p we remove the if and
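Jason's incremental scheme amounts to one extra method on dma_map_ops plus
a dispatch check, reusing the sg_has_p2p() walk sketched earlier (the
map_sg_p2p member is the hypothetical addition):

	struct dma_map_ops {
		/* ...existing methods, including... */
		int (*map_sg)(struct device *dev, struct scatterlist *sg,
			      int nents, enum dma_data_direction dir,
			      unsigned long attrs);
		/* NULL until the implementation is audited for p2p pages;
		 * ops needing no changes just set map_sg_p2p = map_sg. */
		int (*map_sg_p2p)(struct device *dev, struct scatterlist *sg,
				  int nents, enum dma_data_direction dir,
				  unsigned long attrs);
	};

	/* Dispatch sketch: fail cleanly where p2p isn't supported yet. */
	if (sg_has_p2p(sg, nents))
		return ops->map_sg_p2p ?
			ops->map_sg_p2p(dev, sg, nents, dir, attrs) : 0;
	return ops->map_sg(dev, sg, nents, dir, attrs);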