This message was undeliverable due to the following reason:
Your message was not delivered because the destination computer was
not reachable within the allowed queue period. The amount of time
a message is queued before it is returned depends on local
configuration parameters.
Most likely
>
> On Wed 09-05-18 14:04:21, Huaisheng HS1 Ye wrote:
> > > From: owner-linux...@kvack.org [mailto:owner-linux...@kvack.org] On Behalf Of Michal Hocko
> > >
> > > On Wed 09-05-18 04:22:10, Huaisheng HS1 Ye wrote:
> [...]
> > > > Current mm treats all memory regions equally, it divides
On Wed, 2018-05-09 at 16:26 -0700, Dan Williams wrote:
> On Wed, May 9, 2018 at 4:24 PM, Dave Jiang wrote:
> >
> >
> > On 05/09/2018 04:23 PM, Dan Williams wrote:
> > > On Wed, May 9, 2018 at 4:17 PM, Verma, Vishal L wrote:
> > > > On Fri,
On Wed, May 9, 2018 at 4:24 PM, Dave Jiang wrote:
>
>
> On 05/09/2018 04:23 PM, Dan Williams wrote:
>> On Wed, May 9, 2018 at 4:17 PM, Verma, Vishal L wrote:
>>> On Fri, 2018-04-27 at 15:08 -0700, Dave Jiang wrote:
>>>> util_filter_walk() does
On 05/09/2018 04:23 PM, Dan Williams wrote:
> On Wed, May 9, 2018 at 4:17 PM, Verma, Vishal L wrote:
>> On Fri, 2018-04-27 at 15:08 -0700, Dave Jiang wrote:
>>> util_filter_walk() does the looping through of busses and regions.
>>> Removing duplicate code in
On Wed, May 9, 2018 at 4:17 PM, Verma, Vishal L wrote:
> On Fri, 2018-04-27 at 15:08 -0700, Dave Jiang wrote:
>> util_filter_walk() does the looping through of busses and regions.
>> Removing duplicate code in region ops and provide filter functions so we can
>>
On Fri, 2018-04-27 at 15:08 -0700, Dave Jiang wrote:
> util_filter_walk() does the looping through of busses and regions.
> Removing duplicate code in region ops and provide filter functions so we can
> utilize util_filter_walk() and share common code.
>
> Signed-off-by: Dave Jiang
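The refactor described in this patch is an instance of a common C idiom: one shared walk loop that accepts caller-supplied filter callbacks, so each command contributes only the filtering it cares about instead of duplicating the nested loops. A minimal, self-contained sketch of that idiom follows; the type and function names here are illustrative stand-ins, not the actual ndctl API:

```c
#include <stddef.h>

/* Illustrative stand-ins, not the real ndctl types. */
struct bus { int id; };
struct region { int id; struct bus *bus; };

struct filter_ctx {
    /* Callbacks return nonzero to visit an object, zero to skip it;
     * a NULL callback means "visit everything". */
    int (*filter_bus)(struct bus *b, void *arg);
    int (*filter_region)(struct region *r, void *arg);
    void *arg;
    int visited;    /* number of regions the walk accepted */
};

/* The one shared loop over busses and their regions. Callers that used
 * to duplicate this nesting now differ only in the filters they pass. */
static void util_filter_walk_sketch(struct filter_ctx *ctx,
                                    struct bus *busses, size_t nbus,
                                    struct region *regions, size_t nregion)
{
    for (size_t i = 0; i < nbus; i++) {
        if (ctx->filter_bus && !ctx->filter_bus(&busses[i], ctx->arg))
            continue;
        for (size_t j = 0; j < nregion; j++) {
            if (regions[j].bus != &busses[i])
                continue;
            if (ctx->filter_region &&
                !ctx->filter_region(&regions[j], ctx->arg))
                continue;
            ctx->visited++;
        }
    }
}

/* Example filter: accept only the bus with a matching id. */
static int match_bus_id(struct bus *b, void *arg)
{
    return b->id == *(int *)arg;
}
```

Each command would then supply its own filter pair (the real code also walks namespaces and dimms), which is the "share common code" goal the commit message states.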
On Wed, May 9, 2018 at 5:27 AM, Jan Kara wrote:
> On Tue 24-04-18 16:33:50, Dan Williams wrote:
>> xfs_break_dax_layouts(), similar to xfs_break_leased_layouts(), scans
>> for busy / pinned dax pages and waits for those pages to go idle before
>> any potential extent unmap
On 05/09/2018 03:47 PM, Verma, Vishal L wrote:
> On Fri, 2018-04-27 at 15:08 -0700, Dave Jiang wrote:
>> util_filter_walk() does the looping through of busses and regions.
>> Removing duplicate code in namespace ops and provide filter functions so we can
>> utilize util_filter_walk() and
On Fri, 2018-04-27 at 15:08 -0700, Dave Jiang wrote:
> util_filter_walk() does the looping through of busses and regions.
> Removing duplicate code in namespace ops and provide filter functions so we can
> utilize util_filter_walk() and share common code.
>
> Signed-off-by: Dave Jiang
On Wed, May 9, 2018 at 3:56 AM, Jan Kara wrote:
> On Tue 24-04-18 16:33:35, Dan Williams wrote:
>> Background:
>>
>> get_user_pages() in the filesystem pins file backed memory pages for
>> access by devices performing dma. However, it only pins the memory pages
>> not the
On Wed 09-05-18 14:04:21, Huaisheng HS1 Ye wrote:
> > From: owner-linux...@kvack.org [mailto:owner-linux...@kvack.org] On Behalf Of Michal Hocko
> >
> > On Wed 09-05-18 04:22:10, Huaisheng HS1 Ye wrote:
[...]
> > > Current mm treats all memory regions equally, it divides zones just by
> > >
On Wed, May 09, 2018 at 04:30:32PM +, Stephen Bates wrote:
> Hi Jerome
>
> > Now inside that page table you can point GPU virtual address
> > to use GPU memory or use system memory. Those system memory entry can
> > also be mark as ATS against a given PASID.
>
> Thanks. This all makes
On 09/05/18 07:40 AM, Christian König wrote:
> The key takeaway is that when any device has ATS enabled you can't
> disable ACS without breaking it (even if you unplug and replug it).
I don't follow how you came to this conclusion...
The ACS bits we'd be turning off are the ones that force
Hi Jerome
> Now inside that page table you can point GPU virtual address
> to use GPU memory or use system memory. Those system memory entry can
> also be mark as ATS against a given PASID.
Thanks. This all makes sense.
But do you have examples of this in a kernel driver (if so can you
On Wed, May 09, 2018 at 03:41:44PM +, Stephen Bates wrote:
> Christian
>
> >Interesting point, give me a moment to check that. That finally makes
> >all the hardware I have standing around here valuable :)
>
> Yes. At the very least it provides an initial standards based path
>
On Wed, May 9, 2018 at 3:47 AM, Stephen Rothwell wrote:
> On Wed, 9 May 2018 18:03:46 +0900 Mark Brown wrote:
>>
>> On Wed, May 09, 2018 at 10:47:57AM +0200, Daniel Vetter wrote:
>> > On Wed, May 9, 2018 at 10:44 AM, Mark Brown
On 05/09/2018 08:44 AM, Stephen Bates wrote:
> Hi Don
>> RDMA VFs lend themselves to NVMEoF w/device-assignment need a way to
>> put NVME 'resources' into an assignable/manageable object for
>> 'IOMMU-grouping',
>> which is really a 'DMA security domain' and less an 'IOMMU grouping
> -----Original Message-----
> From: Linux-nvdimm [mailto:linux-nvdimm-boun...@lists.01.org] On Behalf Of
> Aishwarya Pant
> Sent: Friday, February 23, 2018 6:55 AM
> To: Dan Williams ; Rafael J. Wysocki
> ; Len Brown ; linux-
>
On 05/08/2018 05:27 PM, Stephen Bates wrote:
> Hi Don
>> Well, p2p DMA is a function of a cooperating 'agent' somewhere above the two
>> devices.
>> That agent should 'request' to the kernel that ACS be removed/circumvented
>> (p2p enabled) btwn two endpoints.
>> I recommend doing so via a sysfs
On 05/09/2018 10:44 AM, Alex Williamson wrote:
> On Wed, 9 May 2018 12:35:56 +
> "Stephen Bates" wrote:
>> Hi Alex and Don
>>> Correct, the VM has no concept of the host's IOMMU groups, only
>>> the hypervisor knows about the groups,
>> But as I understand it these groups are
On 05/08/2018 08:01 PM, Alex Williamson wrote:
> On Tue, 8 May 2018 19:06:17 -0400
> Don Dutile wrote:
>> On 05/08/2018 05:27 PM, Stephen Bates wrote:
>>> As I understand it VMs need to know because VFIO passes IOMMU
>>> grouping up into the VMs. So if a IOMMU grouping changes the VM's
Christian
>Interesting point, give me a moment to check that. That finally makes
>all the hardware I have standing around here valuable :)
Yes. At the very least it provides an initial standards based path for P2P DMAs
across RPs which is something we have discussed on this list in
On Wed, May 09, 2018 at 02:46:11PM +0200, Jan Kara wrote:
> On Thu 03-05-18 13:24:30, Ross Zwisler wrote:
> > Fix a race in the multi-order iteration code which causes the kernel to hit
> > a GP fault. This was first seen with a production v4.15 based kernel
> > (4.15.6-300.fc27.x86_64) utilizing
On Wed, 9 May 2018 12:35:56 +
"Stephen Bates" wrote:
> Hi Alex and Don
>
> >Correct, the VM has no concept of the host's IOMMU groups, only
> > the hypervisor knows about the groups,
>
> But as I understand it these groups are usually passed through to VMs
> on
On Tue, Apr 24, 2018 at 04:33:50PM -0700, Dan Williams wrote:
> xfs_break_dax_layouts(), similar to xfs_break_leased_layouts(), scans
> for busy / pinned dax pages and waits for those pages to go idle before
> any potential extent unmap operation.
>
> dax_layout_busy_page() handles synchronizing
> From: owner-linux...@kvack.org [mailto:owner-linux...@kvack.org] On Behalf Of Michal Hocko
>
> On Wed 09-05-18 04:22:10, Huaisheng HS1 Ye wrote:
> >
> > > On 05/07/2018 07:33 PM, Huaisheng HS1 Ye wrote:
> > > > diff --git a/mm/Kconfig b/mm/Kconfig
> > > > index c782e8f..5fe1f63 100644
> > >
On 09.05.2018 at 15:12, Stephen Bates wrote:
> Jerome and Christian
>> I think there is confusion here, Alex properly explained the scheme
>> PCIE-device do a ATS request to the IOMMU which returns a valid
>> translation for a virtual address. Device can then use that address
>> directly without going
Jerome and Christian
> I think there is confusion here, Alex properly explained the scheme
> PCIE-device do a ATS request to the IOMMU which returns a valid
> translation for a virtual address. Device can then use that address
> directly without going through IOMMU for translation.
So I went
On Thu 03-05-18 13:24:30, Ross Zwisler wrote:
> Fix a race in the multi-order iteration code which causes the kernel to hit
> a GP fault. This was first seen with a production v4.15 based kernel
> (4.15.6-300.fc27.x86_64) utilizing a DAX workload which used order 9 PMD
> DAX entries.
>
> The
Hi Don
>RDMA VFs lend themselves to NVMEoF w/device-assignment need a way to
>put NVME 'resources' into an assignable/manageable object for
> 'IOMMU-grouping',
>which is really a 'DMA security domain' and less an 'IOMMU grouping
> domain'.
Ha, I like your term "DMA Security
Hi Logan
>Yeah, I'm having a hard time coming up with an easy enough solution for
>the user. I agree with Dan though, the bus renumbering risk would be
>fairly low in the custom hardware seeing the switches are likely going
>to be directly soldered to the same board with the CPU.
Hi Alex and Don
>Correct, the VM has no concept of the host's IOMMU groups, only the
> hypervisor knows about the groups,
But as I understand it these groups are usually passed through to VMs on a
pre-group basis by the hypervisor? So IOMMU group 1 might be passed to VM A and
IOMMU
On Tue 24-04-18 16:33:50, Dan Williams wrote:
> xfs_break_dax_layouts(), similar to xfs_break_leased_layouts(), scans
> for busy / pinned dax pages and waits for those pages to go idle before
> any potential extent unmap operation.
>
> dax_layout_busy_page() handles synchronizing against new
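The scan-and-wait behaviour this patch describes can be modelled in a few lines of plain C: locate a page whose reference count indicates an active DMA pin, wait for a reference to be released, and rescan until nothing is busy. This is only an illustrative model under simplified assumptions (the real xfs_break_dax_layouts() sleeps on a waitqueue and must retake locks after waking), but the control flow is the same:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for a DAX page: refcount > 1 means some device
 * still holds a DMA pin on it. Not the kernel's struct page. */
struct dax_page { int refcount; };

/* Return the first still-pinned page, or NULL once every page is idle. */
static struct dax_page *layout_busy_page(struct dax_page *pages, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (pages[i].refcount > 1)
            return &pages[i];
    return NULL;
}

/* Retry loop: while any page is busy, wait for one reference to be
 * released and rescan from the start. drop_ref stands in for "sleep
 * until the pin is dropped". Returns how many times we had to wait. */
static int break_dax_layouts_sketch(struct dax_page *pages, size_t n,
                                    void (*drop_ref)(struct dax_page *))
{
    struct dax_page *busy;
    int waits = 0;

    while ((busy = layout_busy_page(pages, n)) != NULL) {
        drop_ref(busy);
        waits++;
    }
    return waits;
}

/* Trivial release event for demonstration. */
static void drop_one(struct dax_page *p) { p->refcount--; }
```

Only once the scan comes back empty is it safe to proceed with the extent unmap, which is exactly the guarantee the patch is after.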
On Wed 09-05-18 04:22:10, Huaisheng HS1 Ye wrote:
>
> > On 05/07/2018 07:33 PM, Huaisheng HS1 Ye wrote:
> > > diff --git a/mm/Kconfig b/mm/Kconfig
> > > index c782e8f..5fe1f63 100644
> > > --- a/mm/Kconfig
> > > +++ b/mm/Kconfig
> > > @@ -687,6 +687,22 @@ config ZONE_DEVICE
> > >
> > > +config
On Tue 24-04-18 16:33:35, Dan Williams wrote:
> Background:
>
> get_user_pages() in the filesystem pins file backed memory pages for
> access by devices performing dma. However, it only pins the memory pages
> not the page-to-file offset association. If a file is truncated the
> pages are mapped
On Tue 24-04-18 16:33:29, Dan Williams wrote:
> get_user_pages_fast() for device pages is missing the typical validation
> that all page references have been taken while the mapping was valid.
> Without this validation truncate operations can not reliably coordinate
> against new page reference
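The missing validation being described is the classic pin-then-revalidate pattern: take the page reference first, then re-check that the mapping did not change underneath (as it would during a concurrent truncate), and back the reference out if it did. A simplified userspace sketch of that pattern, using a generation counter as a hypothetical stand-in for the kernel's actual mapping checks:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative stand-ins: a "mapping" whose generation bumps on
 * truncate, and a page that belongs to it. Not the kernel's types. */
struct mapping { atomic_int generation; };
struct gup_page {
    atomic_int refcount;
    struct mapping *mapping;
};

/* Take a reference, then confirm the mapping is still the one we saw
 * before pinning. If a concurrent truncate bumped the generation, the
 * pin raced with it: drop the reference and report failure so the
 * caller can retry. */
static bool pin_page_validated(struct gup_page *p)
{
    int gen = atomic_load(&p->mapping->generation);

    atomic_fetch_add(&p->refcount, 1);      /* the pin itself */
    if (atomic_load(&p->mapping->generation) != gen) {
        atomic_fetch_sub(&p->refcount, 1);  /* lost the race, back out */
        return false;
    }
    return true;
}
```

Without the second check, a truncate that lands between the lookup and the reference grab leaves the caller pinning pages that no longer belong to the file, which is the coordination failure the patch is fixing.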
On Tue 24-04-18 16:33:07, Dan Williams wrote:
> In preparation for allowing filesystems to augment the dev_pagemap
> associated with a dax_device, add an ->fs_claim() callback. The
> ->fs_claim() callback is leveraged by the device-mapper dax
> implementation to iterate all member devices in the
On Tue 24-04-18 16:33:19, Dan Williams wrote:
> Currently, kernel/memremap.c contains generic code for supporting
> memremap() (CONFIG_HAS_IOMEM) and devm_memremap_pages()
> (CONFIG_ZONE_DEVICE). This causes ongoing build maintenance problems as
> additions to memremap.c, especially for the
The original message was received at Wed, 9 May 2018 15:37:59 +0800
from lists.01.org [142.59.83.70]
- The following addresses had permanent fatal errors -
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org