On 17/11/2020 at 16:55, Brice Goglin wrote:
> On 12/11/2020 at 11:49, Greg Kroah-Hartman wrote:
>> On Thu, Nov 12, 2020 at 10:10:57AM +0100, Brice Goglin wrote:
>>> On 12/11/2020 at 07:42, Greg Kroah-Hartman wrote:
>>>> On Thu, Nov 12, 2020 at 07:19:
On 12/11/2020 at 11:49, Greg Kroah-Hartman wrote:
> On Thu, Nov 12, 2020 at 10:10:57AM +0100, Brice Goglin wrote:
>> On 12/11/2020 at 07:42, Greg Kroah-Hartman wrote:
>>> On Thu, Nov 12, 2020 at 07:19:48AM +0100, Brice Goglin wrote:
>>>>
>>>>
On 12/11/2020 at 07:42, Greg Kroah-Hartman wrote:
> On Thu, Nov 12, 2020 at 07:19:48AM +0100, Brice Goglin wrote:
>> On 07/10/2020 at 07:15, Greg Kroah-Hartman wrote:
>>> On Tue, Oct 06, 2020 at 08:14:47PM -0700, Ricardo Neri wrote:
>>>> On Tue, Oct 06, 2020 a
On 07/10/2020 at 07:15, Greg Kroah-Hartman wrote:
> On Tue, Oct 06, 2020 at 08:14:47PM -0700, Ricardo Neri wrote:
>> On Tue, Oct 06, 2020 at 09:37:44AM +0200, Greg Kroah-Hartman wrote:
>>> On Mon, Oct 05, 2020 at 05:57:36PM -0700, Ricardo Neri wrote:
On Sat, Oct 03, 2020 at 10:53:45AM
On 19/10/2020 at 16:16, Morten Rasmussen wrote:
>
>>> If there is a provable benefit of having interconnect grouping
>>> information, I think it would be better represented by a distance matrix
>>> like we have for NUMA.
>> There have been some discussions in various forums about how to
>>
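The NUMA distance matrix mentioned here is exposed to userspace one row per node in /sys/devices/system/node/nodeN/distance. As a rough illustration of how a tool could derive interconnect groupings from such a matrix, here is a minimal Python sketch; the function names and the distance cutoff are illustrative assumptions, not anything ACPI or the kernel defines:

```python
def parse_distance(text):
    # One row of the matrix: space-separated distances, in the format of
    # /sys/devices/system/node/nodeN/distance.
    return [int(x) for x in text.split()]

def group_nodes(matrix, cutoff):
    # Group node indices whose mutual distance is below `cutoff`,
    # using union-find over the symmetric distance matrix.
    parent = list(range(len(matrix)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i, row in enumerate(matrix):
        for j, d in enumerate(row):
            if d < cutoff:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(matrix)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

On a two-socket machine whose matrix is 10 on the diagonal, 12 between nearby nodes, and 20 across sockets, a cutoff of 15 would group the nodes per socket.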
On 19/10/2020 at 14:50, Peter Zijlstra wrote:
> On Mon, Oct 19, 2020 at 01:32:26PM +0100, Jonathan Cameron wrote:
>> On Mon, 19 Oct 2020 12:35:22 +0200
>> Peter Zijlstra wrote:
>>> I'm confused by all of this. The core level is exactly what you seem to
>>> want.
>> It's the level above the
On 16/10/2020 at 17:27, Jonathan Cameron wrote:
> Both ACPI and DT provide the ability to describe additional layers of
> topology between that of individual cores and higher level constructs
> such as the level at which the last level cache is shared.
> In ACPI this can be represented in PPTT
IMM node).
Tested-by: Brice Goglin
> ---
> v1 -> v2:
>
> Fixed multi-level caches, and no caches. v1 incorrectly assumed only a level
> 1 always existed (Brice).
>
> drivers/acpi/hmat/hmat.c | 70
> +---
> 1 file cha
On 12/04/2019 at 21:52, Len Brown wrote:
I think I prefer 's/threads/cpus/g' on that. Threads makes me think SMT,
and I don't think there's any guarantee the part in question will have
SMT on.
>>> I think 'threads' is a bit confusing as well. We seem to be using 'cpu'
>>>
-and-tested-by: Brice Goglin
Just one minor typo below.
On 09/04/2019 at 23:44, Keith Busch wrote:
> Some types of memory nodes that HMAT describes may not be online at the
> time we initially parse their nodes' tables. If the node should be set
> to online later, as can happen when using PM
On 25/03/2019 at 20:29, Dan Williams wrote:
> Perhaps "path" might be a suitable replacement identifier rather than
> type. I.e. memory that originates from an ACPI.NFIT root device is
> likely "pmem".
Could work.
What kind of "path" would we get for other types of memory? (DDR,
On 25/03/2019 at 17:56, Dan Williams wrote:
>
> I'm generally against the concept that a "pmem" or "type" flag should
> indicate anything about the expected performance of the address range.
> The kernel should explicitly look to the HMAT for performance data and
> not otherwise make type-based
On 23/03/2019 at 05:44, Yang Shi wrote:
> With Dave Hansen's patches merged into Linus's tree
>
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c221c0b0308fd01d9fb33a16f64d2fd95f8830a4
>
> PMEM could be hot plugged as NUMA node now. But, how to use PMEM as NUMA
Miscellaneous typos, editorial clarifications, and whitespace fixups.
>
> Merged to most current linux-next.
>
> Added received review, test, and ack by's.
Tested-by: Brice Goglin
I tested this series with several manually-created HMATs.
I already have user-space support
On 14/02/2019 at 18:10, Keith Busch wrote:
> If the HMAT Subsystem Address Range provides a valid processor proximity
> domain for a memory domain, or a processor domain matches the performance
> access of the valid processor proximity domain, register the memory
> target with that initiator so
On 19/02/2019 at 04:40, Len Brown wrote:
> diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> index ccd1f2a8e557..4250a87f57db 100644
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -393,6 +393,7 @@ static bool match_smt(struct cpuinfo_x86 *c, struct
>
On 14/02/2019 at 18:10, Keith Busch wrote:
> System memory may have caches to help improve access speed to frequently
> requested address ranges. While the system provided cache is transparent
> to the software accessing these memory ranges, applications can optimize
> their own access based on
On 21/02/2019 at 08:41, Len Brown wrote:
>
> Here is my list of applications that care about the new CPUID leaf
> and the concepts of packages and die:
>
> cpuid
> lscpu
> x86_energy_perf_policy
> turbostat
You may add hwloc/lstopo which is used by most HPC runtimes (including
your
On 19/02/2019 at 04:40, Len Brown wrote:
> From: Len Brown
>
> like core_siblings, except it shows which die are in the same package.
>
> This is needed for lscpu(1) to correctly display die topology.
>
> Signed-off-by: Len Brown
> Cc: linux-...@vger.kernel.org
> Signed-off-by: Len Brown
>
On 14/02/2019 at 18:10, Keith Busch wrote:
> == Changes since v5 ==
>
> Updated HMAT parsing to account for the recently released ACPI 6.3
> changes.
>
> HMAT attribute calculation overflow checks.
>
> Fixed memory leak if HMAT parse fails.
>
> Minor change to the patch order. All
On 13/02/2019 at 09:43, Brice Goglin wrote:
> On 13/02/2019 at 09:24, Dan Williams wrote:
>> On Wed, Feb 13, 2019 at 12:12 AM Brice Goglin wrote:
>>> On 13/02/2019 at 01:30, Dan Williams wrote:
>>>> On Tue, Feb 12, 2019 at 11:59 AM Brice Goglin
>>>
On 13/02/2019 at 09:24, Dan Williams wrote:
> On Wed, Feb 13, 2019 at 12:12 AM Brice Goglin wrote:
>> On 13/02/2019 at 01:30, Dan Williams wrote:
>>> On Tue, Feb 12, 2019 at 11:59 AM Brice Goglin wrote:
>>>> # ndctl disable-region all
>>>> # ndctl
On 13/02/2019 at 01:30, Dan Williams wrote:
> On Tue, Feb 12, 2019 at 11:59 AM Brice Goglin wrote:
>> # ndctl disable-region all
>> # ndctl zero-labels all
>> # ndctl enable-region region0
>> # ndctl create-namespace -r region0 -t pmem -m devdax
>> {
>
Alexander Duyck
> Reported-by: Brice Goglin
> Cc: Dave Hansen
> Signed-off-by: Dan Williams
> ---
> Changes since v1:
> * Fix the remove_id path since do_id_store() is shared with the new_id
> path (Brice)
>
> Brice, this works for me. I'll push it out on libnvdimm-pending, or
On 11/02/2019 at 17:22, Dave Hansen wrote:
> On 2/9/19 3:00 AM, Brice Goglin wrote:
>> I've used your patches on fake hardware (memmap=xx!yy) with an older
>> nvdimm-pending branch (without Keith's patches). It worked fine. This
>> time I am running on real Intel har
On 11/02/2019 at 16:23, Keith Busch wrote:
> On Sun, Feb 10, 2019 at 09:19:58AM -0800, Jonathan Cameron wrote:
>> On Sat, 9 Feb 2019 09:20:53 +0100
>> Brice Goglin wrote:
>>
>>> Hello Keith
>>>
>>> Could we ever have a single side cache in
Hello Keith
Could we ever have a single side cache in front of two NUMA nodes ? I
don't see a way to find that out in the current implementation. Would we
have an "id" and/or "nodemap" bitmask in the sidecache structure ?
Thanks
Brice
On 16/01/2019 at 18:58, Keith Busch wrote:
> System
On 22/10/2018 at 22:13, Dave Hansen wrote:
> Persistent memory is cool. But, currently, you have to rewrite
> your applications to use it. Wouldn't it be cool if you could
> just have it show up in your system like normal RAM and get to
> it like a slow blob of memory? Well... have I got the
On 13/09/2018 at 11:35, Sudeep Holla wrote:
> On Thu, Sep 13, 2018 at 10:39:10AM +0100, James Morse wrote:
>> Hi Brice,
>>
>> On 13/09/18 06:51, Brice Goglin wrote:
>>> On 12/09/2018 at 11:49, Sudeep Holla wrote:
>>>>> Yes. Without this change,
On 12/09/2018 at 11:49, Sudeep Holla wrote:
>
>> Yes. Without this change, we hit the lscpu error in the commit message,
>> and get zero output about the system. We don't even get information
>> about the caches which are architecturally specified or how many cpus
>> are present. With this
> Is there a good reason for diverging instead of adjusting the
> core_sibling mask? On x86 the core_siblings mask is defined by the last
> level cache span so they don't have this issue.
No. core_siblings is defined as the list of cores that have the same
physical_package_id (see the doc of
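Per that documented definition, core_siblings can be derived purely from each CPU's physical_package_id. A minimal Python sketch of that behavior (the helper name is illustrative, not a kernel interface):

```python
def core_siblings(package_ids):
    # package_ids: list indexed by CPU number, as read from
    # /sys/devices/system/cpu/cpuN/topology/physical_package_id.
    # Returns, for each CPU, the set of CPUs in the same physical
    # package, i.e. the documented meaning of core_siblings.
    by_pkg = {}
    for cpu, pkg in enumerate(package_ids):
        by_pkg.setdefault(pkg, set()).add(cpu)
    return [by_pkg[pkg] for pkg in package_ids]
```

For example, on a two-package system with CPUs 0-1 in package 0 and CPUs 2-3 in package 1, every CPU's sibling set covers exactly its own package.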
On 30/12/2017 at 07:58, Matthew Wilcox wrote:
> On Wed, Dec 27, 2017 at 10:10:34AM +0100, Brice Goglin wrote:
>>> Perhaps we can enlist /proc/iomem or a similar enumeration interface
>>> to tell userspace the NUMA node and whether the kernel thinks it has
>>>
On 22/12/2017 at 23:53, Dan Williams wrote:
> On Thu, Dec 21, 2017 at 12:31 PM, Brice Goglin <brice.gog...@gmail.com> wrote:
>> On 20/12/2017 at 23:41, Ross Zwisler wrote:
> [..]
>> Hello
>>
>> I can confirm that HPC runtimes are going to use these patches (at least
On 20/12/2017 at 23:41, Ross Zwisler wrote:
> On Wed, Dec 20, 2017 at 02:29:56PM -0800, Dan Williams wrote:
>> On Wed, Dec 20, 2017 at 1:24 PM, Ross Zwisler
>> wrote:
>>> On Wed, Dec 20, 2017 at 01:16:49PM -0800, Matthew Wilcox wrote:
On Wed, Dec 20, 2017 at 12:22:21PM -0800, Dave Hansen
On 27/06/2017 16:21, Thomas Gleixner wrote:
> On Tue, 27 Jun 2017, Suravee Suthikulpanit wrote:
>> On 6/27/17 17:48, Borislav Petkov wrote:
>>> On Tue, Jun 27, 2017 at 01:40:52AM -0500, Suravee Suthikulpanit wrote:
However, this is not the case on AMD family17h multi-die processor
On 09/06/2017 15:28, Thomas Renninger wrote:
> On Thursday, June 08, 2017 08:24:01 PM Greg KH wrote:
>> On Thu, Jun 08, 2017 at 06:56:14PM +0200, Felix Schnizlein wrote:
>>> ---
>>> arch/x86/kernel/Makefile| 1 +
>>> arch/x86/kernel/cpuinfo_sysfs.c | 166
>
On 29/11/2016 22:02, Brice Goglin wrote:
> On 29/11/2016 20:39, Borislav Petkov wrote:
>> Does that fix it?
>>
>> Patch is against latest tip/master because we have some more changes in
>> that area.
> I tested the second patch on top of 4.8.11, it
On 30 November 2016 00:28:08 GMT+01:00, Gavin Shan <gws...@linux.vnet.ibm.com>
wrote:
>On Tue, Nov 29, 2016 at 07:57:51AM +0100, Brice Goglin wrote:
>>Hello
>>
>>My Dell PowerEdge R815 doesn't have IPMI anymore when I boot a 4.8
>>kernel, the BMC doesn't even ping anymore. Its Etherne
On 29/11/2016 20:39, Borislav Petkov wrote:
> Does that fix it?
>
> Patch is against latest tip/master because we have some more changes in
> that area.
I tested the second patch on top of 4.8.11, it brings core_id back to
where it was before 4.6, thanks.
Reported-and-tested-by:
Hello
Since Linux 4.6 (and still in 4.9-rc5 at least), both AMD Bulldozer
cores of a single dual-core compute unit report the same core_id:
$ cat /sys/devices/system/cpu/cpu{?,??}/topology/core_id
0
0
1
1
2
2
3
0
3
[...]
Before 4.5 (and for a very long time), the kernel reported different
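A quick way to spot the regression shown above is to check whether any (physical_package_id, core_id) pair repeats across CPUs. A small Python sketch (function name is illustrative; note that on SMT systems sibling threads legitimately share a core_id, so this check only makes sense for SMT-less configurations like the Bulldozer one reported here):

```python
from collections import Counter

def duplicated_core_ids(topology):
    # topology: one (physical_package_id, core_id) pair per CPU,
    # as read from /sys/devices/system/cpu/cpuN/topology/.
    # Returns the pairs claimed by more than one CPU, which on an
    # SMT-less system indicates distinct cores sharing a core_id.
    counts = Counter(topology)
    return sorted(pair for pair, n in counts.items() if n > 1)
```

With the sysfs output quoted above, both cores of each dual-core compute unit would show up as one duplicated pair.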
On 17/08/2015 15:54, Theodore Ts'o wrote:
>
> It's cast in stone. There are too many places all over the kernel,
> especially in a huge number of file systems, which assume that the
> sector size is 512 bytes. So above the block layer, the sector size
> is always going to be 512.
Could this
Do you have local_cpus and local_cpulist attributes as well?
User-space tools such as hwloc use those for binding near I/O devices,
although I guess we could have some CPU-less NVDIMM NUMA nodes?
Brice
On 19/06/2015 20:18, Toshi Kani wrote:
> Add support of sysfs 'numa_node' to I/O-related
On 07/04/2015 21:41, Peter Zijlstra wrote:
> No, that's very much not the same. Even if it were dealing with hotplug
> it would still assume the cpu to return to the same node.
>
> But mostly people do not even bother to handle hotplug.
>
You said userspace assumes the cpu<->node relation is a
On 18/09/2014 21:33, Dave Hansen wrote:
> After this set, there are only 2 sets of core siblings, which
> is what we expect for a 2-socket system.
>
> # cat cpu*/topology/physical_package_id | sort | uniq -c
> 18 0
> 18 1
> # cat cpu*/topology/core_siblings_list | sort | uniq -c
>
On 16/09/2014 05:29, Peter Zijlstra wrote:
>
>> This also fixes sysfs because CPUs with the same 'physical_package_id'
>> in /sys/devices/system/cpu/cpu*/topology/ are not listed together
>> in the same 'core_siblings_list'. This violates a statement from
>>
On 03/10/2013 12:46, Stephan von Krawczynski wrote:
> Ok, let me re-phrase the question a bit.
> Is it really possible what you see here:
>
> processor : 0
> vendor_id : GenuineIntel
> cpu family : 6
> model : 45
> model name : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
to processor X and to its hyperthread sibling).
Signed-off-by: Brice Goglin
---
drivers/dma/dmaengine.c | 64 +++-
1 file changed, 37 insertions(+), 27 deletions(-)
Index: linux-3.11-rc3/drivers/dma/dmaengine.c
===
used,
so this won't hurt.
On the above SuperMicro machine, channels are still allocated the same.
On the Dells, there is no locality issue anymore (MEMCPY channel X goes
to processor X and to its hyperthread sibling).
Signed-off-by: Brice Goglin
---
drivers/dma/dmaengine.c | 64 +
.
On the Dells, there is no locality issue anymore (MEMCPY channel X goes
to processor X and to its hyperthread sibling).
Signed-off-by: Brice Goglin brice.gog...@inria.fr
---
drivers/dma/dmaengine.c | 64 +++-
1 file changed, 37 insertions(+), 27
-by: Brice Goglin brice.gog...@inria.fr
---
drivers/dma/dmaengine.c | 64 +++-
1 file changed, 37 insertions(+), 27 deletions(-)
Index: linux-3.11-rc3/drivers/dma/dmaengine.c
===
--- linux-3.11-rc3
ups such as the
64-byte alignment restriction on legacy DMA operations (introduced in
commit f26df1a1 as a workaround for silicon errata).
Signed-off-by: Brice Goglin
---
drivers/dma/ioat/dma_v3.c | 24 +---
1 file changed, 1 insertion(+), 23 deletions(-)
Index: b/drivers
ations (introduced in commit f26df1a1
as a workaround for silicon errata).
Signed-off-by: Brice Goglin
---
drivers/dma/ioat/dma_v3.c |5 +
1 file changed, 1 insertion(+), 4 deletions(-)
Index: b/drivers/dma/ioat/dma_v3.c
===
s now disabled by default on buggy 3.2 platforms.
Passing ioat_raid_enabled=1 force-enables it on all platforms
(previous behavior).
Passing ioat_raid_enabled=0 force-disables it everywhere.
When RAID offload is disabled, legacy operations (memcpy, etc.)
can work again without alignment restrict
in commit f26df1a1
as a workaround for silicon errata).
Signed-off-by: Brice Goglin brice.gog...@inria.fr
---
drivers/dma/ioat/dma_v3.c |5 +
1 file changed, 1 insertion(+), 4 deletions(-)
Index: b/drivers/dma/ioat/dma_v3.c
alignment restriction on legacy DMA operations (introduced in
commit f26df1a1 as a workaround for silicon errata).
Signed-off-by: Brice Goglin brice.gog...@inria.fr
---
drivers/dma/ioat/dma_v3.c | 24 +---
1 file changed, 1 insertion(+), 23 deletions(-)
Index: b/drivers/dma
force-enables it on all platforms
(previous behavior).
Passing ioat_raid_enabled=0 force-disables it everywhere.
When RAID offload is disabled, legacy operations (memcpy, etc.)
can work again without alignment restrictions.
Signed-off-by: Brice Goglin brice.gog...@inria.fr
---
drivers/dma/ioat
operations (memcpy, etc.) can
work without alignment restrictions anymore.
Signed-off-by: Brice Goglin
---
drivers/dma/ioat/dma_v3.c |9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
Index: b/drivers/dma/ioat/dma_v3.c
annels are still allocated the same.
On the Dells, there is no locality issue anymore (each MEMCPY channel
goes to both hyperthreads of a single core of the local socket).
Signed-off-by: Brice Goglin
---
drivers/dma/dmaengine.c | 64 +++-
1 file chang
are still allocated the same.
On the Dells, there is no locality issue anymore (each MEMCPY channel
goes to both hyperthreads of a single core of the local socket).
Signed-off-by: Brice Goglin brice.gog...@inria.fr
---
drivers/dma/dmaengine.c | 64 +++-
1
operations (memcpy, etc.) can
work without alignment restrictions anymore.
Signed-off-by: Brice Goglin brice.gog...@inria.fr
---
drivers/dma/ioat/dma_v3.c |9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
Index: b/drivers/dma/ioat/dma_v3.c
On 21/06/2013 07:00, H. Peter Anvin wrote:
> An awful lot of drivers, mostly DRI drivers, are still mucking with
> MTRRs directly as opposed to using ioremap_wc() or similar interfaces.
> In addition to the architecture dependency, this is really undesirable
> because MTRRs are a limited resource,
On 02/10/2012 17:19, Kirill A. Shutemov wrote:
> From: "Kirill A. Shutemov"
>
> On right access to huge zero page we alloc a new page and clear it.
>
s/right/write/ ?
Brice
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to
Andrew Morton wrote:
> What is the status of getting infiniband to use this facility?
>
> How important is this feature to KVM?
>
> To xpmem?
>
> Which other potential clients have been identified and how important is it
> to those?
>
As I said when Andrea posted the first patch series, I used
Andrew Morton wrote:
What is the status of getting infiniband to use this facility?
How important is this feature to KVM?
To xpmem?
Which other potential clients have been identified and how important it it
to those?
As I said when Andrea posted the first patch series, I used
[I/OAT]: Remove duplicate assignation in dma_skb_copy_datagram_iovec
No need to compute copy twice in the frags loop in
dma_skb_copy_datagram_iovec().
Signed-off-by: Brice Goglin <[EMAIL PROTECTED]>
---
user_dma.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/ne
Yinghai Lu wrote:
>>> Have a look at the above link. I don't get -1. I get 0 everywhere, while
>>> I should get 1 for some devices. And if I unplug/replug a device using
>>> fakephp, numa_node becomes correct (1 instead of 0). This just looks
like the code is there but things are initialized in the wrong
Linus Torvalds wrote:
> - Lots of cleanups from the x86 merge (making more and more use of common
>files), but also the big page attribute stuff is in and caused a fair
>amount of churn, and while most of the issues should have been very
>obvious and all got fixed, this is
Yinghai Lu wrote:
> On Jan 31, 2008 5:42 AM, Brice Goglin <[EMAIL PROTECTED]> wrote:
>
>> It works fine on regular machines such as dual opterons. However, I
>> noticed recently that it was wrong on some quad-opteron machines (see
>> http://marc.info/?l=linux-
Paul Mundt wrote:
On Wed, Jan 30, 2008 at 07:48:13PM -0500, Chris Snook wrote:
While pondering ways to optimize I/O and swapping on large NUMA machines, I
noticed that the numa_node field in struct device isn't actually used
anywhere. We just have a couple dozen lines of code to
Shane Huang wrote:
This patch recover Tejun's commit
4be8f906435a6af241821ab5b94b2b12cb7d57d8
because there is one MSI bug on RS690+SB600 board which will lead to
boot failure. This bug is NOT same as the one in SB700 SATA controller,
quirk_msi_intx_disable_bug does not work to SB600.
Andrea Arcangeli wrote:
This patch is last version of a basic implementation of the mmu
notifiers.
In short when the linux VM decides to free a page, it will unmap it
from the linux pagetables. However when a page is mapped not just by
the regular linux ptes, but also from the shadow