> This instructs the kernel to include the MS_NOEXEC and MS_NOSUID mount
> flags when mounting devtmpfs.
So does a mount syscall
> In-kernel separation of executable and non-executable code combined
> with a proper executability policy is a basic technique to
On Wed, Nov 21, 2012 at 10:21:43AM +, Mel Gorman wrote:
> Note: This is very heavily based on a patch from Peter Zijlstra with
> fixes from Ingo Molnar, Hugh Dickins and Johannes Weiner. That patch
> put a lot of migration logic into mm/huge_memory.c where it does
> not
David Howells wrote:
> There is the following warning message when running modules_install to
> sign .ko files:
>
> # make modules_install
> ...
> INSTALL drivers/input/touchscreen/pcap_ts.ko
> Found = in conditional, should be == at scripts/sign-file line 164.
> Found = in conditional,
On Tue, Nov 20, 2012 at 05:52:39PM +0100, Ingo Molnar wrote:
>
> * Rik van Riel wrote:
>
> > Performance measurements will show us how much of an impact it
> > makes, since I don't think we have ever done apples-to-apples
> > comparisons with just this thing toggled :)
>
> I've done a
On 2012-11-21 13:40, Thierry Reding wrote:
> On Wed, Nov 21, 2012 at 01:06:03PM +0200, Tomi Valkeinen wrote:
(sorry for bouncing back and forth with my private and my @ti addresses.
I can't find an option in thunderbird to only use one sender address,
and I always forget to change it when
On Wed, Nov 21, 2012 at 4:26 PM, James Bottomley
wrote:
> On Wed, 2012-11-21 at 16:02 +0530, vinayak holikatti wrote:
>> On Wed, Nov 14, 2012 at 2:56 AM, James Bottomley
>> wrote:
>> > On Thu, 2012-10-18 at 17:37 +0530, vinayak holikatti wrote:
>> >> I am on vacation, will look into it when I am
Added linux-mm@ to cc:. The patch can stand on its own.
> Make balance_dirty_pages start the throttling when the WRITEBACK_TEMP
> counter is high enough. This prevents us from having too many dirty
> pages on fuse, thus giving the userspace part of it a chance to write
> stuff properly.
>
>
On Wed, Nov 21, 2012 at 12:24:55PM +0800, Asias He wrote:
> On 11/20/2012 09:37 PM, Michael S. Tsirkin wrote:
> > On Tue, Nov 20, 2012 at 02:39:40PM +0800, Asias He wrote:
> >> On 11/20/2012 04:26 AM, Michael S. Tsirkin wrote:
> >>> On Mon, Nov 19, 2012 at 04:53:42PM +0800, Asias He wrote:
>
On Tue, Nov 20, 2012 at 11:39 AM, Henrik Rydberg wrote:
>> drm/i915: do not ignore eDP bpc settings from vbt
>
> As advertised, this patch breaks the Macbook Pro Retina, which seems
> unfair. The patch below is certainly not the best remedy, but it does
> work. Tested on a MacbookPro10,1.
Hi,
>
> Memory notifications are quite irrelevant to partitioning and cgroups. The
> use-case is related to user-space handling low memory. Meaning the
> functionality should be accurate with specific granularity (e.g. 1 MB) and
> time (0.25s is OK) but better to have it as simple and
On Tue, Nov 20, 2012 at 07:54:13PM -0600, Andrew Theurer wrote:
> On Tue, 2012-11-20 at 18:56 +0100, Ingo Molnar wrote:
> > * Ingo Molnar wrote:
> >
> > > ( The 4x JVM regression is still an open bug I think - I'll
> > > re-check and fix that one next, no need to re-report it,
> > > I'm on
On Tue, Nov 20, 2012 at 01:31:56PM +0100, Ingo Molnar wrote:
>
> * Ingo Molnar wrote:
>
> > * Ingo Molnar wrote:
> >
> > > numa/core profile:
> > >
> > > 95.66% perf-1201.map [.] 0x7fe4ad1c8fc7
> > > 1.70% libjvm.so [.] 0x00381581
From dd622978066d2cf29a26f246ad6c55f51a0a6272 Mon Sep 17 00:00:00 2001
From: Liu Jinsong
Date: Wed, 21 Nov 2012 15:39:47 +0800
Subject: [PATCH 2/2] Xen acpi memory hotplug hypercall
This patch implements the Xen ACPI memory hotplug hypercall, extracting
memory information and then making a hypercall to
From 630c65690c878255ce71e7c1172338ed08709273 Mon Sep 17 00:00:00 2001
From: Liu Jinsong
Date: Tue, 20 Nov 2012 21:14:37 +0800
Subject: [PATCH 1/2] Xen acpi memory hotplug driver
Xen acpi memory hotplug consists of 2 logic components:
Xen acpi memory hotplug driver and Xen hypercall.
This
On Wed, 2012-11-21 at 22:08 +1100, Nathan Williams wrote:
> On Wed, 2012-11-21 at 09:49 +0200, Artem Bityutskiy wrote:
> > You probably use 3.5? There was a bug which was fixed, try the latest
> > stable 3.5 version, the fix must be there.
> >
>
> No, I'm using 3.6. Do you know what the patch
Since the clk framework has already taken necessary locks before calling
into the arch clk ops code, no further locks are needed while setting
the parent of dsib clk. This patch removes a comment that indicated
otherwise, and yet did not take any locks.
Signed-off-by: Sivaram Nair
---
On Wed, Nov 21, 2012 at 01:06:03PM +0200, Tomi Valkeinen wrote:
> On 2012-11-21 06:23, Alex Courbot wrote:
> > Hi Grant,
> >
> > On Wednesday 21 November 2012 05:54:29 Grant Likely wrote:
> >>> With the advent of the device tree and of ARM kernels that are not
> >>> board-tied, we cannot rely on
On Tue, Nov 20, 2012 at 05:09:18PM +0100, Ingo Molnar wrote:
>
> Ok, the patch withstood a bit more testing as well. Below is a
> v2 version of it, with a couple of cleanups (no functional
> changes).
>
> Thanks,
>
> Ingo
>
> ->
> Subject: mm, numa: Turn 4K pte NUMA
There is the following warning message when running modules_install to sign
.ko files:
# make modules_install
...
INSTALL drivers/input/touchscreen/pcap_ts.ko
Found = in conditional, should be == at scripts/sign-file line 164.
Found = in conditional, should be == at scripts/sign-file line
On 11/21/2012 08:03 AM, Olof Johansson wrote:
> Hi,
>
>
> On Tue, Nov 20, 2012 at 09:59:27AM +0100, Nicolas Ferre wrote:
>> Arnd, Olof,
>>
>> Just for the record, I do not want to put pressure at a such late time in
>> the 3.7-rc process. So, I just reworked that pull-request because the
>> previous
Hi Mauro,
Can you please pull the following patches for vpif, which fix
some trivial issues.
Thanks and Regards,
--Prabhakar Lad
The following changes since commit 2c4e11b7c15af70580625657a154ea7ea70b8c76:
[media] siano: fix RC compilation (2012-11-07 11:09:08 +0100)
are available in the
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
The following changes since commit f4a75d2eb7b1e2206094b901be09adb31ba63681:
Linux 3.7-rc6 (2012-11-16 17:42:40 -0800)
are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/mzx/extcon.git for-gregkh
Alexey
On Mon, Nov 19, 2012 at 11:37:01PM -0800, David Rientjes wrote:
> On Tue, 20 Nov 2012, Ingo Molnar wrote:
>
> > No doubt numa/core should not regress with THP off or on and
> > I'll fix that.
> >
> > As a background, here's how SPECjbb gets slower on mainline
> > (v3.7-rc6) if you boot Mel's
On Wed, 2012-11-21 at 09:49 +0200, Artem Bityutskiy wrote:
> You probably use 3.5? There was a bug which was fixed, try the latest
> stable 3.5 version, the fix must be there.
>
No, I'm using 3.6. Do you know what the patch was so I can look it up?
--
To unsubscribe from this list: send the
Please pull.
The following changes since commit 99b6e1e7233073a23a20824db8c5260a723ed192:
Linus Torvalds (1):
Merge branch 'merge' of git://git.kernel.org/.../benh/powerpc
are available in the git repository at:
* David Rientjes wrote:
> Over the past 24 hours, however, throughput has significantly
> improved from a 6.3% regression to a 3.0% regression [...]
It's still a regression though, and I'd like to figure out the
root cause of that. An updated full profile from tip:master
[which has all the
On 2012-11-21 06:23, Alex Courbot wrote:
> Hi Grant,
>
> On Wednesday 21 November 2012 05:54:29 Grant Likely wrote:
>>> With the advent of the device tree and of ARM kernels that are not
>>> board-tied, we cannot rely on these board-specific hooks anymore but
>>
>> This isn't strictly true. It is
On Mon, Nov 19, 2012 at 07:41:16PM -0500, Rik van Riel wrote:
> On 11/19/2012 06:00 PM, Mel Gorman wrote:
> >On Mon, Nov 19, 2012 at 11:36:04PM +0100, Ingo Molnar wrote:
> >>
> >>* Mel Gorman wrote:
> >>
> >>>Ok.
> >>>
> >>>In response to one of your later questions, I found that I had
> >>>in
On Tue, Nov 20, 2012 at 06:00:41PM -0800, Darrick J. Wong wrote:
> Create a helper function to check if a backing device requires stable page
> writes and, if so, performs the necessary wait. Then, make it so that all
> points in the memory manager that handle making pages writable use the helper
On Wed, 2012-11-21 at 16:02 +0530, vinayak holikatti wrote:
> On Wed, Nov 14, 2012 at 2:56 AM, James Bottomley
> wrote:
> > On Thu, 2012-10-18 at 17:37 +0530, vinayak holikatti wrote:
> >> I am on vacation, will look into it when I am back to work.
> >> > This doesn't apply on 3.7-rc1. Am I missing
> +static inline void bdi_require_stable_pages(struct backing_dev_info *bdi)
> +{
> + bdi->capabilities |= BDI_CAP_STABLE_WRITES;
> +}
> +
> +static inline void bdi_unrequire_stable_pages(struct backing_dev_info *bdi)
> +{
> + bdi->capabilities &= ~BDI_CAP_STABLE_WRITES;
> +}
Any reason
On Tue, Nov 20, 2012 at 08:40:39AM -0800, ebied...@xmission.com wrote:
> Daniel Kiper writes:
>
> > Some kexec/kdump implementations (e.g. Xen PVOPS) could not use default
> > functions or require some changes in behavior of kexec/kdump generic code.
> > To cope with that problem kexec_ops struct
On Wed, Nov 21, 2012 at 02:54:35AM -0500, Christoph Hellwig wrote:
> On Tue, Nov 20, 2012 at 06:00:34PM -0800, Darrick J. Wong wrote:
> > This creates a per-backing-device counter that tracks the number of users
> > which
> > require pages to be held immutable during writeout. Eventually it will
On Tue, 20 Nov 2012, Kees Cook wrote:
> Hi James,
>
> Please pull these Yama changes for 3.8. Thanks!
>
Pulled, thanks.
--
James Morris
Hi Thomas,
On Wed, Nov 21, 2012 at 7:00 AM, Thomas Petazzoni
wrote:
> Dear Ezequiel Garcia,
>
> On Tue, 20 Nov 2012 19:39:38 -0300, Ezequiel Garcia wrote:
>
>> * Read/write support
>>
>> Yes, this implementation supports read/write access.
>
> While I think the original ubiblk that was read-only
Hugh Dickins writes:
> On Thu, 8 Nov 2012, Robert Jarzmik wrote:
>> --- a/mm/filemap.c
>> +++ b/mm/filemap.c
>> @@ -467,6 +471,7 @@ int add_to_page_cache_locked(struct page *page, struct
>> address_space *mapping,
>> } else {
>> page->mapping = NULL;
>>
Dave Chinner writes:
> We actually have an informal convention for formating filesystem
> trace events, and that is to use the device number
>
>>
>> > + ),
>> > +
>> > + TP_printk("page=%p pfn=%lu blk=%d:%d inode+ofs=%lu+%lu",
>
> ... and to prefix messages like:
>
> TP_printk("dev
On Wed, Nov 21, 2012 at 10:26:35AM +0800, Zhang Yanfei wrote:
> The notifier will be registered in crash_notifier_list when loading
> kvm-intel module. And the bitmap indicates whether we should do
> VMCLEAR operation in kdump. The bits in the bitmap are set/unset
> according to different
Andrew Morton writes:
>> +TP_STRUCT__entry(
>> +__field(struct page *, page)
>> +__field(unsigned long, i_no)
>
> May as well call this i_ino - there's little benefit in using a
> different identifier.
Agreed for patch V2.
>
>> +__field(unsigned long,
On Wed, Nov 21, 2012 at 10:23:12AM +0800, Zhang Yanfei wrote:
> This patch adds an atomic notifier list named crash_notifier_list.
> When loading kvm-intel module, a notifier will be registered in
> the list to enable VMCSs loaded on all cpus to be VMCLEAR'd if
> needed.
>
> Signed-off-by: Zhang
From: Rik van Riel
Intel has an architectural guarantee that the TLB entry causing
a page fault gets invalidated automatically. This means
we should be able to drop the local TLB invalidation.
Because of the way other areas of the page fault code work,
chances are good that all x86 CPUs do
On Wed, Nov 14, 2012 at 2:56 AM, James Bottomley
wrote:
> On Thu, 2012-10-18 at 17:37 +0530, vinayak holikatti wrote:
>> I am on vacation, will look into it when I am back to work.
>> > This doesn't apply on 3.7-rc1. Am I missing any patches in between ?
>
> OK, so it doesn't apply for me either:
>
>
From: Rik van Riel
The function ptep_set_access_flags is only ever used to upgrade
access permissions to a page. That means the only negative side
effect of not flushing remote TLBs is that other CPUs may incur
spurious page faults, if they happen to access the same address,
and still have a PTE
From: Rik van Riel
We need pte_present to return true for _PAGE_PROTNONE pages, to indicate that
the pte is associated with a page.
However, for TLB flushing purposes, we would like to know whether the pte
points to an actually accessible page. This allows us to skip remote TLB
flushes for
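The pte_present vs. "actually accessible" distinction described above can be modelled in plain userspace C. This is only an illustrative sketch: the flag values below are made up (the real x86 definitions live in arch/x86/include/asm/pgtable_types.h) and the model_ helpers are hypothetical stand-ins for the kernel's pte_present()/pte_accessible().

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative flag values only; not the real x86 bit positions. */
#define MODEL_PAGE_PRESENT  0x001ULL
#define MODEL_PAGE_PROTNONE 0x100ULL

/* "present": the pte is associated with a page, even when the page
 * is temporarily inaccessible (_PAGE_PROTNONE-style bit set). */
static int model_pte_present(uint64_t pte)
{
	return (pte & (MODEL_PAGE_PRESENT | MODEL_PAGE_PROTNONE)) != 0;
}

/* "accessible": the CPU could actually hold a TLB entry for this
 * pte, so remote TLB flushes may be needed when it changes. */
static int model_pte_accessible(uint64_t pte)
{
	return (pte & MODEL_PAGE_PRESENT) != 0;
}
```

A PROT_NONE-style pte is thus present but not accessible, which is exactly the case where a remote flush can be skipped.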
From: Ingo Molnar
Reuse the NUMA code's 'modified page protections' count that
change_protection() computes and skip the TLB flush if there's
no changes to a range that sys_mprotect() modifies.
Given that mprotect() already optimizes the same-flags case
I expected this optimization to
tldr: Benchmarkers, only test patches 1-37. If there is instability,
it may be due to the native THP migration patch; in that case test with
1-36. Please report any results or problems you find.
In terms of merging, I would also only consider patches 1-37.
git tree:
From: Peter Zijlstra
This will be used for three kinds of purposes:
- to optimize mprotect()
- to speed up working set scanning for working set areas that
have not been touched
- to more accurately scan per real working set
No change in functionality from this patch.
Suggested-by:
The pgmigrate_success and pgmigrate_fail vmstat counters tell the user
about migration activity but not the type or the reason. This patch adds
a tracepoint to identify the type of page migration and why the page is
being migrated.
Signed-off-by: Mel Gorman
Reviewed-by: Rik van Riel
---
From: Andrea Arcangeli
The objective of _PAGE_NUMA is to be able to trigger NUMA hinting page
faults to identify the per NUMA node working set of the thread at
runtime.
Arming the NUMA hinting page fault mechanism works similarly to
setting up a mprotect(PROT_NONE) virtual range: the present
From: Andrea Arcangeli
When we split a transparent hugepage, transfer the NUMA type from the
pmd to the pte if needed.
Signed-off-by: Andrea Arcangeli
Signed-off-by: Mel Gorman
Reviewed-by: Rik van Riel
---
mm/huge_memory.c |2 ++
1 file changed, 2 insertions(+)
diff --git
Note: This patch started as "mm/mpol: Create special PROT_NONE
infrastructure" and preserves the basic idea but steals *very*
heavily from "autonuma: numa hinting page faults entry points" for
the actual fault handlers without the migration parts. The end
result is
From: Peter Zijlstra
Make MPOL_LOCAL a real and exposed policy such that applications that
relied on the previous default behaviour can explicitly request it.
Requested-by: Christoph Lameter
Reviewed-by: Rik van Riel
Cc: Lee Schermerhorn
Cc: Andrew Morton
Cc: Linus Torvalds
Signed-off-by:
Note: Based on "mm/mpol: Use special PROT_NONE to migrate pages" but
sufficiently different that the signed-off-bys were dropped
Combine our previous _PAGE_NUMA, mpol_misplaced and migrate_misplaced_page()
pieces into an effective migrate on fault scheme.
Note that (on x86) we rely on
From: Peter Zijlstra
Note: This was originally based on Peter's patch "mm/migrate: Introduce
migrate_misplaced_page()" but borrows extremely heavily from Andrea's
"autonuma: memory follows CPU algorithm and task/mm_autonuma stats
collection". The end result is barely
From: Lee Schermerhorn
This patch provides a new function to test whether a page resides
on a node that is appropriate for the mempolicy for the vma and
address where the page is supposed to be mapped. This involves
looking up the node where the page belongs. So, the function
returns that node
From: Andrea Arcangeli
Introduce FOLL_NUMA to tell follow_page to check
pte/pmd_numa. get_user_pages must use FOLL_NUMA, and it's safe to do
so because it always invokes handle_mm_fault and retries the
follow_page later.
KVM secondary MMU page faults will trigger the NUMA hinting page
faults
The compact_pages_moved and compact_pagemigrate_failed events are
convenient for determining if compaction is active and to what
degree migration is succeeding but it's at the wrong level. Other
users of migration may also want to know if migration is working
properly and this will be particularly
From: Rik van Riel
If ptep_clear_flush() is called to clear a page table entry that is
accessible anyway by the CPU, eg. a _PAGE_PROTNONE page table entry,
there is no need to flush the TLB on remote CPUs.
Signed-off-by: Rik van Riel
Signed-off-by: Peter Zijlstra
Cc: Linus Torvalds
Cc:
From: Lee Schermerhorn
NOTE: I have not yet addressed my own review feedback of this patch. At
this point I'm trying to construct a baseline tree and will apply
my own review feedback later and then fold it in.
This patch augments the MPOL_MF_LAZY feature by adding a "NOOP"
From: Lee Schermerhorn
This patch converts change_prot_numa() to use change_protection(). As
pte_numa and friends check the PTE bits directly it is necessary for
change_protection() to use pmd_mknuma(). Hence the required
modifications to change_protection() are a little clumsy but the
end
From: Peter Zijlstra
NOTE: This patch is based on "sched, numa, mm: Add fault driven
placement and migration policy" but as it throws away all the policy
to just leave a basic foundation I had to drop the signed-offs-by.
This patch creates a bare-bones method for setting PTEs
From: Peter Zijlstra
Previously, to probe the working set of a task, we'd use
a very simple and crude method: mark all of its address
space PROT_NONE.
That method has various (obvious) disadvantages:
- it samples the working set at dissimilar rates,
giving some tasks a sampling quality
From: Peter Zijlstra
Add a 1 second delay before starting to scan the working set of
a task and starting to balance it amongst nodes.
[ note that before the constant per task WSS sampling rate patch
the initial scan would happen much later still, in effect that
patch caused this regression.
It is tricky to quantify the basic cost of automatic NUMA placement in a
meaningful manner. This patch adds some vmstats that can be used as part
of a basic costing model.
u = basic unit = sizeof(void *)
Ca = cost of struct page access = sizeof(struct page) / u
Cpte = Cost PTE access = Ca
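The unit arithmetic above can be written out as a toy calculation. The struct below is a stand-in with seven pointer-sized fields purely for illustration; the real struct page layout is configuration-dependent.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for struct page: seven word-sized fields, for illustration. */
struct page_model {
	void *words[7];
};

static size_t cost_unit(void)        /* u = sizeof(void *) */
{
	return sizeof(void *);
}

static size_t cost_page_access(void) /* Ca = sizeof(struct page) / u */
{
	return sizeof(struct page_model) / cost_unit();
}

static size_t cost_pte_access(void)  /* Cpte = Ca */
{
	return cost_page_access();
}
```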
This is the simplest possible policy that still does something of note.
When a pte_numa is faulted, it is moved immediately. Any replacement
policy must at least do better than this and in all likelihood this
policy regresses normal workloads.
Signed-off-by: Mel Gorman
Acked-by: Rik van Riel
NOTE: This is very heavily based on similar logic in autonuma. It should
be signed off by Andrea but because there was no standalone
patch and it's sufficiently different from what he did that
the signed-off is omitted. Will be added back if requested.
If a large number of
If there are a large number of NUMA hinting faults and all of them
are resulting in migrations it may indicate that memory is just
bouncing uselessly around. NUMA balancing cost is likely exceeding
any benefit from locality. Rate limit the PTE updates if the node
is migration rate-limited. As
While it is desirable that all threads in a process run on its home
node, this is not always possible or necessary. There may be more
threads than exist within the node or the node might be over-subscribed
with unrelated processes.
This can cause a situation whereby a page gets migrated off its home
To say that the PMD handling code was incorrectly transferred from autonuma
is an understatement. The intention was to handle a PMD's worth of pages
in the same fault and effectively batch the taking of the PTL and page
migration. The copied version instead has the impact of clearing a number
of
By accounting against the present PTEs, scanning speed reflects the
actual present (mapped) memory.
Suggested-by: Ingo Molnar
Signed-off-by: Peter Zijlstra
Cc: Linus Torvalds
Cc: Andrew Morton
Cc: Peter Zijlstra
Cc: Andrea Arcangeli
Cc: Rik van Riel
Cc: Mel Gorman
Signed-off-by: Ingo
Currently the rate of scanning for an address space is controlled
by the individual tasks. The next scan is simply determined by
2*p->numa_scan_period.
The 2*p->numa_scan_period is arbitrary and never changes. At this point
there is still no proper policy that decides if a task or process is
From: Peter Zijlstra
It's a bit awkward, but it was the least painful means of modifying the
queue selection. Used in the next patch to conditionally use a queue.
Signed-off-by: Peter Zijlstra
Cc: Paul Turner
Cc: Lee Schermerhorn
Cc: Christoph Lameter
Cc: Rik van Riel
Cc: Andrew Morton
Cc:
NOTE: Entirely based on "sched, numa, mm: Implement home-node awareness" but
only a subset of it. There was stuff in there that was disabled
by default and generally did slightly more than what I felt was
necessary at this stage. In particular the random queue selection
NOTE: This is heavily based on "autonuma: CPU follows memory algorithm"
and "autonuma: mm_autonuma and task_autonuma data structures"
At the most basic level, any placement policy is going to make some
sort of smart decision based on per-mm and per-task statistics. This
patch simply
Rename the policy to reflect that while allocations and migrations are
based on reference, the home node is taken into account for
migration decisions.
Signed-off-by: Mel Gorman
---
include/uapi/linux/mempolicy.h |9 -
mm/mempolicy.c |9 ++---
2 files
Note: This is very heavily based on a patch from Peter Zijlstra with
fixes from Ingo Molnar, Hugh Dickins and Johannes Weiner. That patch
put a lot of migration logic into mm/huge_memory.c where it does
not belong. This version tries to share some of the migration
From: Peter Zijlstra
Introduce the home-node concept for tasks. In order to keep memory
locality we need to have a something to stay local to, we define the
home-node of a task as the node we prefer to allocate memory from and
prefer to execute on.
These are no hard guarantees, merely soft
From: Hillf Danton
A node is selected on behalf of the given task, but there is no reason to punish
the currently running tasks on other nodes. That punishment may be a benefit,
who knows. Better if they are not treated in a random way.
Signed-off-by: Hillf Danton
---
kernel/sched/fair.c | 15 ++-
---
kernel/sched/fair.c | 112 +++
1 file changed, 15 insertions(+), 97 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5cc5b60..fd53f17 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -877,118 +877,36 @@
The implementation of CPU follows memory was intended to reflect
the considerations made by autonuma on the basis that it had the
best performance figures at the time of writing. However, a major
criticism was the use of kernel threads and the impact of the
cost of the load balancer paths. As a
This patch introduces a last_nid field to the page struct. This is used
to build a two-stage filter in the next patch that is aimed at
mitigating a problem whereby pages migrate to the wrong node when
referenced by a process that was running off its home node.
Signed-off-by: Mel Gorman
---
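A hedged sketch of how a last_nid-based two-stage filter can work (the helper below is hypothetical; the actual filter lands in the NUMA fault path in the follow-up patch): a page is only approved for migration when the same node faults on it twice in a row.

```c
#include <assert.h>

/* Hypothetical two-stage filter: record the node of each NUMA
 * hinting fault in last_nid, and only approve migration when the
 * same node faults on the page a second consecutive time. */
static int two_stage_filter(int *last_nid, int this_nid)
{
	int prev = *last_nid;

	*last_nid = this_nid;    /* stage 1: remember the faulting node */
	return prev == this_nid; /* stage 2: migrate only on a repeat */
}
```

A single off-node reference therefore no longer drags the page away; it takes repeated faults from the same node.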
NOTE: This is heavily based on "autonuma: CPU follows memory algorithm"
and "autonuma: mm_autonuma and task_autonuma data structures"
with bits taken but worked within the scheduler hooks and home
node mechanism as defined by schednuma.
This patch adds per-mm and per-task
From: Andrea Arcangeli
This defines the per-node data used by Migrate On Fault in order to
rate limit the migration. The rate limiting is applied independently
to each destination node.
Signed-off-by: Andrea Arcangeli
Signed-off-by: Mel Gorman
---
include/linux/mmzone.h | 13 +
The use of MPOL_NOOP and MPOL_MF_LAZY to allow an application to
explicitly request lazy migration is a good idea but the actual
API has not been well reviewed and once released we have to support it.
For now this patch prevents an application from using the services. This
will need to be revisited.
From: Lee Schermerhorn
NOTE: Once again there is a lot of patch stealing and the end result
is sufficiently different that I had to drop the signed-offs.
Will re-add if the original authors are ok with that.
This patch adds another mbind() flag to request "lazy migration". The
From: Andrea Arcangeli
Implement pte_numa and pmd_numa.
We must atomically set the numa bit and clear the present bit to
define a pte_numa or pmd_numa.
Once a pte or pmd has been set as pte_numa or pmd_numa, the next time
a thread touches a virtual address in the corresponding virtual range,
a
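The bit manipulation described above (set the NUMA bit, clear the present bit) can be modelled as follows. Bit positions are illustrative, not the arch-specific _PAGE_NUMA choice, and in the kernel the new value is written back as a single atomic pte store.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative bit positions only. */
#define M_PAGE_PRESENT 0x001ULL
#define M_PAGE_NUMA    0x200ULL

/* Build the pte_numa value: clear present and set the NUMA bit in
 * one word, so it can be stored back with a single write. */
static uint64_t model_pte_mknuma(uint64_t pte)
{
	return (pte & ~M_PAGE_PRESENT) | M_PAGE_NUMA;
}

/* A pte is pte_numa when the NUMA bit is set and present is clear. */
static int model_pte_numa(uint64_t pte)
{
	return (pte & (M_PAGE_NUMA | M_PAGE_PRESENT)) == M_PAGE_NUMA;
}
```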
Compaction already has tracepoints to count scanned and isolated pages
but it requires that ftrace be enabled and if that information has to be
written to disk then it can be disruptive. This patch adds vmstat counters
for compaction called compact_migrate_scanned, compact_free_scanned and
From: Rik van Riel
The function ptep_set_access_flags() is only ever invoked to set access
flags or add write permission on a PTE. The write bit is only ever set
together with the dirty bit.
Because we only ever upgrade a PTE, it is safe to skip flushing entries on
remote TLBs. The worst that
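The "we only ever upgrade" property above can be stated as a simple predicate (a hypothetical helper, for illustration only): an upgrade never removes a permission bit, so every bit set in the old flags must still be set in the new flags.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical predicate: a flags change is a pure upgrade when the
 * old flag set is a subset of the new one (no bit is ever cleared). */
static int is_permission_upgrade(uint64_t old_flags, uint64_t new_flags)
{
	return (old_flags & ~new_flags) == 0;
}
```

When this holds, a stale remote TLB entry can at worst cause a spurious fault, never a missed permission restriction.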
On 11/21/2012 10:59 AM, Linus Walleij wrote:
On Mon, Nov 19, 2012 at 10:39 AM, Sebastian Hesselbarth
wrote:
This patch relies on a patch set for mvebu pinctrl taken through
Linus' pinctrl branch. As there is no other platform than Dove
involved, I suggest taking it through Jason's tree to
On Tue, Nov 20, 2012 at 8:59 PM, richard -rw- weinberger
wrote:
> On Tue, Nov 20, 2012 at 11:39 PM, Ezequiel Garcia
> wrote:
>> Block device emulation on top of ubi volumes with read/write support.
>> Block devices get automatically created for each ubi volume present.
>>
>> Each ubiblock is
> Use the value obtained from the function instead of -EINVAL.
>
> Signed-off-by: Sachin Kamat
Acked-by: MyungJoo Ham
Both patches applied as they are obvious bugfixes.
I'll send pull request with other bugfixes within days.
> ---
> drivers/devfreq/devfreq.c |2 +-
> 1 files changed, 1
On 20/11/2012 15:25, Dan Carpenter wrote:
> On Tue, Nov 20, 2012 at 02:26:42PM +0100, MAACHE Mehdi wrote:
>> This is a patch to the r8180_wx.c file that fixes up some warnings and
>> errors found by the checkpatch.pl tool
>> - WARNING: line over 80 characters
>> - ERROR: "(foo*)" should be
opp_get_notifier() uses find_device_opp(), which requires
rcu_read_lock to be held. In order to keep the notifier header
valid, we have added rcu_read_lock().
Reported-by: Kees Cook
Signed-off-by: MyungJoo Ham
---
drivers/devfreq/devfreq.c | 26 --
1 files changed, 20
On Wed, 21 Nov 2012 10:06:16 +, Grant Likely wrote:
> On Wed, Nov 21, 2012 at 9:53 AM, Arnd Bergmann wrote:
> > On Tuesday 20 November 2012, Stephen Warren wrote:
> >> However, this results in iterating over table twice; the second time
> >> inside of_match_node(). The implementation of
On Mon, Nov 19, 2012 at 11:41:38PM -0800, Darrick J. Wong wrote:
> Hi,
>
> Fsyncing is tricky business, so factor out the bits of the xfs_file_fsync
> function that can be used from the I/O post-processing path.
Why would we need to skip the filemap_write_and_wait_range call here?
If we're doing
On Mon, Nov 19, 2012 at 11:41:23PM -0800, Darrick J. Wong wrote:
> Provide VFS helpers for handling O_SYNC AIO DIO writes. Filesystems wanting
> to
> use the helpers have to pass DIO_SYNC_WRITES to __blockdev_direct_IO. If the
> filesystem doesn't provide its own direct IO end_io handler, the
On Wed, Nov 21, 2012 at 12:00 PM, Jaegeuk Hanse wrote:
>
> On 11/21/2012 05:58 PM, metin d wrote:
>
> Hi Fengguang,
>
> I ran tests and attached the results. The line below, I guess, shows the data-1
> page caches.
>
> 0x0008006c 658405125718
>
On Wed, Nov 21, 2012 at 9:53 AM, Arnd Bergmann wrote:
> On Tuesday 20 November 2012, Stephen Warren wrote:
>> However, this results in iterating over table twice; the second time
>> inside of_match_node(). The implementation of for_each_matching_node()
>> already found the match, so this is
On 21 November 2012 14:12, Andy Shevchenko
wrote:
> Currently the direction value comes from the generic slave configuration
> structure and explicitly as a preparation function parameter. The first one is
> kinda obsoleted. Thus, we have to store the value passed to the preparation
> function