an independent function to calculate the page
> address from the thread_info one.
>
> Suggested-by: Sungjinn Chung
> Signed-off-by: Jungseok Lee
> Cc: KOSAKI Motohiro
> Cc: linux-arm-ker...@lists.infradead.org
> ---
> kernel/fork.c | 7 ++-
> 1 file changed, 6 insertions(+
the original patch. I would like to see the mmap man page adjusted
> to make note of this behavior as well.
This is just a bug fix and I don't think it carries a large risk. But
as a precaution, we might revert it immediately
if this patch causes some regression, even if the regression comes from a
broken application
On Fri, Dec 12, 2014 at 11:05 AM, KOSAKI Motohiro
wrote:
> On Fri, Dec 12, 2014 at 5:19 AM, Lai Jiangshan wrote:
>> A pwq bound to a specified node might last a long time, or even forever, after
>> the node goes offline, especially when this pwq has some back-to-back work
>>
On Fri, Dec 12, 2014 at 5:19 AM, Lai Jiangshan wrote:
> A pwq bound to a specified node might last a long time, or even forever, after
> the node goes offline, especially when this pwq has some back-to-back work
> items which requeue themselves and prevent the pwq from quitting.
>
> These kinds of pwqs will c
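For illustration, a minimal sketch of the back-to-back pattern described above
(kernel-module context; my_wq, my_work and my_work_fn are made-up names for this
sketch, not taken from the patch):

#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *my_wq;
static struct work_struct my_work;

/* A work item that requeues itself on every execution.  While it keeps
 * requeueing, the pwq servicing it can never become idle and be released,
 * which is the "back-to-back work items" situation described above. */
static void my_work_fn(struct work_struct *work)
{
	/* ... do one small unit of work ... */
	queue_work(my_wq, work);	/* immediately requeue itself */
}

static int __init my_init(void)
{
	my_wq = alloc_workqueue("my_wq", WQ_UNBOUND, 0);
	if (!my_wq)
		return -ENOMEM;
	INIT_WORK(&my_work, my_work_fn);
	queue_work(my_wq, &my_work);
	return 0;
}
module_init(my_init);
MODULE_LICENSE("GPL");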
> I agree that reporting the amount of shared pages in that historical fashion
> might not be interesting for userspace tools resorting to sysinfo(2),
> nowadays.
>
> OTOH, our documentation implies we do return shared memory there, and FWIW,
> considering the other places we do export the "share
> -Original Message-
> From: Rafael Aquini [mailto:aqu...@redhat.com]
> Sent: Wednesday, June 25, 2014 4:16 PM
> To: Motohiro Kosaki
> Cc: linux...@kvack.org; Andrew Morton; Rik van Riel; Mel Gorman; Johannes
> Weiner; Motohiro Kosaki JP; linux-
> ker...@vger.ker
> -Original Message-
> From: motohiro.kos...@us.fujitsu.com [mailto:motohiro.kos...@us.fujitsu.com]
> Sent: Wednesday, June 25, 2014 3:41 PM
> To: Rafael Aquini; linux...@kvack.org
> Cc: Andrew Morton; Rik van Riel; Mel Gorman; Johannes Weiner; Motohiro Kosaki
>
> -Original Message-
> From: Rafael Aquini [mailto:aqu...@redhat.com]
> Sent: Wednesday, June 25, 2014 2:40 PM
> To: linux...@kvack.org
> Cc: Andrew Morton; Rik van Riel; Mel Gorman; Johannes Weiner; Motohiro Kosaki
> JP; linux-kernel@vger.kernel.org
> Subjec
> -Original Message-
> From: Minchan Kim [mailto:minc...@kernel.org]
> Sent: Monday, June 23, 2014 2:16 AM
> To: Johannes Weiner
> Cc: Andrew Morton; Mel Gorman; Rik van Riel; Michal Hocko;
> linux...@kvack.org; linux-kernel@vger.kernel.org; Motohiro Kosaki JP
> Su
> -Original Message-
> From: Tetsuo Handa [mailto:penguin-ker...@i-love.sakura.ne.jp]
> Sent: Tuesday, May 20, 2014 11:58 PM
> To: da...@fromorbit.com; r...@redhat.com
> Cc: Motohiro Kosaki JP; fengguang...@intel.com;
> kamezawa.hir...@jp.fujitsu.com; a...@linux-f
> -Original Message-
> From: Rik van Riel [mailto:r...@redhat.com]
> Sent: Tuesday, April 29, 2014 3:19 PM
> To: linux-kernel@vger.kernel.org
> Cc: linux...@kvack.org; sand...@redhat.com; a...@linux-foundation.org;
> jwei...@redhat.com; Motohiro Kosaki JP;
> mho.
r overflow.
> >
> > Signed-off-by: Manfred Spraul
>
> Acked-by: Davidlohr Bueso
Acked-by: KOSAKI Motohiro
> > find_vma_intersection does not work as intended if addr+size overflows.
> > The patch adds a manual check before the call to find_vma_intersection.
> >
> > Signed-off-by: Manfred Spraul
>
> Acked-by: Davidlohr Bueso
Acked-by: KOSAKI Motohiro
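For reference, a sketch of the kind of check being acked here (illustrative
only, not the actual hunk from the patch; the helper name check_range is made
up):

#include <linux/mm.h>

/* Reject a range whose end wraps around before consulting
 * find_vma_intersection(), which cannot give a meaningful answer for a
 * wrapped [addr, addr + size) interval. */
static int check_range(struct mm_struct *mm, unsigned long addr,
		       unsigned long size)
{
	if (addr + size < addr)		/* addr + size overflowed */
		return -EINVAL;

	if (find_vma_intersection(mm, addr, addr + size))
		return -EINVAL;		/* range is already mapped */

	return 0;
}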
> -Original Message-
> From: Manfred Spraul [mailto:manf...@colorfullife.com]
> Sent: Monday, April 21, 2014 10:27 AM
> To: Davidlohr Bueso; Michael Kerrisk; Martin Schwidefsky
> Cc: LKML; Andrew Morton; KAMEZAWA Hiroyuki; Motohiro Kosaki JP;
> gthe...@google.com; a
> > The patch adds detection of overflows.
> >
> > Signed-off-by: Manfred Spraul
>
> Acked-by: Davidlohr Bueso
Acked-by: KOSAKI Motohiro
> -Original Message-
> From: Manfred Spraul [mailto:manf...@colorfullife.com]
> Sent: Friday, April 18, 2014 2:19 AM
> To: Andrew Morton; Davidlohr Bueso
> Cc: LKML; KAMEZAWA Hiroyuki; Motohiro Kosaki JP; gthe...@google.com;
> as...@hp.com; linux...@kvack.o
> -Original Message-
> From: Masami Hiramatsu [mailto:masami.hiramatsu...@hitachi.com]
> Sent: Wednesday, April 16, 2014 8:13 PM
> To: linux-kernel@vger.kernel.org; Vivek Goyal; Eric Biederman
> Cc: Andrew Morton; Yoshihiro YUNOMAE; Satoru MORIYA; Motohiro Kosaki; Tom
>> This changes the hwpoison and migration tag numbers. Maybe ok, maybe not.
>
> Though depending on config can't these tag numbers change anyway?
I don't think distros disable any of these.
>> I'd suggest using a smaller number than hwpoison.
>> (That's why hwpoison uses a smaller number than migration.)
On Fri, Apr 4, 2014 at 1:00 AM, Davidlohr Bueso wrote:
> On Thu, 2014-04-03 at 19:39 -0400, KOSAKI Motohiro wrote:
>> On Thu, Apr 3, 2014 at 3:50 PM, Davidlohr Bueso wrote:
>> > On Thu, 2014-04-03 at 21:02 +0200, Manfred Spraul wrote:
>> >> Hi Davidlohr,
>
> This change allows Linux to treat shm just as regular anonymous memory.
> One important difference between them, though, is handling out-of-memory
> conditions: as opposed to regular anon memory, the OOM killer will not
> kill processes that are hogging memory through shm, allowing users to
> pot
On Thu, Apr 3, 2014 at 3:50 PM, Davidlohr Bueso wrote:
> On Thu, 2014-04-03 at 21:02 +0200, Manfred Spraul wrote:
>> Hi Davidlohr,
>>
>> On 04/03/2014 02:20 AM, Davidlohr Bueso wrote:
>> > The default size for shmmax is, and always has been, 32Mb.
>> > Today, in the XXI century, it seems that this
On Wed, Apr 2, 2014 at 12:09 PM, Dave Hansen wrote:
> On 04/02/2014 01:56 AM, Li Zhong wrote:
>> I noticed the phys_index and end_phys_index under
>> /sys/devices/system/memory/memoryXXX/ have the same value, e.g.
>> (for the test machine, one memory block has 8 sections, that is
>> sections_per_
shm_ctlmax)
> + if (ns->shm_ctlmax &&
> + (size < SHMMIN || size > ns->shm_ctlmax))
> return -EINVAL;
>
> - if (ns->shm_tot + numpages > ns->shm_ctlall)
> + if (ns->shm_ctlall &&
> + n
>> > Ah-hah, that's interesting info.
>> >
>> > Let's make the default 64GB?
>>
>> 64GB was effectively infinity at that time, but it is no longer near infinity today. I'd like
>> a very large number, or one proportional to total memory.
>
> So I still like 0 for unlimited. Nice, clean and much easier to look at
> than ULONG_
On Tue, Apr 1, 2014 at 5:48 PM, Andrew Morton wrote:
> On Tue, 1 Apr 2014 17:41:54 -0400 KOSAKI Motohiro
> wrote:
>
>> >> > Hmmm so 0 won't really work because it could be weirdly used to disable
>> >> > shm altogether... we cannot go to some negat
>> > Hmmm so 0 won't really work because it could be weirdly used to disable
>> > shm altogether... we cannot go to some negative value either since we're
>> > dealing with unsigned, and cutting the range in half could also hurt
>> > users that set the limit above that. So I was thinking of simply
On Tue, Apr 1, 2014 at 5:01 PM, Davidlohr Bueso wrote:
> On Tue, 2014-04-01 at 15:51 -0400, KOSAKI Motohiro wrote:
>> >> So, I personally like 0 byte per default.
>> >
>> > If by this you mean 0 bytes == unlimited, then I agree. It's less harsh
>>
>> Our middleware engineers have been complaining about this sysctl limit.
>> System administrators need to calculate the required sysctl value by summing
>> the needs of all planned middleware, and middleware providers need to write "please
>> calculate the sysctl param by ..." in their installation manuals.
>
>
On Tue, Apr 1, 2014 at 2:31 PM, Davidlohr Bueso wrote:
> On Tue, 2014-04-01 at 14:10 -0400, KOSAKI Motohiro wrote:
>> On Tue, Apr 1, 2014 at 1:01 PM, Davidlohr Bueso wrote:
>> > On Mon, 2014-03-31 at 17:05 -0700, Andrew Morton wrote:
>> >> On Mon, 31 Mar 2014
On Tue, Apr 1, 2014 at 1:01 PM, Davidlohr Bueso wrote:
> On Mon, 2014-03-31 at 17:05 -0700, Andrew Morton wrote:
>> On Mon, 31 Mar 2014 16:25:32 -0700 Davidlohr Bueso wrote:
>>
>> > On Mon, 2014-03-31 at 16:13 -0700, Andrew Morton wrote:
>> > > On Mon, 31 Mar 2014 15:59:33 -0700 Davidlohr Bueso
On Fri, Mar 21, 2014 at 2:17 PM, John Stultz wrote:
> One issue that some potential users were concerned about was that
> they wanted to ensure that all the pages from one volatile range
> were purged before we purge pages from a different volatile range.
> This would prevent the case where they
ner
> Cc: Robert Love
> Cc: Mel Gorman
> Cc: Hugh Dickins
> Cc: Dave Hansen
> Cc: Rik van Riel
> Cc: Dmitry Adamushko
> Cc: Neil Brown
> Cc: Andrea Arcangeli
> Cc: Mike Hommey
> Cc: Taras Glek
> Cc: Jan Kara
> Cc: KOSAKI Motohiro
> Cc: Michel
On Sun, Mar 23, 2014 at 1:26 PM, John Stultz wrote:
> On Sun, Mar 23, 2014 at 10:50 AM, KOSAKI Motohiro
> wrote:
>>> +/**
>>> + * vrange_check_purged_pte - Checks ptes for purged pages
>>> + *
>>> + * Iterates over the ptes in the pmd checkin
> +/**
> + * vrange_check_purged_pte - Checks ptes for purged pages
> + *
> + * Iterates over the ptes in the pmd checking if they have
> + * purged swap entries.
> + *
> + * Sets the vrange_walker.pages_purged to 1 if any were purged.
> + */
> +static int vrange_check_purged_pte(pmd_t *pmd, unsign
Cc: Hugh Dickins
> Cc: Dave Hansen
> Cc: Rik van Riel
> Cc: Dmitry Adamushko
> Cc: Neil Brown
> Cc: Andrea Arcangeli
> Cc: Mike Hommey
> Cc: Taras Glek
> Cc: Jan Kara
> Cc: KOSAKI Motohiro
> Cc: Michel Lespinasse
> Cc: Minchan Kim
> Cc: linu
EFAULT Purged pointer is invalid
>
> This is a simplified implementation which reuses some of the logic
> from Minchan's earlier efforts. So credit to Minchan for his work.
>
> Cc: Andrew Morton
> Cc: Android Kernel Team
> Cc: Johannes Weiner
> Cc: Robert Love
> Mike,
>
> There are several problem domains where you protect critical sections by
> assigning multiple threads to a single CPU and using priorities
> and SCHED_FIFO to ensure data integrity.
>
> In this kind of design you don't make many syscalls. The ones you do make,
> have to be clearly un
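As a userspace illustration of that design (sketch only; pin_and_make_fifo is a
made-up helper, and SCHED_FIFO needs root or CAP_SYS_NICE):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

/* Pin the calling thread to one CPU and switch it to SCHED_FIFO, so that
 * threads sharing that CPU are serialized by priority instead of locks. */
static void pin_and_make_fifo(int cpu, int prio)
{
	cpu_set_t set;
	struct sched_param sp = { .sched_priority = prio };
	int err;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
	if (err)
		fprintf(stderr, "affinity: %s\n", strerror(err));

	err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
	if (err)
		fprintf(stderr, "SCHED_FIFO: %s\n", strerror(err));
}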
>> + /*
>> + * Ensure that the page's data was copied from old one by
>> + * aio_migratepage().
>> + */
>> + smp_rmb();
>> +
>
> smp_read_barrier_depends() is better.
>
> "One could place an A smp_rmb() primitive between the pointer fet
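For context, a kernel-style sketch of the pattern being discussed (illustrative
only; struct item and shared_ptr are made up): a pointer fetch followed by a
dependent dereference only needs a data-dependency barrier, not a full read
barrier:

#include <linux/compiler.h>

struct item {
	int payload;
};

struct item *shared_ptr;			/* published by the writer side */

static int reader(void)
{
	struct item *p = ACCESS_ONCE(shared_ptr);	/* pointer fetch */

	if (!p)
		return -1;

	smp_read_barrier_depends();	/* order the fetch vs. the dependent read */
	return p->payload;
}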
On Mon, Feb 24, 2014 at 11:13 AM, Konrad Rzeszutek Wilk
wrote:
> On Sat, Feb 22, 2014 at 11:53:31PM -0800, Steven Noonan wrote:
>> On Fri, Feb 21, 2014 at 12:07 PM, Konrad Rzeszutek Wilk
>> wrote:
>> > On Thu, Feb 20, 2014 at 12:44:15PM -0800, Steven Noonan wrote:
>> >> On Wed, Feb 19, 2014 at 1:
> -Original Message-
> From: Andrew Morton [mailto:a...@linux-foundation.org]
> Sent: Tuesday, February 11, 2014 5:51 AM
> To: Johannes Weiner
> Cc: Rik van Riel; Dave Hansen; Michal Hocko; Motohiro Kosaki JP; KAMEZAWA
> Hiroyuki; linux...@kvack.org; linux-kern
From: KOSAKI Motohiro
Using spin_{un}lock_irq is dangerous if the caller has disabled interrupts.
During aio buffer migration, we can hit the
following call stack.
aio_migratepage [disable interrupt]
migrate_page_copy
clear_page_dirty_for_io
set_page_dirty
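For reference, a sketch contrasting the two locking styles involved
(illustrative only; my_lock and the helpers are made up, not code from the
patch):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);

/* Safe from any context: saves and restores the caller's IRQ state. */
static void helper_safe(void)
{
	unsigned long flags;

	spin_lock_irqsave(&my_lock, flags);
	/* ... touch data also used from interrupt context ... */
	spin_unlock_irqrestore(&my_lock, flags);
}

/* Unsafe under an irqs-off caller such as aio_migratepage above:
 * spin_unlock_irq() unconditionally re-enables interrupts. */
static void helper_unsafe(void)
{
	spin_lock_irq(&my_lock);
	/* ... */
	spin_unlock_irq(&my_lock);
}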
> Indeed, good catch. Do we need the same treatment for
> __set_page_dirty_buffers() that can be called by way of
> clear_page_dirty_for_io()?
Indeed. I posted a patch that fixes __set_page_dirty() too. Please see
Subject: [PATCH] __set_page_dirty uses spin_lock_irqsave instead of
spin_lock_irq
From: KOSAKI Motohiro
Using spin_{un}lock_irq is dangerous if the caller has disabled interrupts.
spin_lock_irqsave is a safer alternative. Luckily, there is currently no
caller with such usage, but it would be nice to fix.
Reported-by: David Rientjes <rient...@google.com>
Signed-off-by: KOSAKI Motoh
On Tue, Feb 4, 2014 at 11:58 AM, wrote:
> From: KOSAKI Motohiro
>
> Using spin_{un}lock_irq is dangerous if the caller has disabled interrupts.
> spin_lock_irqsave is a safer alternative. Luckily, there is currently no
> caller with such usage, but it would be nice to fix.
>
> Report
From: KOSAKI Motohiro
During an aio stress test, we observed the following lockdep warning.
This means AIO+numa_balancing is currently deadlockable.
The problem is that aio_migratepage disables interrupts, but
__set_page_dirty_nobuffers
unintentionally enables them again.
Generally, all helper function
Hi Minchan,
On Thu, Jan 2, 2014 at 2:12 AM, Minchan Kim wrote:
> Hey all,
>
> Happy New Year!
>
> I know it's bad timing to send this unfamiliar, large patchset for
> review, but I hope there are some people with fresh brains in the new year
> all over the world. :)
> And the most important thing is that bef
wn
> Cc: "Rafael J. Wysocki"
> Cc: Linn Crosetto
> Cc: Pekka Enberg
> Cc: Yinghai Lu
> Cc: Andrew Morton
> Cc: Toshi Kani
> Cc: Tang Chen
> Cc: Wen Congyang
> Cc: Vivek Goyal
> Cc: kosaki.motoh...@gmail.com
> Cc: dyo...@redhat.com
> Cc: Toshi Ka
On Sun, Jan 12, 2014 at 6:46 PM, Prarit Bhargava wrote:
>
>
> On 01/11/2014 11:35 AM, 7egg...@gmx.de wrote:
>>
>>
>> On Fri, 10 Jan 2014, Prarit Bhargava wrote:
>>
>>> kdump uses memmap=exactmap and mem=X values to configure the memory
>>> mapping for the kdump kernel. If memory is hotadded durin
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 068522d..b99c742 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1389,9 +1389,19 @@ static int try_to_unmap_cluster(unsigned long
> cursor, unsigned int *mapcount,
> BUG_ON(!page || PageAnon(page));
>
> if (locked_vma) {
> -
On Thu, Jan 9, 2014 at 10:00 AM, Vivek Goyal wrote:
> On Thu, Jan 09, 2014 at 12:00:29AM +0100, Rafael J. Wysocki wrote:
>
> [..]
>> > The system then panics and the kdump/kexec kernel boots. During this boot
>> > ACPi is initialized and the kernel (as can be seen above)
>>
>> Which is a bug. Yo
On Mon, Jan 6, 2014 at 11:47 AM, Motohiro Kosaki
wrote:
>
>
>> -Original Message-
>> From: linus...@gmail.com [mailto:linus...@gmail.com] On Behalf Of Linus
>> Torvalds
>> Sent: Friday, January 03, 2014 7:18 PM
>> To: Vlastimil Babka
>> Cc: Sash
> -Original Message-
> From: linus...@gmail.com [mailto:linus...@gmail.com] On Behalf Of Linus
> Torvalds
> Sent: Friday, January 03, 2014 7:18 PM
> To: Vlastimil Babka
> Cc: Sasha Levin; Andrew Morton; Wanpeng Li; Michel Lespinasse; Bob Liu;
> Nick Piggin; Motohi
nsigned int filter)
> printk("%lu pages in pagetable cache\n",
> quicklist_total_size());
> #endif
> +#ifdef CONFIG_MEMORY_FAILURE
> + printk("%lu pages hwpoisoned\n", atomic_long_read(&num_poisoned_pages));
> +#endif
> }
Looks ok.
A
dead, there can't be
> any reader left anyway.
>
> Then again this caching seem to complicate the code for
> corner cases that are probably not worth it. So lets get
> rid of it.
>
> Also remove the sample snapshot on dying process timer
> that is now useless, as sugge
> @@ -1090,13 +1063,8 @@ void posix_cpu_timer_schedule(struct k_itimer *timer)
> timer->it.cpu.expires = 0;
> goto out_unlock;
> } else if (unlikely(p->exit_state) && thread_group_empty(p)) {
> - /*
> -
t not that this
> will add a subtle change. CLONE_THREAD doesn't require CLONE_FS, so
> copy_fs() can fail even if the caller doesn't share ->fs with the execing
> thread. And we still need fs->lock to set signal->in_exec, which looks
> a bit strange.
Oops. Yes, this is
(11/22/2013 3:33 PM), Oleg Nesterov wrote:
> On 11/22, KOSAKI Motohiro wrote:
>>
>> (11/22/2013 12:54 PM), Oleg Nesterov wrote:
>>> We can kill either task->did_exec or PF_FORKNOEXEC, they are
>>> mutually exclusive. The patch kills ->did_exec because
(11/22/2013 3:24 PM), Oleg Nesterov wrote:
> On 11/22, KOSAKI Motohiro wrote:
>>
>> (11/22/2013 12:54 PM), Oleg Nesterov wrote:
>>> next_thread() should be avoided, change check_unsafe_exec()
>>> to use while_each_thread(). This also saves 32 bytes.
>>
>&
(11/22/2013 12:54 PM), Oleg Nesterov wrote:
> Both the success and failure paths clean up bprm->file, so we can move this
> code into free_bprm() to simplify and clean up this logic.
>
> Signed-off-by: Oleg Nesterov
Acked-by: KOSAKI Motohiro
(11/22/2013 12:54 PM), Oleg Nesterov wrote:
> fs_struct->in_exec == T means that this ->fs is used by a single
> process (thread group), and one of the threads does do_execve().
>
> To avoid the mt-exec races this code has the following complications:
>
> 1. check_unsafe_exec() returns -EBUS
(11/22/2013 12:54 PM), Oleg Nesterov wrote:
> next_thread() should be avoided, change check_unsafe_exec()
> to use while_each_thread(). This also saves 32 bytes.
Just curious.
Why should it be avoided? Just for cleaner code? Or is there a
serious issue?
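For reference, the classic while_each_thread() iteration idiom from kernels of
that era (a sketch, not the actual check_unsafe_exec() change; must run under
RCU or tasklist_lock, and count_group_threads is a made-up name):

#include <linux/sched.h>

static int count_group_threads(struct task_struct *p)
{
	struct task_struct *t = p;
	int n = 0;

	do {
		n++;
	} while_each_thread(p, t);

	return n;
}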
(11/22/2013 12:54 PM), Oleg Nesterov wrote:
> We can kill either task->did_exec or PF_FORKNOEXEC, they are
> mutually exclusive. The patch kills ->did_exec because it has
> a single user.
It's ok.
but,
> - * Auch. Had to add the 'did_exec' flag to conform completely to POSIX.
> - * LBT 04.03.94
>
On Tue, Nov 19, 2013 at 4:49 PM, Andi Kleen wrote:
> Michal Hocko writes:
>>
>> Another option would be to use sysctl values for the top cpuset as a
>> default. But then why not just do it manually without sysctl?
>
> I want to provide an alternative to having to use cpusets to use this,
> that i
ormance increase his test case.
>
> Reported-by: Larry Woodman
> Suggested-by: Paul Turner
> Signed-off-by: Peter Zijlstra
> Cc: KOSAKI Motohiro
> Cc: Linus Torvalds
> Cc: Andrew Morton
> Link:
> http://lkml.kernel.org/r/2013172925.gg26...@twins.programming
>> I'm slightly surprised this cache makes a 15% hit. Which applications
>> get a benefit? You listed a lot of applications, but I'm not sure
>> which depend heavily on the largest vma.
>
> Well I chose the largest vma because it gives us a greater chance of
> being already cached when we do the look
(11/1/13 4:17 PM), Davidlohr Bueso wrote:
While caching the last used vma already does a nice job avoiding
having to iterate the rbtree in find_vma, we can improve. After
studying the hit rate on a load of workloads and environments,
it was seen that it was around 45-50% - constant for a standard
(11/1/13 3:54 AM), Yuanhan Liu wrote:
> Patch 1 turns locking the anon_vma's root into locking the anon_vma itself, making it
> a per-anon_vma lock, which would reduce contention.
>
> At the same time, the lock range becomes quite small, which is basically
> a call to anon_vma_interval_tree_insert(). Patch 2
From: KOSAKI Motohiro
When __rmqueue_fallback() doesn't find a free block of the required size,
it splits a larger page and puts the rest of the page back on the
free list.
But it has one serious mistake. When putting it back, __rmqueue_fallback()
always uses start_migratetype if ty
Nit: I would like to add the following hunk. This is just a nit because moving
a reserve pageblock is extremely rare.
if (block_migratetype == MIGRATE_RESERVE) {
+ found++;
set_pageblock_migratetype(page, MIGRATE_MOVABLE);
(10/31/13 12:24 AM), kosaki.motoh...@gmail.com wrote:
> From: KOSAKI Motohiro
>
> When __rmqueue_fallback() doesn't find a free block of the required size,
> it splits a larger page and puts the rest of the page back on the
> free list.
>
> But it has
(10/31/13 12:35 AM), Andrew Morton wrote:
On Thu, 31 Oct 2013 00:24:49 -0400 kosaki.motoh...@gmail.com wrote:
When __rmqueue_fallback() doesn't find a free block of the required size,
it splits a larger page and puts the rest of the page back on the
free list.
But it has one serious
't used such information in my long oom debugging history.
Acked-by: KOSAKI Motohiro
From: KOSAKI Motohiro
When __rmqueue_fallback() doesn't find a free block of the required size,
it splits a larger page and puts the rest of the page back on the
free list.
But it has one serious mistake. When putting it back, __rmqueue_fallback()
always uses start_migratetype if ty
@@ -3926,11 +3929,11 @@ static void setup_zone_migrate_reserve(struct zone
*zone)
/*
* Reserve blocks are generally in place to help high-order atomic
* allocations that are short-lived. A min_free_kbytes value that
-* would result in more than 2 reserve blocks f
(10/30/13 11:19 AM), Mel Gorman wrote:
On Wed, Oct 23, 2013 at 05:01:32PM -0400, kosaki.motoh...@gmail.com wrote:
From: KOSAKI Motohiro
Yasuaki Ishimatsu reported that memory hot-add spent more than 5 _hours_
on a 9TB memory machine, and we found out that setup_zone_migrate_reserve
spent >90% of that time.
> The concern is whether the likely/unlikely usage is proper in this piece of code.
> If we don't use memory isolation, the code path is used only for
> MIGRATE_RESERVE, which is a very rare allocation in a normal workload.
>
> Even in a memory isolation environment, I'm not sure how much
> CMA/HOTPLUG is used compar
From: KOSAKI Motohiro
Currently, set_pageblock_migratetype screws up MIGRATE_CMA and
MIGRATE_ISOLATE if page_group_by_mobility_disabled is true. It
rewrites the argument to MIGRATE_UNMOVABLE and we lose these attributes.
The problem was introduced by commit 49255c619f (page allocator: move
check for
From: KOSAKI Motohiro
In general, every tracepoint should have zero overhead when it is disabled.
However, trace_mm_page_alloc_extfrag() is one exception. It evaluates
"new_type == start_migratetype" even if the tracepoint is disabled.
However, the check can be moved into the tracepoint's
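For illustration, one way the comparison can live inside the tracepoint so a
disabled tracepoint costs nothing (a made-up TRACE_EVENT named my_extfrag, not
the actual mm_page_alloc_extfrag definition):

TRACE_EVENT(my_extfrag,
	TP_PROTO(int new_type, int start_type),
	TP_ARGS(new_type, start_type),

	TP_STRUCT__entry(
		__field(int, change_ownership)
	),

	/* Only runs when the tracepoint is enabled; the static key makes the
	 * disabled case a NOP, so the comparison is free then. */
	TP_fast_assign(
		__entry->change_ownership = (new_type == start_type);
	),

	TP_printk("change_ownership=%d", __entry->change_ownership)
);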
From: KOSAKI Motohiro
Yasuaki Ishimatsu reported that memory hot-add spent more than 5 _hours_
on a 9TB memory machine, and we found out that setup_zone_migrate_reserve
spent >90% of that time.
The problem is that setup_zone_migrate_reserve scans all pageblocks
unconditionally, but only the necessary number of reser
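As a plain-C sketch of the fix being described (illustrative only, not the real
setup_zone_migrate_reserve() code; setup_reserve_sketch and its arguments are
made up):

/* Walk the zone's pageblocks, but stop as soon as the required number of
 * reserve blocks has been seen, instead of scanning every block. */
static void setup_reserve_sketch(const int *block_type, long nr_blocks,
				 long required)
{
	long found = 0, i;

	for (i = 0; i < nr_blocks; i++) {
		if (block_type[i] == 1 /* stands in for MIGRATE_RESERVE */) {
			if (++found >= required)
				return;		/* enough reserves: stop early */
			continue;
		}
		/* ... otherwise decide whether to convert this block ... */
	}
}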
On 10/18/2013 6:39 PM, John Stultz wrote:
> On 10/17/2013 06:12 PM, KOSAKI Motohiro wrote:
>> (10/17/13 1:05 PM), John Stultz wrote:
>>> On 10/14/2013 02:33 PM, kosaki.motoh...@gmail.com wrote:
>>>> From: KOSAKI Motohiro
>>>>
>>>> Fedora Ruby
(10/17/13 1:05 PM), John Stultz wrote:
On 10/14/2013 02:33 PM, kosaki.motoh...@gmail.com wrote:
From: KOSAKI Motohiro
The Fedora Ruby maintainer reported that the latest Ruby doesn't work on Fedora Rawhide
on ARM (http://bugs.ruby-lang.org/issues/9008)
because commit 1c6b39ad3f (alarmtimers: R
From: KOSAKI Motohiro
The Fedora Ruby maintainer reported that the latest Ruby doesn't work on Fedora Rawhide
on ARM (http://bugs.ruby-lang.org/issues/9008)
because commit 1c6b39ad3f (alarmtimers: Return -ENOTSUPP if no
RTC device is present) was introduced to return ENOTSUPP when
clock_get{time,res}
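A minimal userspace check of the behaviour being reported (illustrative; the
fallback #define uses the kernel ABI value for CLOCK_BOOTTIME_ALARM):

#include <stdio.h>
#include <time.h>

#ifndef CLOCK_BOOTTIME_ALARM
#define CLOCK_BOOTTIME_ALARM 9
#endif

int main(void)
{
	struct timespec res;

	/* On a kernel without an RTC, this started failing after the commit
	 * above, which is what tripped up Ruby's clock probing. */
	if (clock_getres(CLOCK_BOOTTIME_ALARM, &res) != 0) {
		perror("clock_getres(CLOCK_BOOTTIME_ALARM)");
		return 1;
	}
	printf("resolution: %ld ns\n", res.tv_nsec);
	return 0;
}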
(10/7/13 11:07 PM), Minchan Kim wrote:
Hi KOSAKI,
On Mon, Oct 07, 2013 at 10:51:18PM -0400, KOSAKI Motohiro wrote:
Maybe, int madvise5(addr, length, MADV_DONTNEED|MADV_LAZY|MADV_SIGBUS,
&purged, &ret);
Another reason it is hard is that madvise(2) is tightly coupled
w
Maybe, int madvise5(addr, length, MADV_DONTNEED|MADV_LAZY|MADV_SIGBUS,
&purged, &ret);
Another reason it is hard is that madvise(2) is tightly coupled
with vma split/merge. It needs mmap_sem's write-side lock, which hurts
anon-vrange test performance heavily, and userland mig
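For comparison, the existing three-argument madvise(2) that such a madvise5()
would extend (MADV_LAZY, MADV_SIGBUS and madvise5 itself are proposals in this
thread, not real interfaces; discard_range is a made-up helper):

#include <sys/mman.h>

/* Discard the contents of [addr, addr + len) with today's madvise(2). */
static int discard_range(void *addr, size_t len)
{
	return madvise(addr, len, MADV_DONTNEED);
}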
(10/7/13 4:55 PM), Jan Kara wrote:
On Thu 03-10-13 18:40:06, KOSAKI Motohiro wrote:
(10/2/13 3:36 PM), Jan Kara wrote:
On Wed 02-10-13 12:32:33, KOSAKI Motohiro wrote:
(10/2/13 10:27 AM), Jan Kara wrote:
Signed-off-by: Jan Kara
---
mm/process_vm_access.c | 8 ++--
1 file changed, 2
(10/2/13 3:36 PM), Jan Kara wrote:
On Wed 02-10-13 12:32:33, KOSAKI Motohiro wrote:
(10/2/13 10:27 AM), Jan Kara wrote:
Signed-off-by: Jan Kara
---
mm/process_vm_access.c | 8 ++--
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/mm/process_vm_access.c b/mm
(10/2/13 10:27 AM), Jan Kara wrote:
> Signed-off-by: Jan Kara
> ---
> mm/process_vm_access.c | 8 ++--
> 1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
> index fd26d0433509..c1bc47d8ed90 100644
> --- a/mm/process_vm_access.c
(10/2/13 10:27 AM), Jan Kara wrote:
> Provide a wrapper for get_user_pages() which takes care of acquiring and
> releasing mmap_sem. Using this function reduces the number of places in
> which we deal with mmap_sem.
>
> Signed-off-by: Jan Kara
> ---
> include/linux/mm.h | 14 ++
> 1 fi
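A sketch of such a wrapper, assuming the eight-argument get_user_pages()
signature of that era (illustrative only, not Jan Kara's actual hunk;
gup_locked_sketch is a made-up name):

#include <linux/mm.h>
#include <linux/sched.h>

static long gup_locked_sketch(struct task_struct *tsk, struct mm_struct *mm,
			      unsigned long start, unsigned long nr_pages,
			      int write, int force, struct page **pages)
{
	long ret;

	/* Hide the mmap_sem acquire/release from every caller. */
	down_read(&mm->mmap_sem);
	ret = get_user_pages(tsk, mm, start, nr_pages, write, force,
			     pages, NULL);
	up_read(&mm->mmap_sem);

	return ret;
}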
(10/1/13 7:31 PM), David Rientjes wrote:
for_each_online_cpu() needs the protection of {get,put}_online_cpus() so
cpu_online_mask doesn't change during the iteration.
Signed-off-by: David Rientjes
Acked-by: KOSAKI Motohiro
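For reference, a sketch of the pattern being acked (visit_online_cpus and
do_cpu are made-up names):

#include <linux/cpu.h>

/* Hold the CPU hotplug read lock so cpu_online_mask cannot change
 * while we iterate over it. */
static void visit_online_cpus(void (*do_cpu)(int))
{
	int cpu;

	get_online_cpus();
	for_each_online_cpu(cpu)
		do_cpu(cpu);
	put_online_cpus();
}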
> +static void seq_print_vma_name(struct seq_file *m, struct vm_area_struct
> *vma)
> +{
> + const char __user *name = vma_get_anon_name(vma);
> + struct mm_struct *mm = vma->vm_mm;
> +
> + unsigned long page_start_vaddr;
> + unsigned long page_offset;
> + unsigned long num_pag
On Mon, Sep 16, 2013 at 8:18 PM, Wanpeng Li wrote:
> Hi KOSAKI,
> On Mon, Sep 16, 2013 at 05:23:32PM -0400, KOSAKI Motohiro wrote:
>>On 9/14/2013 7:45 PM, Wanpeng Li wrote:
>>> Changelog:
>>> *v2 -> v3: revert commit d157a558 directly
>>>
>>>
On Mon, Sep 16, 2013 at 7:41 PM, Wanpeng Li wrote:
> Hi KOSAKI,
> On Mon, Sep 16, 2013 at 04:15:29PM -0400, KOSAKI Motohiro wrote:
>>On 9/14/2013 7:45 PM, Wanpeng Li wrote:
>>> Changelog:
>>> *v2 -> v3: revert commit 46c001a2 directly
>>>
>>
On 9/14/2013 7:45 PM, Wanpeng Li wrote:
> Changelog:
> *v2 -> v3: revert commit d157a558 directly
>
> The VM_UNINITIALIZED/VM_UNLIST flag introduced by commit f5252e00(mm: avoid
> null pointer access in vm_struct via /proc/vmallocinfo) is used to avoid
> accessing the pages field with unallocated
On 9/14/2013 7:45 PM, Wanpeng Li wrote:
> Changelog:
> *v2 -> v3: revert commit 46c001a2 directly
>
> Don't warn twice in __vmalloc_area_node and __vmalloc_node_range if the
> __vmalloc_area_node allocation fails. This patch reverts commit 46c001a2
> (mm/vmalloc.c: emit the failure message before
On 9/14/2013 7:45 PM, Wanpeng Li wrote:
> Changelog:
> *v1 -> v2: rebase against mmotm tree
>
> The caller address has already been set in set_vmalloc_vm(), there's no need
setup_vmalloc_vm()
> to set it again in __vmalloc_area_node.
>
> Reviewed-by:
represents a vmap_area that is being torn
> down in the race window mentioned above. This patch fixes it by not dumping any
> information for the !VM_VM_AREA case and also removes the (VM_LAZY_FREE |
> VM_LAZY_FREEING)
> check since they are not possible in the !VM_VM_AREA case.
>
> Suggested-by: Joonso
(9/16/13 8:53 AM), Jianguo Wu wrote:
Use more appropriate NUMA_NO_NODE instead of -1
Signed-off-by: Jianguo Wu
---
mm/mempolicy.c | 10 +-
1 files changed, 5 insertions(+), 5 deletions(-)
I think this patch doesn't make any functional change, right?
Acked-by: KOSAKI Mot
On 9/12/2013 12:45 AM, Suzuki K. Poulose wrote:
> On 09/12/2013 12:57 AM, KOSAKI Motohiro wrote:
>> (9/3/13 4:39 AM), Janani Venkataraman wrote:
>>> Hello,
>>>
>>> We are working on an infrastructure to create a system core file of a
>>> specific
>
- flags & VM_READ ? 'r' : '-',
- flags & VM_WRITE ? 'w' : '-',
- flags & VM_EXEC ? 'x' : '-',
- flags & VM_MAYSHARE ? flags & VM_SHARED ? 'S' : 's' : 'p',
- pgoff,
-
(9/3/13 4:39 AM), Janani Venkataraman wrote:
Hello,
We are working on an infrastructure to create a system core file of a specific
process at run-time, non-disruptively. It can also be extended to a case where
a process is able to take a self-core dump.
gcore, an existing utility, creates a core