On 2/25/07, Pavel Machek <[EMAIL PROTECTED]> wrote:
Hi!
> Currently try_to_freeze_tasks() has to wait until all of the vforked processes
> exit and for this reason every user can make it fail. To fix this problem
> we can introduce the additional process flag PF_FREEZER_SKIP to be used by
task
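The idea, as a minimal sketch: the parent marks itself with PF_FREEZER_SKIP while it sleeps waiting for the vforked child, so the freezer does not count it. The helper names and placement follow my recollection of the freezer_do_not_count()/freezer_count() helpers added around this series; treat it as an illustration, not the posted patch.

        /* parent about to wait for the vforked child */
        current->flags |= PF_FREEZER_SKIP;      /* freezer_do_not_count() */
        wait_for_completion(&vfork);            /* sleep until the child execs or exits */
        current->flags &= ~PF_FREEZER_SKIP;     /* freezer_count() ... */
        try_to_freeze();                        /* ... which also freezes if requested */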
On 2/25/07, Rafael J. Wysocki <[EMAIL PROTECTED]> wrote:
On Sunday, 25 February 2007 11:45, Rafael J. Wysocki wrote:
> Hi,
>
> =
--- linux-2.6.20-mm2.orig/kernel/power/process.c 2007-02-22
23:44:04.0 +0100
+++ linux-2.6.20-mm2/kernel/power/process.c 2007-02-23 22:33:11
On 2/25/07, Rafael J. Wysocki <[EMAIL PROTECTED]> wrote:
On Sunday, 25 February 2007 15:33, Aneesh Kumar wrote:
> On 2/25/07, Rafael J. Wysocki <[EMAIL PROTECTED]> wrote:
> > On Sunday, 25 February 2007 11:45, Rafael J. Wysocki wrote:
> > > Hi,
> > >
On 2/25/07, Aneesh Kumar <[EMAIL PROTECTED]> wrote:
On 2/25/07, Rafael J. Wysocki <[EMAIL PROTECTED]> wrote:
> On Sunday, 25 February 2007 15:33, Aneesh Kumar wrote:
> > On 2/25/07, Rafael J. Wysocki <[EMAIL PROTECTED]> wrote:
> > > On Sunday, 25 February
On 2/26/07, Rafael J. Wysocki <[EMAIL PROTECTED]> wrote:
NOTE: Alternatively, we can just drop flush_signals() from there, but I'm not
sure that's the right thing to do.
---
From: Rafael J. Wysocki <[EMAIL PROTECTED]>
Since call_usermodehelper() calls flush_signals(current), the task that
e
* Only the _current_ task can read/write to tsk->flags, but other
Index: linux-2.6.20-mm2/include/linux/freezer.h
===
--- linux-2.6.20-mm2.orig/include/linux/freezer.h 2007-02-26
08:40:22.0 +0100
+++ linux-2.6.20-mm2/
On 2/28/07, Andrew Morton <[EMAIL PROTECTED]> wrote:
> On Fri, 23 Feb 2007 21:10:36 +0530 "Aneesh Kumar K.V" <[EMAIL PROTECTED]>
wrote:
> From: Aneesh Kumar K.V <[EMAIL PROTECTED]>
>
> Signed-off-by: Aneesh Kumar K.V <[EMAIL PROTECTED]>
> ---
>
On 12/11/06, Mauricio Lin <[EMAIL PROTECTED]> wrote:
Hi Aneesh,
I have posted a patch for that as well. You can check it at
http://lkml.org/lkml/2006/11/30/315.
The changes I posted were against the latest kernel and also handled some
more failure cases by properly returning errors. So I picked my d
This is about commit 5d6f647fc6bb57377c9f417c4752e43189f56bb1.
Why is this change needed? As far as I understand from
the commit message, distros used to set sysrq_enabled = 0.
But then, if we need sysrq support, we can set it using sysctl;
why do we need a kernel command line option?
-aneesh
-
T
Gerald Schaefer writes:
> The thp page table pre-allocation code currently assumes that pgtable_t
> is of type "struct page *". This may not be true for all architectures,
> so this patch removes that assumption by replacing the functions
> prepare_pmd_huge_pte() and get_pmd_huge_pte() with two n
Namjae Jeon writes:
> From: Namjae Jeon
>
> This patch is based on suggestion by Wu Fengguang:
> https://lkml.org/lkml/2011/8/19/19
>
> The kernel has a mechanism to do writeback as per the dirty_ratio and dirty_background
> ratio. It also maintains a per-task dirty rate limit to keep the balance of
> dirty pag
On Sun, Feb 03, 2008 at 01:39:02PM +0100, Geert Uytterhoeven wrote:
> On Sun, 3 Feb 2008, Heiko Carstens wrote:
> > On Fri, Feb 01, 2008 at 10:04:04PM +0100, Bastian Blank wrote:
> > > On Fri, Feb 01, 2008 at 12:22:57PM -0800, Andrew Morton wrote:
> > > > On Fri, 1 Feb 2008 21:02:08 +0100
> > > > B
On Fri, Feb 01, 2008 at 09:02:40PM +0100, Bastian Blank wrote:
> Fix ext4 bitops.
>
> Signed-off-by: Bastian Blank <[EMAIL PROTECTED]>
>
> diff --git a/include/asm-powerpc/bitops.h b/include/asm-powerpc/bitops.h
> index 220d9a7..d0980df 100644
> --- a/include/asm-powerpc/bitops.h
> +++ b/include/
On Mon, Feb 04, 2008 at 10:24:36AM +0100, Heiko Carstens wrote:
> > > > > | fs/ext4/mballoc.c: In function 'ext4_mb_generate_buddy':
> > > > > | fs/ext4/mballoc.c:954: error: implicit declaration of function
> > > > > 'generic_find_next_le_bit'
> > > > >
> > > > > The s390 specific bitops uses pa
David Miller writes:
> We've split up the PTE tables so that they take up half a page instead
> of a full page. This is in order to facilitate transparent huge page
> support, which works much better if our PMDs cover 4MB instead of 8MB.
>
> What we do is have a one-behind cache for PTE table al
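For readers unfamiliar with the pattern, a generic sketch of a one-behind cache handing out half-page tables follows; the names are invented and locking is omitted, so this illustrates the technique rather than the sparc code in the patch.

        static unsigned long pte_half_cache;    /* unused second half of the last page */

        static unsigned long pte_table_alloc_half(void)
        {
                unsigned long p = pte_half_cache;

                if (p) {                                /* hand out the cached half */
                        pte_half_cache = 0;
                        return p;
                }
                p = __get_free_page(GFP_KERNEL | __GFP_ZERO);
                if (!p)
                        return 0;
                pte_half_cache = p + PAGE_SIZE / 2;     /* stash the other half */
                return p;
        }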
n for the patch, it should be reverted
> to preserve hugetlb page sharing locking.
>
I guess we want to take this patch as a revert rather than
dropping the one in -mm. That would help in documenting the i_mmap_mutex
locking details in the commit message. Or maybe we should add the necessary
c
Joonsoo Kim writes:
> On Thu, Aug 22, 2013 at 12:38:12PM +0530, Aneesh Kumar K.V wrote:
>> Joonsoo Kim writes:
>>
>> > Hello, Aneesh.
>> >
>> > First of all, thank you for review!
>> >
>> > On Wed, Aug 21, 2013 at 02:58:2
> - return NULL;
This hunk would be much easier to follow if you were changing:
        if (!vma_has_reserves(vma) &&
            h->free_huge_pages - h->resv_huge_pages == 0)
                goto err;
i.e., !vma_has_reserves(vma) == !use_reserve.
So maybe a patch re
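Spelled out, the suggestion above amounts to something like the following (use_reserve is the local variable the review refers to; this is an illustration, not the posted patch):

        if (!use_reserve &&
            h->free_huge_pages - h->resv_huge_pages == 0)
                goto err;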
Joonsoo Kim writes:
> In order to validate that this failure is reasonable, we need to know
> whether the allocation request is for a reserved one or not in the caller.
> So move vma_needs_reservation() up to the caller of alloc_huge_page().
> There is no functional change in this patch and followin
Joonsoo Kim writes:
> Now, alloc_huge_page() only returns -ENOSPC if it failed.
> So we don't need to worry about other return values.
>
> Signed-off-by: Joonsoo Kim
Reviewed-by: Aneesh Kumar K.V
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index bc666cf..24de2ca 100644
the comment I had with the previous patch
Reviewed-by: Aneesh Kumar K.V
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 24de2ca..2372f75 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2499,7 +2499,7 @@ static int hugetlb_cow(struct mm_struct *mm, struct
> vm_area_struct
"Aneesh Kumar K.V" writes:
> Joonsoo Kim writes:
>
>> In order to validate that this failure is reasonable, we need to know
>> whether the allocation request is for a reserved one or not in the caller.
>> So move vma_needs_reservation() up to the caller of
Joonsoo Kim writes:
> If we fail with an allocated hugepage, we need some effort to recover
> properly. So it is better to avoid allocating a hugepage where possible.
> Move anon_vma_prepare(), which can fail in an OOM situation, up.
>
> Signed-off-by: Joonsoo Kim
Reviewed-b
Joonsoo Kim writes:
> The current code includes 'Caller expects lock to be held' in every error path.
> We can clean it up by doing the error handling in one place.
>
> Signed-off-by: Joonsoo Kim
Reviewed-by: Aneesh Kumar K.V
>
> diff --git a/mm/hugetlb.c b/mm/hugetl
Eric Van Hensbergen writes:
> The following changes since commit 2315cb14010c4cb0eb7c1d19fcf90475e4688207:
>
> 9p: Add rest of 9p files to MAINTAINERS entry (2013-05-28 13:47:58 -0500)
>
> are available in the git repository at:
>
>
> git://git.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs
Naoya Horiguchi writes:
> Hi,
>
> Kirill posted split_ptl patchset for thp today, so in this version
> I post only hugetlbfs part. I added Kconfig variables in following
> Kirill's patches (although without CONFIG_SPLIT_*_PTLOCK_CPUS.)
>
> This patch changes many lines, but all are in hugetlbfs s
Naoya Horiguchi writes:
> Currently all of the page table handling in the hugetlbfs code is done under
> mm->page_table_lock. So when a process has many threads that heavily
> access memory, lock contention happens and impacts the performance.
>
> This patch makes hugepage support split page
Naoya Horiguchi writes:
> Hi Aneesh,
>
> On Wed, Sep 04, 2013 at 12:43:19PM +0530, Aneesh Kumar K.V wrote:
>> Naoya Horiguchi writes:
>>
>> > Currently all of page table handling by hugetlbfs code are done under
>> > mm->page_table_lock. So when a
Naoya Horiguchi writes:
> Currently all of the page table handling in the hugetlbfs code is done under
> mm->page_table_lock. So when a process has many threads that heavily
> access memory, lock contention happens and impacts the performance.
>
> This patch makes hugepage support split page
"Kirill A. Shutemov" writes:
> Naoya Horiguchi wrote:
>> Thp related code also uses per process mm->page_table_lock now.
>> So making it fine-grained can provide better performance.
>>
>> This patch makes thp support split page table lock by using page->ptl
>> of the pages storing "pmd_trans_hug
Naoya Horiguchi writes:
> On Thu, Dec 06, 2012 at 02:36:52PM -0800, Andrew Morton wrote:
>> On Wed, 5 Dec 2012 16:47:36 -0500
>> Naoya Horiguchi wrote:
>>
>> > This patch fixes the warning from __list_del_entry() which is triggered
>> > when a process tries to do free_huge_page() for a hwpoiso
Naoya Horiguchi writes:
> On Fri, Dec 07, 2012 at 11:06:41AM +0530, Aneesh Kumar K.V wrote:
> ...
>> > From: Naoya Horiguchi
>> > Date: Thu, 6 Dec 2012 20:54:30 -0500
>> > Subject: [PATCH v2] HWPOISON, hugetlbfs: fix warning on freeing hwpoisoned
>> >
Naoya Horiguchi writes:
> Currently migrate_huge_page() takes a pointer to a hugepage to be
> migrated as an argument, instead of taking a pointer to the list of
> hugepages to be migrated. This behavior was introduced in commit
> 189ebff28 ("hugetlb: simplify migrate_huge_page()"), and was OK
>
Naoya Horiguchi writes:
> +/* Returns true for head pages of in-use hugepages, otherwise returns false.
> */
> +bool is_hugepage_movable(struct page *hpage)
> +{
> + struct page *page;
> + struct hstate *h;
> + bool ret = false;
> +
> + VM_BUG_ON(!PageHuge(hpage));
> + /*
> +
Joonsoo Kim writes:
> Currently, we use a page with mapped count 1 in page cache for cow
> optimization. If we find this condition, we don't allocate a new
> page and copy contents. Instead, we map this page directly.
> This may introduce a problem that writing to a private mapping overwrites the
> hug
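For context, the check this is driving at in hugetlb_cow() looks roughly like the following: reuse the old page in place only when it is anonymous and mapped exactly once. This is a sketch from memory of the eventual fix, not a quote of the patch under review.

        if (page_mapcount(old_page) == 1 && PageAnon(old_page)) {
                /* sole anonymous user: just make the existing mapping writable */
                set_huge_ptep_writable(vma, address, ptep);
                return 0;
        }
        /* otherwise allocate a new huge page and copy into it */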
Joonsoo Kim writes:
> We don't need to proceed with the processing if we don't have any usable
> free huge page. So move this code up.
I guess you can also mention that since we are holding hugetlb_lock
hstate values can't change.
Also.
>
> Signed-off-by: Joonsoo Kim
>
> diff --git a/mm/hugetlb.
Joonsoo Kim writes:
> The name of the mutex written in the comment is wrong.
> Fix it.
>
> Signed-off-by: Joonsoo Kim
Reviewed-by: Aneesh Kumar K.V
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index d87f70b..d21a33a 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
Joonsoo Kim writes:
> First 5 patches are almost trivial clean-up patches.
>
> The others are for fixing three bugs.
> Perhaps these problems are minor, because this code has been used
> for a long time, and there have been no bug reports for these problems.
>
> These patches are based on v3.10.0 and
> p
Joonsoo Kim writes:
> The current node iteration code has a minor problem: it does one more
> node rotation if the allocation does not succeed. For example,
> if we start to allocate at node 0, we stop iterating at node 0.
> Then we start to allocate at node 1 for the next allocation.
Can you explain the
Joonsoo Kim writes:
> If the list is empty, list_for_each_entry_safe() doesn't do anything.
> So this check is redundant. Remove it.
>
> Signed-off-by: Joonsoo Kim
Reviewed-by: Aneesh Kumar K.V
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index a838e6b..d4a1695 1
ess.
> You'll find the problem mentioned above.
>
> The solution is simple: we should check VM_NORESERVE in vma_has_reserves().
> This prevents using a pre-allocated huge page if the free count is under
> the reserve count.
>
> Signed-off-by: Joonsoo Kim
Reviewed-by: Aneesh Kumar K.V
mbed it into
> dequeue_huge_page_vma() directly. This patch implement it.
>
> Signed-off-by: Joonsoo Kim
Reviewed-by: Aneesh Kumar K.V
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index f6a7a4e..ed2d0af 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -434,25 +434
Joonsoo Kim writes:
> If we map the region with MAP_NORESERVE and MAP_SHARED,
> we can skip the reserve counting check, and eventually we are not guaranteed
> to be able to allocate a huge page at fault time.
> With the following example code, you can easily reproduce this situation.
>
> Assume 2MB, nr_hugepages = 100
> flag = MAP_SHARED | MAP_NORESERVE;
> q = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, fd, 0);
> if (q == MAP_FAILED) {
>         fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
> }
> q[0] = 'c';
>
> This
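A self-contained version of the reproducer quoted above, under the same assumptions (a hugetlbfs file descriptor fd and a size that is a multiple of the 2MB huge page size):

        #include <errno.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>

        /* Touch a MAP_SHARED | MAP_NORESERVE hugetlbfs mapping: nothing was
         * reserved at mmap() time, so the fault can fail (SIGBUS) if the
         * huge page pool is exhausted. */
        static int touch_noreserve(int fd, size_t size)
        {
                char *q = mmap(NULL, size, PROT_READ | PROT_WRITE,
                               MAP_SHARED | MAP_NORESERVE, fd, 0);

                if (q == MAP_FAILED) {
                        fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
                        return -1;
                }
                q[0] = 'c';
                return munmap(q, size);
        }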
Joonsoo Kim writes:
> On Mon, Jul 15, 2013 at 07:31:33PM +0530, Aneesh Kumar K.V wrote:
>> Joonsoo Kim writes:
>>
>> > We don't need to proceed with the processing if we don't have any usable
>> > free huge page. So move this code up.
>>
>>
Joonsoo Kim writes:
> On Mon, Jul 15, 2013 at 08:41:12PM +0530, Aneesh Kumar K.V wrote:
>> Joonsoo Kim writes:
>>
>> > If we map the region with MAP_NORESERVE and MAP_SHARED,
>> > we can skip the reserve counting check and eventually we are not guaranteed
>
Cyrill Gorcunov writes:
> On Mon, Jul 29, 2013 at 06:08:55PM +0400, Pavel Emelyanov wrote:
>> >
>> > - if (!pte_none(*pte))
>> > + ptfile = pgoff_to_pte(pgoff);
>> > +
>> > + if (!pte_none(*pte)) {
>> > +#ifdef CONFIG_MEM_SOFT_DIRTY
>> > + if (pte_present(*pte) &&
>> > +
ve pool when soft offlining a huge
page. Check that we have free pages outside the reserve pool before we
dequeue the huge page.
Reviewed-by: Aneesh Kumar
>
> Signed-off-by: Joonsoo Kim
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 6782b41..d971233 100644
> --- a/mm/hugetlb.
Joonsoo Kim writes:
> 'reservations' is a rather long name for a variable, and we use 'resv_map'
> to represent 'struct resv_map' elsewhere. To reduce confusion and
> improve readability, change it.
>
> Signed-off-by: Joonsoo Kim
Reviewed-by: Aneesh Kumar
Joonsoo Kim writes:
> Currently, to track a reserved and allocated region, we use two different
> ways for MAP_SHARED and MAP_PRIVATE. For MAP_SHARED, we use
> address_mapping's private_list and, for MAP_PRIVATE, we use a resv_map.
> Now, we are preparing to change a coarse grained lock which pro
Joonsoo Kim writes:
> vma_needs_reservation() can be substituted by vma_has_reserves()
> with a minor change. These functions do almost the same thing,
> so unifying them makes the code easier to maintain.
I found the resulting code confusing and complex. I am sure there is
more than what is explained in the commit
Naoya Horiguchi writes:
> Before enabling each user of page migration to support hugepage,
> this patch enables the list of pages for migration to link not only
> LRU pages, but also hugepages. As a result, putback_movable_pages()
> and migrate_pages() can handle both of LRU pages and hugepages.
Naoya Horiguchi writes:
> Now hugepages are definitely movable. So allocating hugepages from
> ZONE_MOVABLE is natural and we have no reason to keep this parameter.
> In order to allow userspace to prepare for the removal, let's leave
> this sysctl handler as noop for a while.
I guess you still
Naoya Horiguchi writes:
> On Wed, Jul 31, 2013 at 12:02:30AM +0530, Aneesh Kumar K.V wrote:
>> Naoya Horiguchi writes:
>>
>> > Now hugepages are definitely movable. So allocating hugepages from
>> > ZONE_MOVABLE is natural and we have no reason to keep this
Hillf Danton writes:
> On Wed, Jul 31, 2013 at 2:37 PM, Joonsoo Kim wrote:
>> On Wed, Jul 31, 2013 at 02:21:38PM +0800, Hillf Danton wrote:
>>> On Wed, Jul 31, 2013 at 12:41 PM, Joonsoo Kim
>>> wrote:
>>> > On Wed, Jul 31, 2013 at 10:49:24AM +0800, Hillf Danton wrote:
>>> >> On Wed, Jul 31, 20
't cause any behavior differences.
>
> Signed-off-by: Tejun Heo
> Cc: Aneesh Kumar K.V
> Cc: KAMEZAWA Hiroyuki
> Cc: Michal Hocko
> Cc: Johannes Weiner
Reviewed-by: Aneesh Kumar K.V
> ---
> mm/hugetlb_cgroup.c | 22 --
> 1 file changed, 12
Hillf Danton writes:
> On Fri, Aug 2, 2013 at 12:17 AM, Aneesh Kumar K.V
> wrote:
>> Hillf Danton writes:
>>
...
>>>>> >> >> Well, why is it illegal to use reserved page here?
>>>>> >> >
>>>>> >
Joonsoo Kim writes:
> If we alloc a hugepage with avoid_reserve, we don't dequeue a reserved one.
> So we should check the subpool counter when avoid_reserve is set.
> This patch implements it.
Can you explain this better? I.e., if we don't have a reservation in the
area, chg != 0. So why look at avoid_reserve?
Joonsoo Kim writes:
> We don't need to grab the page_table_lock when we try to release a page.
> So defer grabbing the page_table_lock.
>
> Reviewed-by: Naoya Horiguchi
> Signed-off-by: Joonsoo Kim
Reviewed-by: Aneesh Kumar K.V
>
> diff --git a/mm/hugetlb.c b/mm
Joonsoo Kim writes:
> is_vma_resv_set(vma, HPAGE_RESV_OWNER) implies that this mapping is
> private. So we don't need to check whether this mapping is
> shared or not.
>
> This patch is just for clean-up.
>
> Signed-off-by: Joonsoo Kim
Reviewed-by: Aneesh Kumar
Joonsoo Kim writes:
> If we fail with a reserved page, just calling put_page() is not sufficient,
> because put_page() invokes free_huge_page() as the last step, and it doesn't
> know whether a page comes from the reserved pool or not. So it doesn't do
> anything related to the reserve count. This makes res
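One way to convey that information, and roughly what the series ends up doing as far as I recall, is to tag the page when it is taken from the reserve and have free_huge_page() act on the tag; the snippet below is an illustration, not the patch itself.

        /* allocation path: remember this page consumed a reservation */
        SetPagePrivate(page);

        /* free_huge_page(): give the reservation back if it was never used */
        restore_reserve = PagePrivate(page);
        ClearPagePrivate(page);
        if (restore_reserve)
                h->resv_huge_pages++;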
rained lock which protects
> a region structure to a fine-grained lock, and this difference hinders it.
> So, before changing it, unify the region structure handling.
>
> Signed-off-by: Joonsoo Kim
As mentioned earlier, kref_put is confusing because we always have a
reference count == 1; otherwi
ned-off-by: Joonsoo Kim
Reviewed-by: Aneesh Kumar K.V
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 8751e2c..d9cabf6 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -150,8 +150,9 @@ struct file_region {
> long to;
> };
>
> -static long region_
Joonsoo Kim writes:
> There is a race condition if we map the same file in different processes.
> Region tracking is protected by mmap_sem and the hugetlb_instantiation_mutex.
> When we do mmap, we don't grab the hugetlb_instantiation_mutex, but we do
> grab mmap_sem. This doesn't prevent another process from m
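For orientation, the direction the series heads in is to give the resv_map its own lock, so region tracking no longer depends on mmap_sem plus the hugetlb_instantiation_mutex; the field layout below is assumed from memory and is only a sketch.

        struct resv_map {
                struct kref refs;
                spinlock_t lock;                /* protects 'regions' */
                struct list_head regions;
        };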
Joonsoo Kim writes:
> Currently, to track a reserved and allocated region, we use two different
> ways for MAP_SHARED and MAP_PRIVATE. For MAP_SHARED, we use
> address_mapping's private_list and, for MAP_PRIVATE, we use a resv_map.
> Now, we are preparing to change a coarse grained lock which pro
ce to have another function that will return the resv_map
only if we have HPAGE_RESV_OWNER, so that we could use that in
hugetlb_vm_op_open/close? Otherwise
Reviewed-by: Aneesh Kumar K.V
> +
> static struct resv_map *vma_resv_map(struct vm_area_struct *vma)
> {
>
perate functions to return the vma_resv_map for
HPAGE_RESV_OWNER and one for put? That way we could have something like
resv_map_hpage_resv_owner_get()
resv_map_hpage_resv_put()
Reviewed-by: Aneesh Kumar K.V
> -
> static void hugetlb_vm_op_close(struct vm_area_struct *vma)
> {
Joonsoo Kim writes:
> Hello, Aneesh.
>
> First of all, thank you for review!
>
> On Wed, Aug 21, 2013 at 02:58:20PM +0530, Aneesh Kumar K.V wrote:
>> Joonsoo Kim writes:
>>
>> > If we alloc hugepage with avoid_reserve, we don't dequeue reserved one.
eserve pool. This definition
> is exactly the same as vma_has_reserves(), so remove vma_has_reserves().
>
> Signed-off-by: Joonsoo Kim
Reviewed-by: Aneesh Kumar K.V
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index e6c0c77..22ceb04 100644
> --- a/mm/hugetlb.c
> +++ b
Joonsoo Kim writes:
> On Thu, Aug 22, 2013 at 02:14:38PM +0530, Aneesh Kumar K.V wrote:
>> Joonsoo Kim writes:
>>
>> > vma_has_reserves() can be substituted by using return value of
>> > vma_needs_reservation(). If chg returned by vma_needs_reservation()
Naoya Horiguchi writes:
>>
>> Considering that we have architectures that won't support migrating
>> explicit hugepages with this patch series, is it ok to use
>> GFP_HIGHUSER_MOVABLE for hugepage allocation ?
>
> Originally this parameter was introduced to make hugepage pool on
> ZONE_MOVABLE
Naoya Horiguchi writes:
> This patch is motivated by the discussion with Aneesh about "extend
> hugepage migration" patchset.
> http://thread.gmane.org/gmane.linux.kernel.mm/103933/focus=104391
> I'll append this to the patchset in the next post, but before that
> I want this patch to be review
Minchan Kim writes:
> Ccing people get_maintainer says.
>
> On Wed, Jul 17, 2013 at 11:32:23AM -0400, Dave Jones wrote:
>> [128095.470960] =
>> [128095.471315] [ INFO: inconsistent lock state ]
>> [128095.471660] 3.11.0-rc1+ #9 Not tainted
>> [128095.472156] --
Hillf Danton writes:
> On Fri, Jul 19, 2013 at 1:42 AM, Aneesh Kumar K.V
> wrote:
>> Minchan Kim writes:
>>> IMHO, it's a false positive because i_mmap_mutex was held by kswapd
>>> while the one in the middle of the fault path can never be in kswapd context.
>&g
Joonsoo Kim writes:
> First 6 patches are almost trivial clean-up patches.
>
> The others are for fixing three bugs.
> Perhaps these problems are minor, because this code has been used
> for a long time, and there have been no bug reports for these problems.
>
> These patches are based on v3.10.0 and
> p
Joonsoo Kim writes:
> At this point we are holding the hugetlb_lock, so hstate values can't
> be changed. If we don't have any usable free huge page at this point,
> we don't need to proceed with the processing. So move this code up.
>
> Signed-off-by: Joonsoo Kim
> I introduce new macros "for_each_node_mask_to_[alloc|free]" and
> fix and clean up the node iteration code for alloc and free.
> This makes the code more understandable.
>
> Signed-off-by: Joonsoo Kim
Reviewed-by: Aneesh Kumar K.V
>
> diff --git a/mm/hugetlb.c b/mm/h
Michal Hocko writes:
> On Mon 22-07-13 17:36:26, Joonsoo Kim wrote:
>> The current node iteration code has a minor problem: it does one more
>> node rotation if the allocation does not succeed. For example,
>> if we start to allocate at node 0, we stop iterating at node 0.
>> Then we start to allocate
Andy Lutomirski writes:
> The change:
>
> commit f4e0c30c191f87851c4a53454abb55ee276f4a7e
> Author: Al Viro
> Date: Tue Jun 11 08:34:36 2013 +0400
>
> allow the temp files created by open() to be linked to
>
> O_TMPFILE | O_CREAT => linkat() with AT_SYMLINK_FOLLOW and
> /proc/self/fd/
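A hedged user-space illustration of that pattern (the directory and link name are placeholders):

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        /* Create an unnamed temporary file in 'dir', then give it a name by
         * linking its /proc/self/fd entry with AT_SYMLINK_FOLLOW. */
        static int make_tmpfile_visible(const char *dir, const char *name)
        {
                char path[64];
                int fd = open(dir, O_TMPFILE | O_RDWR, 0600);

                if (fd < 0)
                        return -1;
                snprintf(path, sizeof(path), "/proc/self/fd/%d", fd);
                return linkat(AT_FDCWD, path, AT_FDCWD, name, AT_SYMLINK_FOLLOW);
        }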
Alex Thorlton writes:
> This patch implements functionality to allow processes to disable the use of
> transparent hugepages through the prctl syscall.
>
> We've determined that some jobs perform significantly better with thp
> disabled,
> and we needed a way to control thp on a per-process basi
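As a usage sketch: the per-process knob that eventually went upstream is prctl(PR_SET_THP_DISABLE); the patch being discussed here may differ in name and semantics, so treat this as an illustration only.

        #include <stdio.h>
        #include <sys/prctl.h>

        static int disable_thp_for_current_process(void)
        {
        #ifdef PR_SET_THP_DISABLE
                /* 1 = disable transparent hugepages for this process */
                return prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0);
        #else
                fprintf(stderr, "PR_SET_THP_DISABLE not available\n");
                return -1;
        #endif
        }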
Naoya Horiguchi writes:
> Now we have extended hugepage migration and it's opened to many users
> of page migration, which is a good reason to consider hugepage as movable.
> So we can go to the direction to remove this parameter. In order to
> allow userspace to prepare for the removal, let's le
Naoya Horiguchi writes:
> Currently hugepage migration works well only for pmd-based hugepages
> (mainly due to lack of testing,) so we had better not enable migration
> of other levels of hugepages until we are ready for it.
>
> Some users of hugepage migration (mbind, move_pages, and migrate_pa
Andy Lutomirski writes:
> On Sun, Aug 11, 2013 at 9:45 AM, Aneesh Kumar K.V
> wrote:
>> Andy Lutomirski writes:
>>
>>> The change:
>>>
>>> commit f4e0c30c191f87851c4a53454abb55ee276f4a7e
>>> Author: Al Viro
>>> Date: Tue Jun 11
"Aneesh Kumar K.V" writes:
> Andy Lutomirski writes:
>
>> On Sun, Aug 11, 2013 at 9:45 AM, Aneesh Kumar K.V
>> wrote:
>>> Andy Lutomirski writes:
>>>
>>>> The change:
>>>>
>>>> commit f4e0c30c191f87851c4a5345
From: "Aneesh Kumar K.V"
If we don't specify a protocol version, default to 9P2000.L. 9P2000.L
has better support for POSIX semantics and is where all the recent development
is happening.
Signed-off-by: Aneesh Kumar K.V
---
net/9p/client.c | 2 +-
1 file changed, 1 insertion
From: "Aneesh Kumar K.V"
Make the default 9p experience better by defaulting to the virtio transport if
present.
These days most users are running 9p in a virtualized setup.
Signed-off-by: Aneesh Kumar K.V
---
net/9p/client.c | 3 +++
1 file changed, 3 insertions(+)
diff --gi
From: "Aneesh Kumar K.V"
For a zero-copy request, the error will be encoded in the user space buffer.
So copy the error code correctly using copy_from_user. Here we use the
extra bytes we allocate for the zero-copy request. If the total error details
are more than P9_ZC_HDR_SZ - 7 bytes, we retu
Michal Hocko writes:
> On Tue 26-03-13 16:59:40, Aneesh Kumar K.V wrote:
>> Naoya Horiguchi writes:
> [...]
>> > diff --git v3.9-rc3.orig/mm/memory-failure.c v3.9-rc3/mm/memory-failure.c
>> > index df0694c..4e01082 100644
>> > --- v3.9-rc3.orig/mm/memor
Michal Hocko writes:
> On Tue 26-03-13 16:59:40, Aneesh Kumar K.V wrote:
>> Naoya Horiguchi writes:
> [...]
>> > diff --git v3.9-rc3.orig/mm/memory-failure.c v3.9-rc3/mm/memory-failure.c
>> > index df0694c..4e01082 100644
>> > --- v3.9-rc3.orig/mm/memor
Pavel Emelyanov writes:
> On 08/21/2012 02:42 PM, Aneesh Kumar K.V wrote:
>> Pavel Emelyanov writes:
>>
>>> On 08/20/2012 11:32 PM, J. Bruce Fields wrote:
>>>> On Mon, Aug 20, 2012 at 11:06:06PM +0400, Cyrill Gorcunov wrote:
>>>>> On Mon, Aug
xtra cur in the conversion.
The right changes are attached.
ext4: Convert list_for_each_rcu() to list_for_each_entry_rcu()
From: Aneesh Kumar K.V <[EMAIL PROTECTED]>
The list_for_each_entry_rcu() primitive should be used instead of
list_for_each_rcu(), as the former is easier to use and pr
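The shape of the conversion, shown on a made-up structure rather than the ext4 code in the patch (both loops are assumed to run under rcu_read_lock()):

        struct foo {
                int val;
                struct list_head list;
        };
        static LIST_HEAD(foo_list);

        struct list_head *p;
        struct foo *f;

        /* before: open-coded container lookup on each iteration */
        list_for_each_rcu(p, &foo_list) {
                f = list_entry(p, struct foo, list);
                pr_info("%d\n", f->val);
        }

        /* after: the iterator resolves the containing structure itself */
        list_for_each_entry_rcu(f, &foo_list, list)
                pr_info("%d\n", f->val);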
On Tue, Feb 05, 2008 at 10:36:23PM +0100, Miklos Szeredi wrote:
> From: Miklos Szeredi <[EMAIL PROTECTED]>
>
> Add the following:
>
> /proc/sys/fs/types/${FS_TYPE}/usermount_safe
>
There is /proc/fs// already. Since it is file-system specific,
shouldn't it go there?
-aneesh
--
To unsubscri
Cyrill Gorcunov writes:
> To provide fsnotify object inodes being watched without
> binding to alphabetical path we need to encode them with
> exportfs help. This patch adds a helper which operates
> with plain inodes directly.
Doesn't name_to_handle_at() work for you? It also allows getting a
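A hedged example of that existing interface (the returned handle can later be passed to open_by_handle_at()); the 128-byte size is a local assumption matching the kernel's MAX_HANDLE_SZ:

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdlib.h>

        static struct file_handle *handle_for_path(const char *path)
        {
                int mount_id;
                struct file_handle *fh = malloc(sizeof(*fh) + 128);

                if (!fh)
                        return NULL;
                fh->handle_bytes = 128;
                if (name_to_handle_at(AT_FDCWD, path, fh, &mount_id, 0) < 0) {
                        free(fh);
                        return NULL;
                }
                return fh;
        }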
l Gorcunov wrote:
>>>>> On Mon, Aug 20, 2012 at 07:49:23PM +0530, Aneesh Kumar K.V wrote:
>>>>>> Cyrill Gorcunov writes:
>>>>>>
>>>>>>> To provide fsnotify object inodes being watched without
>>>>>>> binding t
;
> Signed-off-by: Wanpeng Li
Reviewed-by: Aneesh Kumar K.V
> ---
> mm/hugetlb_cgroup.c |3 +++
> 1 files changed, 3 insertions(+), 0 deletions(-)
>
> diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
> index b834e8d..2b9e214 100644
> --- a/mm/hug
Wanpeng Li writes:
> From: Wanpeng Li
>
> hugepage_activelist is used to track currently used HugeTLB pages.
> We can find the in-use HugeTLB pages to support HugeTLB cgroup
> removal. Don't keep unused pages in hugepage_activelist too long.
> Otherwise, on cgroup removal we should update the unus
Wanpeng Li writes:
> On Wed, Jul 11, 2012 at 02:02:23PM +0530, Aneesh Kumar K.V wrote:
>>Wanpeng Li writes:
>>
>>> From: Wanpeng Li
>>>
>>> hugepage_activelist is used to track currently used HugeTLB pages.
>>> We can find the in-use HugeTLB
u.next for storing cgroup details.
> + */
> + if (h->order >= HUGETLB_CGROUP_MIN_ORDER)
> + __hugetlb_cgroup_file_init(idx);
Is it better to say ?
if (huge_page_order(h) >= HUGETLB_CGROUP_MIN_ORDER)
uffer(bh);
> > + if (buffer_uptodate(bh))
> > + return 0;
>
> Here it will unlock the buffer and return zero.
>
> This function is unusable when passed an unlocked buffer.
>
Updated patch below.
commit 70d4ca32604e0935a8b9a49c5ac8b9c64c810693
Author:
xt4_lblk_t first_block, last_block;
> + ext4_fsblk_t first_pblock, last_pblock;
> +};
>
Updated patch
commit c4786b67cdc5b24d2548a69b62774fb54f8f1575
Author: Aneesh Kumar K.V <[EMAIL PROTECTED]>
Date: Tue Jan 22 09:28:55 2008 +0530
ext4: Add EXT4_IOC_MIGRATE ioc