From: Huang Ying <ying.hu...@intel.com>
To make the comments consistent with the already changed code.
Signed-off-by: "Huang, Ying" <ying.hu...@intel.com>
---
mm/huge_memory.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/huge_memory.c b/
Minchan Kim <minc...@kernel.org> writes:
> On Mon, Jun 13, 2016 at 05:02:15PM +0800, Huang, Ying wrote:
>> Linus Torvalds <torva...@linux-foundation.org> writes:
>>
>> > On Sat, Jun 11, 2016 at 5:49 PM, Huang, Ying <ying.hu...@intel.com> wrote:
>
"Kirill A. Shutemov" <kir...@shutemov.name> writes:
> On Tue, Jun 14, 2016 at 05:57:28PM +0900, Minchan Kim wrote:
>> On Wed, Jun 08, 2016 at 11:58:11AM +0300, Kirill A. Shutemov wrote:
>> > On Wed, Jun 08, 2016 at 04:41:37PM +0800, Huang, Ying wrote:
>>
"Huang, Ying" <ying.hu...@intel.com> writes:
> From: Huang Ying <ying.hu...@intel.com>
>
> madvise_free_huge_pmd should return 0 if the fallback PTE operations are
> required. In madvise_free_huge_pmd, if part pages of THP are discarded,
> the THP will be spl
From: Huang Ying <ying.hu...@intel.com>
madvise_free_huge_pmd should return 0 if the fallback PTE operations are
required. In madvise_free_huge_pmd, if some pages of the THP are discarded,
the THP will be split, and fallback PTE operations should be used if
splitting succeeds. But the origina
Minchan Kim <minc...@kernel.org> writes:
> Hi,
>
> On Thu, Jun 16, 2016 at 08:03:54PM -0700, Huang, Ying wrote:
>> From: Huang Ying <ying.hu...@intel.com>
>>
>> madvise_free_huge_pmd should return 0 if the fallback PTE operations are
>> required. I
the
> -fix patch applied during this testing?
The original test is for the linux-next tree of 2016-02-03, which doesn't
have the fix patch. The fix patch was merged into linux-next on
2016-02-04. I queued the same test for the fix patch and there is no
such bug as above.
Best Regards,
Huang, Ying
ement comes from the decreased cache
miss rate, because of the decreased working set, which comes from the
decreased pipe size, as follows,
7.478e+09 ± 4% -54.8% 3.38e+09 ± 36% perf-stat.LLC-load-misses
2.087e+11 ± 3% -3.6% 2.012e+11 ± 37% perf-stat.LLC-
rc1 6ffc77f48b85ed9ab9a7b2754a
>> --
>> %stddev %change %stddev
>> \ |\
>> 383.55 . 1% +15.8% 444.19 . 0% aim9.shell_rtns_2.ops_per_sec
>
> That means it is
d there show no regression.
So I think the patch keeps the optimistic spinning. Test result
details will be in the next email.
Best Regards,
Huang, Ying
huang ying <huang.ying.cari...@gmail.com> writes:
> On Sat, Jan 30, 2016 at 9:18 AM, Ding Tianhong <dingtianh...@huawei.com>
> wrote:
>> On 2016/1/29 17:53, Peter Zijlstra wrote:
>>> On Sun, Jan 24, 2016 at 04:03:50PM +0800, Ding Tianhong wrote:
>
t; reschedule. So although the
> chance of starvation is reduced, this patch doesn't fully address the
> issue of waiter starvation.
Could you share your workload? I want to reproduce it in 0day/LKP+ environment.
Best Regards,
Huang, Ying
>> [truncated ASCII plot: bisect-bad (O) samples cluster near 40, bisect-good (*) samples cluster near 20-25]
>>
>>
>> [*] bisect-good sample
>> [O] bisect-bad sample
>>
>> To reproduce:
>>
>> git clone
>> git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
>> cd lkp-tests
>> bin/lkp install job.yaml # job file is attached in this email
>> bin/lkp run job.yaml
>>
>>
>> Disclaimer:
>> Results have been estimated based on internal Intel analysis and are provided
>> for informational purposes only. Any difference in system hardware or
>> software
>> design or configuration may affect actual performance.
>>
>>
>> Thanks,
>> Ying Huang
>
> Thanks...
>
> Huh...I'm stumped on this one. If anything I would have expected better
> performance with this patch since we don't even take the file_lock or
> do the fcheck in the F_UNLCK codepath now, or when there is an error.
>
> I'll see if I can reproduce it on my own test rig, but I'd welcome
> ideas of where and how this performance regression could have crept in.
This is a performance increase, not a performance regression.
Best Regards,
Huang, Ying
that.
Best Regards,
Huang, Ying
> ---
> From bf51ab83e9a71cefa9b07336902f9b30931bda19 Mon Sep 17 00:00:00 2001
> From: Christoph Hellwig <h...@lst.de>
> Date: Fri, 26 Feb 2016 11:02:14 +0100
> Subject: configfs: switch ->default groups to a linked list
>
> Replace th
Linus Walleij <linus.wall...@linaro.org> writes:
> On Mon, Feb 15, 2016 at 3:39 AM, Huang, Ying <ying.hu...@intel.com> wrote:
>> Michael Welling <mwell...@ieee.org> writes:
>
>>> Could you run cat /proc/devices?
>>
>> Sorry, the test mechanism
Sudip Mukherjee <sudipm.mukher...@gmail.com> writes:
> On Wed, Jan 20, 2016 at 01:00:40PM +0800, Huang, Ying wrote:
>> Sudip Mukherjee <sudipm.mukher...@gmail.com> writes:
>>
>> > On Wed, Jan 20, 2016 at 08:44:37AM +0800, kernel test robot wrote:
>
2.87 ± 7% +42.6% 4.09 ± 1% -0.3% 2.86 ± 2% perf-profile.cycles-pp.touch_atime.shmem_file_read_iter.__vfs_read.vfs_read.sys_pread64
6.68 ± 2% -7.3% 6.19 ± 1% -6.7% 6.23 ± 1% perf-profile.cycles-pp.unlock_page.shmem_file_read_iter.__vfs_read.vfs_read.sys_pread64
Best Regards,
Huang, Ying
Dan Carpenter <dan.carpen...@oracle.com> writes:
> On Tue, Jan 26, 2016 at 08:32:48AM +0800, Huang, Ying wrote:
>> Dan Carpenter <dan.carpen...@oracle.com> writes:
>>
>> > On Mon, Jan 25, 2016 at 03:13:21PM +0530, Sudip Mukherjee wrote:
>> >> Apar
KML.
Sure. But what is the best way to find the subsystem list for a patch?
Now we use the author, the committer, and the xxx-by: and Cc: lists in
the patch to find the recipients.
Best Regards,
Huang, Ying
is does is hurt lkml-searchability.
Sorry, I don't understand this. You could still search for the original
patch. Could you explain a little?
Best Regards,
Huang, Ying
> Thanks,
> Davidlohr
> ___
> LKP mailing list
> l...@lists.01.org
> https://lists.01.org/mailman/listinfo/lkp
Sudip Mukherjee <sudipm.mukher...@gmail.com> writes:
> Hi Huang, Ying,
> On Thu, Jan 21, 2016 at 11:36:52AM +0530, Sudip Mukherjee wrote:
>> On Thu, Jan 21, 2016 at 01:47:10PM +0800, Huang, Ying wrote:
>> > Sudip Mukherjee <sudipm.mukher...@gmail.com> writes:
d-yocto-x86_64-62 -serial
> file:/dev/shm/kboot/serial-vm-kbuild-yocto-x86_64-62 -daemonize
> -display none -monitor null
>>
>
> Hello Ying,
>
> Thanks for the report. Is this a clean build?
It's not a clean build.
> I cannot reproduce the
> issue with the attached config. Do you have the vmlinux file available
> for inspection?
Sorry, we have no vmlinux file available.
BTW, we use gcc-5 to compile the kernel. Do you use gcc-5 or gcc-4?
Best Regards,
Huang, Ying
> Thanks,
> Ard.
ocumented to be using the range:
> https://www.kernel.org/doc/Documentation/devices.txt
>
> Could you run cat /proc/devices?
>
Sorry, the test mechanism is not flexible enough to run shell commands
in the test system. Could you provide a specialized debug kernel to
dump the necessary information in the kernel log? We can collect dmesg
easily.
Best Regards,
Huang, Ying
dful of times upstream. Here is one of the
> previous
> reports: https://lkml.org/lkml/2013/8/11/149
>
> This series resolves the reported leaks and kmemleak is clean with these
> patches applied.
>
> Joshua Hunt (2):
> ACPI, APEI: Fix leaked resources
> ACPI, APEI, ERST: Fixed leaked resources in erst_init
>
> drivers/acpi/apei/apei-base.c |5 +
> drivers/acpi/apei/erst.c |2 ++
> 2 files changed, 7 insertions(+)
Do you have time to take a look at this patchset?
Best Regards,
Huang, Ying
.
Your input is valuable for us. We are discussing how to improve our
reporting to be more helpful for kernel developers. We will get back to
you soon on this.
Best Regards,
Huang, Ying
> Thanks,
>
> Ingo
>
idfile
/dev/shm/kboot/pid-vm-kbuild-4G-3 -serial
file:/dev/shm/kboot/serial-vm-kbuild-4G-3 -daemonize -display none -monitor
null
This is the qemu command line we used for testing.
Best Regards,
Huang, Ying
> FWIW, see below for hopefully cleaner fix (will fold once I manage to trigg
> the patch I queued yesterday, so today's tree should work fine.
> So, let me know if this is still seen with the upcoming tree.
Sure.
Best Regards,
Huang, Ying
Darren Hart <dvh...@infradead.org> writes:
> On Mon, Mar 21, 2016 at 04:42:47PM +0800, Huang, Ying wrote:
>> Thomas Gleixner <t...@linutronix.de> writes:
>>
>> > On Mon, 21 Mar 2016, Huang, Ying wrote:
>> >> > FYI, we
	u32 error_severity;
	u16 revision;
	u8 validation_bits;
	u8 flags;
	u32 error_data_length;
	u8 fru_id[16];
	u8 fru_text[20];
};
If section_type was defined as uuid_le instead of u8[16], the
uuid_le_cmp usage would look better. So I suggest using uuid_
On Fri, Apr 8, 2016 at 6:00 PM, Andy Shevchenko
<andriy.shevche...@linux.intel.com> wrote:
> On Fri, 2016-04-08 at 09:27 +0800, Huang, Ying wrote:
>> Andy Shevchenko <andriy.shevche...@linux.intel.com> writes:
>>
>> >
>> > On Fri, 201
he change and help for root causing the
changes. But we know, we are still far from there, so your input is
important for us to improve our test report. Which part of the report
do you think
Hi, Thomas,
Thanks a lot for your valuable input!
Thomas Gleixner <t...@linutronix.de> writes:
> On Fri, 18 Mar 2016, Huang, Ying wrote:
>> Usually we will put most important change we think in the subject of the
>> mail, for this email, it is,
>>
>> +25
Thomas Gleixner <t...@linutronix.de> writes:
> On Mon, 21 Mar 2016, Huang, Ying wrote:
>> > FYI, we noticed 25.6% performance improvement due to commit
>> >
>> >65d8fc777f6d "futex: Remove requirement for lock_page() in
>> > get_futex_key(
le
/dev/shm/kboot/pid-vm-client3-openwrt-ia32-2 -serial
file:/dev/shm/kboot/serial-vm-client3-openwrt-ia32-2 -daemonize -display none
-monitor null
The root file system used for test can be downloaded in:
https://github.com/fengguang/reproduce-kernel-bug/tree/master/initrd
The one used
,
Huang, Ying
Darren Hart <dvh...@infradead.org> writes:
> On Tue, Mar 29, 2016 at 09:12:56AM +0800, Huang, Ying wrote:
>> Darren Hart <dvh...@infradead.org> writes:
>>
>> > On Mon, Mar 21, 2016 at 04:42:47PM +0800, Huang, Ying wrote:
>> >> Thomas Gleixner &l
Michal Hocko <mho...@kernel.org> writes:
> On Wed 27-04-16 16:20:43, Huang, Ying wrote:
>> Michal Hocko <mho...@kernel.org> writes:
>>
>> > On Wed 27-04-16 11:15:56, kernel test robot wrote:
>> >> FYI, we noticed vm-scalability.throughput -1
-scalability is swap-w-rand. A RAM emulated pmem
device is used as a swap device, and a test program will allocate/write
anonymous memory randomly to exercise the page allocation, reclaim, and
swapping code paths.
Best Regards,
Huang, Ying
Eric Dumazet <eduma...@google.com> writes:
> On Mon, May 9, 2016 at 6:26 PM, Huang, Ying <ying.hu...@linux.intel.com>
> wrote:
>> Hi, Eric,
>>
>> kernel test robot <ying.hu...@linux.intel.com> writes:
>>> FYI, we noticed the following commi
"Huang, Ying" <ying.hu...@intel.com> writes:
> Eric Dumazet <eduma...@google.com> writes:
>> On Mon, May 9, 2016 at 6:26 PM, Huang, Ying <ying.hu...@linux.intel.com>
>> wrote:
>>> Hi, Eric,
>>>
>>> kernel test robot <ying
y to reproduce that without __d_move()
> or __d_add() parts and see which one ends up triggering that crap?
Could you provide a debug branch in your tree for that?
Best Regards,
Huang, Ying
> Note that we don't *use* ->i_dir_seq yet...
Al Viro <v...@zeniv.linux.org.uk> writes:
Hi, Viro,
> On Mon, Apr 18, 2016 at 10:08:37AM +0800, Huang, Ying wrote:
>
>> Could you provide a debug branch in your tree for that?
>
> Done - vfs.git#T1 and vfs.git#T2 resp.
The comparison result betwee
s could queue
> flush tlb to the same CPU but only one IPI will be sent.
>
> Since the commit enter Linux v3.19, the counting problem only shows up
> from v3.19. Considering this is a behaviour change, I'm not sure if I
> should add the stable tag here.
>
> Signed-off-by: Aaron Lu <aaron...@intel.com>
Thanks for the fix. You forgot to add :)
Reported-by: "Huang, Ying" <ying.hu...@intel.com>
Best Regards,
Huang, Ying
% 3704 ± 23% -4.5% 4708 ± 21% sched_debug.cpu.nr_load_updates.avg
276.50 ± 10% -4.4% 264.20 ± 7% -14.3% 237.00 ± 19% sched_debug.cpu.nr_switches.min
Best Regards,
Huang, Ying
Linus Torvalds <torva...@linux-foundation.org> writes:
> On Wed, Aug 10, 2016 at 5:11 PM, Huang, Ying <ying.hu...@intel.com> wrote:
>>
>> Here is the comparison result with perf-profile data.
>
> Heh. The diff is actually harder to read than just showing A/B
>
ycles-pp.vfs_write": 0.94,
"perf-profile.func.cycles-pp.xfs_bmapi_delay": 0.93,
"perf-profile.func.cycles-pp.iomap_write_actor": 0.9,
"perf-profile.func.cycles-pp.pagecache_get_page": 0.89,
"perf-profile.func.cycles-pp.xfs_file_write_iter": 0.86,
"perf-profile.func.cycles-pp.xfs_file_iomap_begin": 0.81,
"perf-profile.func.cycles-pp.iov_iter_copy_from_user_atomic": 0.78,
"perf-profile.func.cycles-pp.iomap_apply": 0.77,
"perf-profile.func.cycles-pp.generic_write_end": 0.74,
"perf-profile.func.cycles-pp.xfs_file_buffered_aio_write": 0.72,
"perf-profile.func.cycles-pp.find_get_entry": 0.69,
"perf-profile.func.cycles-pp.__vfs_write": 0.67,
Best Regards,
Huang, Ying
Hi, Dave,
Dave Hansen <dave.han...@intel.com> writes:
> On 08/09/2016 09:17 AM, Huang, Ying wrote:
>> File pages use a set of radix tags (DIRTY, TOWRITE, WRITEBACK) to
>> accelerate finding the pages with the specific tag in the radix tree
>> during writing back a
From: Huang Ying <ying.hu...@intel.com>
This patch makes it possible to charge or uncharge a set of contiguous
swap entries in the swap cgroup. The number of swap entries is specified
via an added parameter.
This will be used for THP (Transparent Huge Page) swap support, where a
whole swap c
From: Huang Ying <ying.hu...@intel.com>
Swap cgroup uses a discontinuous array to store the information for the
swap entries. lookup_swap_cgroup() provides good encapsulation for
accessing one element of the discontinuous array. To make it easier to
access multiple elements of the discont
From: Huang Ying <ying.hu...@intel.com>
In this patch, the size of the swap cluster is changed to that of the
THP on x86_64 (512). This is for THP (Transparent Huge Page) swap
support on x86_64, where one swap cluster will be used to hold the
contents of each THP swapped out. And some infor
From: Huang Ying <ying.hu...@intel.com>
This is a code clean-up patch without functionality changes. The
swap_cluster_list data structure and its operations are introduced to
provide better encapsulation for the free cluster and discard cluster
list operations. This avoids some code dupli
Hi, All,
"Huang, Ying" <ying.hu...@intel.com> writes:
> From: Huang Ying <ying.hu...@intel.com>
>
> This patchset is based on 8/4 head of mmotm/master.
>
> This is the first step for Transparent Huge Page (THP) swap support.
> The plan is to delay
ofiles usually are not all that easy to make
> sense of either. But comparing the before and after state might give
> us clues.
I have started perf-profile data collection, will send out the
comparison result soon.
Best Regards,
Huang, Ying
"Huang, Ying" <ying.hu...@intel.com> writes:
> Hi, Linus,
>
> Linus Torvalds <torva...@linux-foundation.org> writes:
>
>> On Wed, Aug 10, 2016 at 4:08 PM, Dave Chinner <da...@fromorbit.com> wrote:
>>>
>>> That, to me, says there's
Hi, Kim,
"Huang, Ying" <ying.hu...@intel.com> writes:
>>
>> [lkp] [f2fs] 3bdad3c7ee: aim7.jobs-per-min -25.3% regression
>> [lkp] [f2fs] b93f771286: aim7.jobs-per-min -81.2% regression
>>
>> In terms of the above regression, I could check that _reprod
Hi,
I checked the comparison result below and found this is a regression for
fsmark.files_per_sec, not fsmark.app_overhead.
Best Regards,
Huang, Ying
kernel test robot <xiaolong...@intel.com> writes:
> FYI, we noticed a -36.3% regression of fsmark.files_per_sec due to commit:
&
From: Huang Ying <ying.hu...@intel.com>
The definition of the return value of madvise_free_huge_pmd was not
clear before. Following the suggestion of Minchan Kim, change the type
of the return value to bool and return true if we do MADV_FREE
successfully on the entire pmd page; otherwise, return
From: Huang Ying <ying.hu...@intel.com>
madvise_free_huge_pmd should return 0 if the fallback PTE operations are
required. In madvise_free_huge_pmd, if some pages of the THP are discarded,
the THP will be split, and fallback PTE operations should be used if
splitting succeeds. But the origina
Jaegeuk Kim <jaeg...@kernel.org> writes:
> On Thu, Aug 04, 2016 at 10:44:20AM -0700, Huang, Ying wrote:
>> Jaegeuk Kim <jaeg...@kernel.org> writes:
>>
>> > Hi Huang,
>> >
>> > On Thu, Aug 04, 2016 at 10:00:41AM -0700, Huang, Ying wrote:
From: Huang Ying <ying.hu...@intel.com>
File pages use a set of radix tags (DIRTY, TOWRITE, WRITEBACK) to
accelerate finding the pages with the specific tag in the radix tree
during writing back an inode. But for anonymous pages in swap cache,
there are no inode based writebac
From: Huang Ying <ying.hu...@intel.com>
The swap cluster allocation/free functions are added based on the
existing swap cluster management mechanism for SSDs. These functions
don't work for traditional hard disks, because the existing swap cluster
management mechanism doesn't work for them. Th
From: Huang Ying <ying.hu...@intel.com>
__swapcache_free() is added to support clearing SWAP_HAS_CACHE for a
huge page. This will free the specified swap cluster for now, because
for now this function will be called only in the error path, to free the
swap cluster just allocated. So the corresp
From: Huang Ying <ying.hu...@intel.com>
This patchset is based on 8/4 head of mmotm/master.
This is the first step for Transparent Huge Page (THP) swap support.
The plan is to delay splitting the THP step by step and finally avoid
splitting the THP during swapping out and sw
From: Huang Ying <ying.hu...@intel.com>
This patch enhances split_huge_page_to_list() to work properly for a
THP (Transparent Huge Page) in the swap cache during swapping out.
This is used to delay splitting the THP during swapping out, where for
a THP to be swapped out, we will allocate
From: Huang Ying <ying.hu...@intel.com>
In this patch, splitting the huge page is delayed from almost the first
step of swapping out to after allocating the swap space for the THP and
adding the THP into the swap cache. This will reduce lock
acquiring/releasing for the locks used for the swap space an
From: Huang Ying <ying.hu...@intel.com>
The check for whether we can split a huge page is separated out of
split_huge_page_to_list() into its own function. This helps to run that
check before really splitting the THP (Transparent Huge Page).
This will be used to delay splitting the THP during swappi
From: Huang Ying <ying.hu...@intel.com>
With this patch, a THP (Transparent Huge Page) can be added/deleted
to/from the swap cache as a set of sub-pages (512 on x86_64).
This will be used for Transparent Huge Page (THP) swap support, where
one THP may be added/deleted to/from the swap
From: Huang Ying <ying.hu...@intel.com>
A variation of get_swap_page(), get_huge_swap_page(), is added to
allocate a swap cluster (512 swap slots) based on the swap cluster
allocation function. A fairly simple algorithm is used, that is, only
the first swap device in the priority list will be
f007ee6? I tried and seemed they can't apply clearly on it.
>>
>
> Please check attached updated debug patches.
>
> You should apply them after ed9f007ee6 in following sequence
>
> revert_79063a7.patch
> early_console_more_2_2x.patch
> early_console_more_2_2x_add_0.patch
> early_console_more_2_2x_add_1.patch
If you could provide a git branch for that, it would be easier for us
to test and more accurate for you to get the right patch tested.
Best Regards,
Huang, Ying
Hi, Chinner,
Dave Chinner <da...@fromorbit.com> writes:
> On Wed, Aug 10, 2016 at 06:00:24PM -0700, Linus Torvalds wrote:
>> On Wed, Aug 10, 2016 at 5:33 PM, Huang, Ying <ying.hu...@intel.com> wrote:
>> >
>> > Here it is,
>>
>> Thanks.
>>
fastpath": 0.79,
"perf-profile.func.cycles-pp.__might_sleep": 0.79,
"perf-profile.func.cycles-pp.xfs_file_iomap_begin_delay.isra.9": 0.7,
"perf-profile.func.cycles-pp.__list_del_entry": 0.7,
"perf-profile.func.cycles-pp.vfs_write": 0.69,
"perf-profile.func.cycles-pp.drop_buffers": 0.68,
"perf-profile.func.cycles-pp.xfs_file_write_iter": 0.67,
"perf-profile.func.cycles-pp.rwsem_spin_on_owner": 0.67,
Best Regards,
Huang, Ying
Hi, Kim,
Minchan Kim <minc...@kernel.org> writes:
> Hello Huang,
>
> On Tue, Aug 09, 2016 at 09:37:42AM -0700, Huang, Ying wrote:
>> From: Huang Ying <ying.hu...@intel.com>
>>
>> This patchset is based on 8/4 head of mmotm/master.
>>
>> Th
000,netdev=net0 -netdev user,id=net0 -boot
> order=nc -no-reboot -watchdog i6300esb -watchdog-action debug -rtc
> base=localtime -drive
> file=/fs/KVM/disk0-vm-intel12-openwrt-i386-1,media=disk,if=virtio
> -drive
> file=/fs/KVM/disk1-vm-intel12-openwrt-i386-1,media=disk,if=virtio
> -pidfile /dev/shm/kboot/pid-vm-intel12-openwrt-i386-1 -serial
> file:/dev/shm/kboot/serial-vm-intel12-openwrt-i386-1 -daemonize
> -display none -monitor null
>>
>>
>>
>>
>>
>> Thanks,
>> Xiaolong
>>
> can you please provide more info to help reproduce this crash?
> on which operating system did this happen?
> which HCA device was the rxe device attached to? mlx4 or mlx5?
> thanks
The test is done in a virtual machine, and it failed during the boot
stage, so I think the root file system is not relevant. And there is no
rxe device in the virtual machine. So I guess your driver init code may
not run properly when built in and without the real device.
Best Regards,
Huang, Ying
Hi, Minchan,
Minchan Kim <minc...@kernel.org> writes:
> We changed swap_cluster_info lock from bit_spin_lock to spinlock
> so we need to initialize the spinlock before the using. Otherwise,
> lockdep is broken.
>
> Cc: Tim Chen <tim.c.c...@linux.intel.com>
> Cc: Hu
t. There is at least one nested
locking in cluster_list_add_tail(), and there are comments describing
why it is safe. I will try to reproduce this and fix it.
Best Regards,
Huang, Ying
> Thanks.
>
> =
> [ INFO: possible recursive loc
Hi, Vincent,
Vincent Guittot <vincent.guit...@linaro.org> writes:
> On 4 January 2017 at 04:08, Huang, Ying <ying.hu...@intel.com> wrote:
>> Vincent Guittot <vincent.guit...@linaro.org> writes:
>>
>>>>
>>>> Vincent, like we discussed i
or 7 hours of load
> (but timings show it getting slower leading up to that). /proc/meminfo
> did not give me an immediate clue, Slab didn't look surprising but
> I may not have studied close enough.
Thanks for your information!
Memory newly allocated in the mm-swap series is allocated
"Huang, Ying" <ying.hu...@intel.com> writes:
> Hi, Hugh,
>
> Hugh Dickins <hu...@google.com> writes:
>
>> On Thu, 16 Feb 2017, Tim Chen wrote:
>>>
>>> > I do not understand your zest for putting wrappers around every little
>>
Borislav Petkov <b...@suse.de> writes:
> On Thu, Feb 16, 2017 at 06:24:39PM +0800, Huang, Ying wrote:
>> The NVS area is excluded when request the resources, because the NVS
>> area has been marked as busy already. But the whole BERT memory area is
>> mapped, so
Hugh Dickins <hu...@google.com> writes:
> On Thu, 16 Feb 2017, Huang, Ying wrote:
>
>> Hi, Minchan,
>>
>> Minchan Kim <minc...@kernel.org> writes:
>>
>> > Hi Huang,
>> >
>> > With changing from bit lock to spinlock of swa
ut didn't manage to narrow it down further because of hitting
> a presumably different issue inside the series, where swapoff ENOMEMed
> much sooner (after 25 mins one time, during first iteration the next).
I found a memory leak in __read_swap_cache_async() introduced by m
Borislav Petkov <b...@suse.de> writes:
> On Fri, Feb 17, 2017 at 08:31:12AM +0800, Huang, Ying wrote:
>> I am not sure whether the NVS area is a part of the BERT area, but
>> apparently they are overlapped in some way. We will access the whole
>> BERT area here
From: Huang Ying <ying.hu...@intel.com>
It was reported that on some machines, there is overlap between ACPI
NVS area and BERT address range. This appears reasonable because BERT
contents need to be non-volatile across reboot. But this will cause
resources conflict in current Linux
From: Huang Ying <ying.hu...@intel.com>
It was reported that some firmware will use ACPI NVS area for BERT
address range. This will cause resources conflict because the ACPI
NVS area is marked as busy already. Fix this via excluding ACPI NVS
area when requesting IO resources for BERT.
Re
Byungchul Park <byungchul.p...@lge.com> writes:
> On Mon, Feb 13, 2017 at 04:58:05PM +0900, Byungchul Park wrote:
>> On Mon, Feb 13, 2017 at 03:52:44PM +0800, Huang, Ying wrote:
>> > Byungchul Park <byungchul.p...@lge.com> writes:
>> >
>> > > On
llist_for_each,
> that is, llist_for_each_safe.
>
> Signed-off-by: Byungchul Park <byungchul.p...@lge.com>
Reviewed-by: "Huang, Ying" <ying.hu...@intel.com>
Best Regards,
Huang, Ying
> ---
> include/linux/llist.h | 19 +++
> 1 file changed, 19 insertions(+)
>
&g
ore.c | 13 ++---
> mm/vmalloc.c| 8 +++-
I think you need to split the patch, one for each subsystem; this makes
it easier to review and merge.
Best Regards,
Huang, Ying
[snip]
Byungchul Park <byungchul.p...@lge.com> writes:
> On Mon, Feb 13, 2017 at 03:36:33PM +0800, Huang, Ying wrote:
>> Byungchul Park <byungchul.p...@lge.com> writes:
>>
>> > Sometimes we have to dereference next field of llist node before entering
>>
& ((n) = (pos)->next, true); (pos) = (n))
> +
Following the style of the other xxx_for_each_safe helpers,
#define llist_for_each_safe(pos, n, node) \
	for ((pos) = (node), (n) = (pos) ? (pos)->next : NULL; (pos); \
	     (pos) = (n), (n) = (pos) ? (pos)->next : NULL)
Best Regards,
Huang, Ying
&
g in my tests. Could you try the
patches as below? And could you share your test case?
Best Regards,
Huang, Ying
->
>From 2b9e2f78a6e389442f308c4f9e8d5ac40fe6aa2f Mon Sep 17 00:00:00 2001
From: Huang Ying <ying.hu...@intel.com
Borislav Petkov <b...@suse.de> writes:
> On Thu, Feb 16, 2017 at 12:42:00AM +0100, Rafael J. Wysocki wrote:
>> On Tuesday, February 14, 2017 10:01:13 AM Huang, Ying wrote:
>> > From: Huang Ying <ying.hu...@intel.com>
>> >
>> > It was reported th
Hi, Andrew,
This update patch fixes the preemption warning raised by Michal
Hocko. raw_cpu_ptr() is used to replace this_cpu_ptr(), and comments are
added to explain why it is used.
Best Regards,
Huang, Ying
-->
From: Tim Chen <tim.c.c...@linux.int
the preemption is not a way forward. So this is either preempt safe
> for some reason - which should be IMHO documented in a comment - and
> raw_cpu_ptr can be used or this needs a deeper thought.
Thanks for pointing this out.
We think this is preempt safe. During the development, we have
Minchan Kim <minc...@kernel.org> writes:
> Hi Huang,
>
> On Thu, Aug 18, 2016 at 10:19:32AM -0700, Huang, Ying wrote:
>> Minchan Kim <minc...@kernel.org> writes:
>>
>> > Hi Tim,
>> >
>> > On Wed, Aug 17, 2016 at 10:24:56AM -0700, Tim
Minchan Kim <minc...@kernel.org> writes:
> Hi Tim,
>
> On Wed, Aug 17, 2016 at 10:24:56AM -0700, Tim Chen wrote:
>> On Wed, 2016-08-17 at 14:07 +0900, Minchan Kim wrote:
>> > On Tue, Aug 16, 2016 at 07:06:00PM -0700, Huang, Ying wrote:
>> > >
&g
Minchan Kim <minc...@kernel.org> writes:
> On Thu, Aug 18, 2016 at 08:44:13PM -0700, Huang, Ying wrote:
>> Minchan Kim <minc...@kernel.org> writes:
>>
>> > Hi Huang,
>> >
>> > On Thu, Aug 18, 2016 at 10:19:32AM -0700, Huang, Yin
;perf-stat.branch-miss-rate": [
0.039853905034485354,
0.0402472142423231,
0.04380682345704418,
0.04319082390667179
],
branch-miss-rate decreased from ~0.30% to ~0.043%.
So I guess there is some code alignment change, which caused the
decreased branch miss rate.
Best Regards,
Huang, Ying
Hi, Jaegeuk,
"Huang, Ying" <ying.hu...@intel.com> writes:
> Hi,
>
> I checked the comparison result below and found this is a regression for
> fsmark.files_per_sec, not fsmark.app_overhead.
>
> Best Regards,
> Huang, Ying
>
> kernel test robot <xiaol
From: Huang Ying <ying.hu...@intel.com>
Before using the cluster lock in free_swap_and_cache(), the
swap_info_struct->lock will be held during freeing the swap entry and
acquiring the page lock, so the page swap count will not change when
testing the page information later. But after using the clu
Hi, Christoph,
"Huang, Ying" <ying.hu...@intel.com> writes:
> Christoph Hellwig <h...@lst.de> writes:
>
>> Snipping the long contest:
>>
>> I think there are three observations here:
>>
>> (1) removing the mark_page_accessed (which i