Sequential writes to a file with a blocksize smaller than PAGE_SIZE will call
mark_page_accessed() multiple times for the same page,

	if (!pagevec_space(pvec))
		__pagevec_lru_add(pvec, lru);

It seems this trick fixes the problem, but not quite thoroughly: there's still a
chance that when another page was
On Thu, Oct 25, 2012 at 5:57 PM, Michal Hocko wrote:
> On Wed 24-10-12 11:44:17, Qiang Gao wrote:
>> On Wed, Oct 24, 2012 at 1:43 AM, Balbir Singh wrote:
>> > On Tue, Oct 23, 2012 at 3:45 PM, Michal Hocko wrote:
>> >> On Tue 23-10-12 18:10:33, Qiang Gao wrote:
On Wed, Oct 24, 2012 at 1:43 AM, Balbir Singh wrote:
> On Tue, Oct 23, 2012 at 3:45 PM, Michal Hocko wrote:
>> On Tue 23-10-12 18:10:33, Qiang Gao wrote:
>>> On Tue, Oct 23, 2012 at 5:50 PM, Michal Hocko wrote:
>>> > On Tue 23-10-12 15:18:48, Qiang Gao wrote:
On Tue, Oct 23, 2012 at 5:50 PM, Michal Hocko wrote:
> On Tue 23-10-12 15:18:48, Qiang Gao wrote:
>> This process was moved to the RT-priority queue when the global oom-killer
>> fired, to boost the recovery of the system.
>
> Who did that? The oom killer doesn't boost the priority (scheduling
A global oom kill is the right thing to do here, but an oom-killed process
hanging in do_exit is not normal behavior.
On Tue, Oct 23, 2012 at 5:01 PM, Sha Zhengju wrote:
> On 10/23/2012 11:35 AM, Qiang Gao wrote:
>>
>> information about the system is in the attached file "information.txt"
Michal Hocko wrote:
> On Tue 23-10-12 11:35:52, Qiang Gao wrote:
>> I'm sure this is a global oom, not a cgroup oom. [the dmesg output is at the end]
>
> Yes, this is the global oom killer, because:
>> cglimit -M 700M ./tt
>> then after the global oom, the process hangs.
>
>> 179184 pages RAM
>
> So you have ~700M of RAM, so the memcg
Qiang Gao wrote:
>> information about the system is in the attached file "information.txt"
>>
>> I cannot reproduce it on the upstream 3.6.0 kernel.
>>
>> On Sat, Oct 20, 2012 at 12:04 AM, Michal Hocko wrote:
>>> On Wed 17-10-12 18:23:34, gaoqiang wrote:
I don't know whether the process will exit eventually, but this stack
has lasted for hours, which is obviously abnormal.

The situation: we use a command called "cglimit" to fork-and-exec the
worker process, and "cglimit" sets some limits on the worker via
cgroups. For now, we limit the memory, and