i'm still getting VM-related lockups during heavy write load, with
test9-pre5 + your 2.4.0-t9p2-vmpatch (which i understand to be your
latest VM-related fix-patch, correct?). Here is a histogram of such a
lockup:
1 Trace; 4010a720 <__switch_to+38/e8>
5 Trace; 4010a74b <__switch_to+63/e8>
13 Trace; 4010abc4
819 Trace; 4010abca
1806 Trace; 4010abce
1 Trace; 4010abd0
2 Trace; 4011af51
1 Trace; 4011af77
1 Trace; 4011b010
3 Trace; 4011b018
1 Trace; 4011b02d
1 Trace; 4011b051
1 Trace; 4011b056
2 Trace; 4011b05c
3 Trace; 4011b06d
4 Trace; 4011b076
537 Trace; 4011b2bb
2 Trace; 4011b2c6
1 Trace; 4011b2c9
4 Trace; 4011b2d5
31 Trace; 4011b31a
1 Trace; 4011b31d
1 Trace; 4011b32a
1 Trace; 4011b346
11 Trace; 4011b378
2 Trace; 4011b381
5 Trace; 4011b3f8
17 Trace; 4011b404
9 Trace; 4011b43f
1 Trace; 4011b450
1 Trace; 4011b457
2 Trace; 4011b48c
1 Trace; 4011b49c
428 Trace; 4011b4cd
6 Trace; 4011b4f7
4 Trace; 4011b500
2 Trace; 4011b509
1 Trace; 4011b560
1 Trace; 4011b809 <__wake_up+79/3f0>
1 Trace; 4011b81b <__wake_up+8b/3f0>
8 Trace; 4011b81e <__wake_up+8e/3f0>
310 Trace; 4011ba90 <__wake_up+300/3f0>
1 Trace; 4011bb7b <__wake_up+3eb/3f0>
2 Trace; 4011c32b
244 Trace; 4011d40e
1 Trace; 4011d411
1 Trace; 4011d56c
618 Trace; 4011d62e
2 Trace; 40122f28
2 Trace; 40126c3c
1 Trace; 401377ab
1 Trace; 401377c8
5 Trace; 401377cc
15 Trace; 401377d4
11 Trace; 401377dc
2 Trace; 401377e0
6 Trace; 401377ee
8 Trace; 4013783c
1 Trace; 401378f8
3 Trace; 4013792d
2 Trace; 401379af
2 Trace; 401379f3
1 Trace; 40138524 <__alloc_pages+7c/4b8>
1 Trace; 4013852b <__alloc_pages+83/4b8>
(the first column is the number of profiling hits; hits were taken on
all CPUs.)
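(for reference, a histogram of this shape is just an aggregation of raw
per-hit EIP values - the equivalent can be built with standard tools; the
sample addresses and file paths below are hypothetical, not the actual
capture:)

```shell
# Hypothetical sketch: take a raw list of trace addresses, one EIP per
# profiling hit, count duplicate addresses, then sort by address.
printf '4010abca\n4010abce\n4010abca\n4010abce\n4010abce\n' > /tmp/trace-eips.txt
sort /tmp/trace-eips.txt | uniq -c | sort -k2 > /tmp/trace-histogram.txt
cat /tmp/trace-histogram.txt
```

this yields one line per distinct address, hit count first, which is the
layout of the histogram above (symbol resolution, where available, is a
separate step).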
unfortunately i haven't captured which processes were running. This is an
8-CPU SMP box with 8 write-intensive processes running; they create new
1k-1MB files in new directories - many gigabytes in total.
this lockup happens both with vanilla test9-pre5 and with
2.4.0-t9p2-vmpatch. Your patch makes the lockup happen a bit later than
before, but it still happens. During the lockup all dirty buffers are
written out to disk until the system reaches this state:
2162688 pages of RAM
1343488 pages of HIGHMEM
116116 reserved pages
652826 pages shared
0 pages swap cached
0 pages in page table cache
Buffer memory:52592kB
CLEAN: 664 buffers, 2302 kbyte, 5 used (last=93), 0 locked, 0 protected, 0 dirty
LOCKED: 661752 buffers, 2646711 kbyte, 37 used (last=661397), 0 locked, 0 protected, 0 dirty
DIRTY: 17 buffers, 26 kbyte, 1 used (last=1), 0 locked, 0 protected, 17 dirty
no disk IO happens anymore, but the lockup persists. The histogram was
taken after all disk IO had stopped.
Ingo