Re: reiser4 resize

2006-09-20 Thread Alexey Polyakov

On 9/19/06, David Masover [EMAIL PROTECTED] wrote:


When I have over a
gig of RAM free (not even buffer/cache, but _free_), and am trying to
download anything over BitTorrent, even if it's less than 200 megs, the
disk thrashes so badly that the system is really only usable for web and
email.  Even movies will occasionally stall when this is happening, and
by occasionally, I mean every minute or so.


Do you have this problem on plain vanilla + reiser4? I think some
out-of-tree patches (like adaptive readahead?) may result in such
behaviour, independent of the file system.
With a gig of RAM free, reiser4 shouldn't access the disk much, only during flushes.

--
Alexey Polyakov


Re: reiser4 resize

2006-09-20 Thread Alexey Polyakov

On 9/20/06, Ɓukasz Mierzwa [EMAIL PROTECTED] wrote:


It's been shown that flushes do much more work than they should.
Not long ago someone sent a trace of block-device I/O accesses during
reiser4 operation, and analysis of it showed that some files, or parts of
files, were written over and over, 200 times or so. BitTorrent downloads on
a few-months-old, heavily used filesystem simply expose a weak
spot in reiser4: while downloading files with Azureus at only 64KB of
data per second, the disk LED was on almost all the time. Switching to
rtorrent helped, as it does not seem to call fsync (I think I had disabled
fsync in Azureus).


Ah, I see. If BitTorrent calls fsync often, it's no wonder that
reiser4 behaves badly. I had to preload libnosync for some of my
programs that call fsync to avoid this.
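For reference, libnosync is an LD_PRELOAD shim of this sort; a minimal sketch of the idea (the real library's contents are assumed here, not quoted):

```c
/* nosync.c: LD_PRELOAD shim that turns fsync()/fdatasync() into no-ops,
 * so fsync-happy programs stop forcing synchronous write-out.
 * Build: gcc -shared -fPIC -o libnosync.so nosync.c
 * Run:   LD_PRELOAD=./libnosync.so <program>
 * Note: this trades away the durability guarantees fsync provides. */

/* Signatures match the libc prototypes; returning 0 reports success
 * without touching the disk. */
int fsync(int fd)
{
	(void)fd;
	return 0;
}

int fdatasync(int fd)
{
	(void)fd;
	return 0;
}
```

Because the shim is loaded before libc, the dynamic linker resolves the program's fsync calls to these stubs instead of the real syscall wrappers.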

--
Alexey Polyakov


reiser4 causing: kernel: general protection fault

2006-07-26 Thread Alexey Polyakov

Hi!
I'm using vanilla 2.6.17.1 x86_64 with reiser4-for-2.6.16-4.patch (no
other patches besides pom-ng).
It was working fine for about a month, but recently I got the
following error in the log:

Jul 26 12:37:49 titanic kernel: general protection fault:  [1] SMP
Jul 26 12:37:49 titanic kernel: CPU 3
Jul 26 12:37:49 titanic kernel: Modules linked in: oprofile
Jul 26 12:37:49 titanic kernel: Pid: 1499, comm: ktxnmgrd:i2o/hd Not
tainted 2.6.17.1 #1
Jul 26 12:37:49 titanic kernel: RIP: 0010:[80354ecb]
80354ecb{load_and_lock_bnode+27}
Jul 26 12:37:49 titanic kernel: RSP: 0018:8100f5a0b9e8  EFLAGS: 00010296
Jul 26 12:37:49 titanic kernel: RAX: 00058cef7d4fa141 RBX:
002c297bea9b4a08 RCX: 8101fbca2cc0
Jul 26 12:37:49 titanic kernel: RDX: c21e4000 RSI:
8100f5a0ba38 RDI: 002c297bea9b4a08
Jul 26 12:37:49 titanic kernel: RBP: 002c297bea9b4a08 R08:
8100f5a0ba44 R09: 80078071
Jul 26 12:37:49 titanic kernel: R10:  R11:
80247ab0 R12: 8100f5a0bad8
Jul 26 12:37:49 titanic kernel: R13: 8101fc748000 R14:
810139d5bd80 R15: 80355b40
Jul 26 12:37:49 titanic kernel: FS:  2b68be055a40()
GS:810100186840() knlGS:
Jul 26 12:37:49 titanic kernel: CS:  0010 DS: 0018 ES: 0018 CR0:
8005003b
Jul 26 12:37:49 titanic kernel: CR2: 2affd237f000 CR3:
00019a015000 CR4: 06e0
Jul 26 12:37:49 titanic kernel: Process ktxnmgrd:i2o/hd (pid: 1499,
threadinfo 8100f5a0a000, task 810100313830)
Jul 26 12:37:49 titanic kernel: Stack: 0688
 8101f9c3b978 ff00
Jul 26 12:37:49 titanic kernel:80354dd1
002c297bea9b4a08  8100f5a0bad8
Jul 26 12:37:49 titanic kernel:8101fc748000 80355bb6
Jul 26 12:37:49 titanic kernel: Call Trace:
80354dd1{reiser4_adler32+49}
80355bb6{apply_dset_to_commit_bmap+118}
Jul 26 12:37:49 titanic kernel:
8032db79{blocknr_set_iterator+89}
80355deb{pre_commit_hook_bitmap+411}
Jul 26 12:37:49 titanic kernel:
80249f3b{try_to_wake_up+1003}
80321179{pre_commit_hook+9}
Jul 26 12:37:49 titanic kernel:
803287df{reiser4_write_logs+79}
80286480{__wake_up_common+64}
Jul 26 12:37:49 titanic kernel:
8022e6f3{__wake_up+67} 80265c2c{__up_wakeup+53}
Jul 26 12:37:49 titanic kernel:
80247ab0{mempool_free_slab+0}
8031b826{.text.lock.lock+5}
Jul 26 12:37:49 titanic kernel:
80321efe{atom_send_event+110}
80247ab0{mempool_free_slab+0}
Jul 26 12:37:49 titanic kernel:
80321659{txnh_get_atom+41}
803216ed{get_current_atom_locked_nocheck+29}
Jul 26 12:37:49 titanic kernel:80322325{txn_end+949}
80322519{txn_restart+9}
Jul 26 12:37:49 titanic kernel:
80322689{commit_some_atoms+313}
8032d6c4{ktxnmgrd+452}
Jul 26 12:37:49 titanic kernel:
8029a740{autoremove_wake_function+0}
80286480{__wake_up_common+64}
Jul 26 12:37:49 titanic kernel:
8029a740{autoremove_wake_function+0}
8029a3f0{keventd_create_kthread+0}
Jul 26 12:37:49 titanic kernel:8032d500{ktxnmgrd+0}
8029a3f0{keventd_create_kthread+0}
Jul 26 12:37:49 titanic kernel:80233719{kthread+217}
80261fce{child_rip+8}
Jul 26 12:37:49 titanic kernel:
8029a3f0{keventd_create_kthread+0}
8026120b{sysret_signal+28}
Jul 26 12:37:49 titanic kernel:80233640{kthread+0}
80261fc6{child_rip+0}
Jul 26 12:37:49 titanic kernel:
Jul 26 12:37:49 titanic kernel: Code: 8b 47 34 85 c0 74 10 f0 ff 0f 0f
88 60 03 00 00 31 c0 e9 40
Jul 26 12:37:49 titanic kernel: RIP
80354ecb{load_and_lock_bnode+27} RSP 8100f5a0b9e8

After that, most processes got stuck in D state and I had to do a hard reset.
Is this a known reiser4 bug? Do I need to provide additional information
to help fix it?


--
Alexey Polyakov


Re: Reiser4 for 2.6.17 Vanilla?

2006-06-22 Thread Alexey Polyakov

On 6/22/06, Raymond A. Meijer [EMAIL PROTECTED] wrote:


The thing is, I want to use 2.6.17-ck1 as well... I'll give it a shot with
this patch and see what happens :)


There should be one failed hunk (mm/readahead.c); just ignore it.

--
Alexey Polyakov


Re: reiser4 for 2.6.16 (version 3)

2006-05-30 Thread Alexey Polyakov

On 5/29/06, Vladimir V. Saveliev [EMAIL PROTECTED] wrote:


Please try it. Any feedback is welcome.


Applied it to 2.6.16.18. It behaved very well compared to 2.6.15.6 with
the second .15 patch.
It's a pretty busy web server; I usually keep `tail -fs.1 access.log`
open in one ssh window and `vmstat 1` in another. 2.6.16.18 still shows
small pauses during write-outs, though they are much shorter (1-2
seconds, compared to up to 10 seconds on 2.6.15.6), and it writes out
in ~20MB/second chunks (vs ~5MB/second on 2.6.15.6).
So the performance part is great. But it crashes after a few hours of
work, and the logs are empty (I tried it a couple of times: the first
time it crashed 20 minutes after booting, the second time it survived
for about 5 hours). No error messages in the logs (the logs are on an
r4 partition too; I'll try placing them on an ext2 partition if I'm
unable to reproduce this crash on my home PC).

--
Alexey Polyakov


Re: reiser4: first impression (vs xfs and jfs)

2006-05-24 Thread Alexey Polyakov

Mine is a 2x Opteron 280, on a hardware RAID (Adaptec 2010S with 3x
146G 15K SCSI disks). It's a heavily loaded web server, and it suffers
from write-out pauses too. I've tested XFS and JFS, and found that R4
behaves better after a system crash (due to power loss), and it gives
much better performance.

What I do for my server is:
1) Get a vanilla kernel
2) Apply the patch-o-matic-ng patches (I wonder why those patches are
not included in vanilla)
3) Apply the latest available reiser4 patch

Right now it looks like that:

[EMAIL PROTECTED] [~]# df -Tm
Filesystem     Type    1M-blocks  Used Available Use% Mounted on
/dev/i2o/hda2  reiser4      9504  4488      5016  48% /
/dev/i2o/hda1  ext3       995044                  54% /boot
/dev/i2o/hda3  reiser4     22659 13884      8776  62% /var
/dev/i2o/hda5  reiser4     91729   889             4% /tmp
/dev/i2o/hda7  reiser4     18135 14140      3995  78% /usr
/dev/i2o/hda6  reiser4     54382 53278      1104  98% /home
/dev/i2o/hda8  reiser4     54382 48583      5799  90% /home2
/dev/i2o/hda9  reiser4    106370 62854     43517  60% /home3

What's most interesting: I had (and continue to have) a lot of
hardware crashes. Reiser4 does the best job here - XFS would leave some
files (created right before the crash) at length 0, reiserfs would render
the fs unusable, and ext3 would lose up to 30% of the files on a FS.


On 5/24/06, Tom Vier [EMAIL PROTECTED] wrote:

It's linux software raid1. 250gigs:

md1 : active raid1 sdd1[1] sdc1[0]
 262156544 blocks [2/2] [UU]

I should've mentioned:

Linux zero 2.6.16.16r4-2 #2 SMP PREEMPT Thu May 18 23:49:20 EDT 2006 i686
GNU/Linux

CONFIG_PREEMPT=y
CONFIG_PREEMPT_BKL=y

It's a dual 2.6ghz opteron box, running an x86 kernel.

On Tue, May 23, 2006 at 11:13:05PM +0400, Alexey Polyakov wrote:
 what kind of raid do you use? Is it software md, or a hw raid solution?
 Also, what's the size of your r4 partition?

 On 5/23/06, Tom Vier [EMAIL PROTECTED] wrote:
I finally decided to try a few different fs'es on my 250gig raid1. (I use
reiserfs3 most of the time.) Here are some things I noticed, comparing r4,
xfs, and jfs.
 
Both r4 and xfs suffer from IO pauses. This is on a dual 2.6GHz Opteron,
btw. I don't see high CPU usage, but clock throttling could be screwing up
top's % calculations (though I think all usage is measured by time, so it
shouldn't).
 
What I'm doing is rsyncing from a slower drive (on 1394) to the raid1
device. When using r4 (xfs behaves similarly), after several seconds,
reading from the source and writing to the destination stop for 3 or 4
seconds, then there's a brief burst of writes to the r4 fs (the
destination), a 1-second pause, and then reading and periodic writes
resume, until it happens again.
 
It seems that both r4 and xfs allow a large number of pages to be dirtied
before queuing them for writeback, and this has a negative effect on
throughput. In my test (rsyncing ~50 gigs of FLACs), r4 and xfs are almost
10 minutes slower than jfs.
 
One thing that surprised me was that once r4 does write out, it is very
fast. Fast enough that I wasn't sure it was actually writing whole files!
However, I did a umount; mount and ran cksum, and sure enough, the files
were good. 8)
 
 --
 Tom Vier [EMAIL PROTECTED]
 DSA Key ID 0x15741ECE
 


 --
 Alexey Polyakov

--
Tom Vier [EMAIL PROTECTED]
DSA Key ID 0x15741ECE




--
Alexey Polyakov


Re: reiser4: first impression (vs xfs and jfs)

2006-05-24 Thread Alexey Polyakov

Hi Tom,

what kind of raid do you use? Is it software md, or a hw raid solution?
Also, what's the size of your r4 partition?

On 5/23/06, Tom Vier [EMAIL PROTECTED] wrote:

I finally decided to try a few different fs'es on my 250gig raid1. (I use
reiserfs3 most of the time.) Here are some things I noticed, comparing r4,
xfs, and jfs.

Both r4 and xfs suffer from IO pauses. This is on a dual 2.6GHz Opteron,
btw. I don't see high CPU usage, but clock throttling could be screwing up
top's % calculations (though I think all usage is measured by time, so it
shouldn't).

What I'm doing is rsyncing from a slower drive (on 1394) to the raid1
device. When using r4 (xfs behaves similarly), after several seconds,
reading from the source and writing to the destination stop for 3 or 4
seconds, then there's a brief burst of writes to the r4 fs (the
destination), a 1-second pause, and then reading and periodic writes
resume, until it happens again.

It seems that both r4 and xfs allow a large number of pages to be dirtied
before queuing them for writeback, and this has a negative effect on
throughput. In my test (rsyncing ~50 gigs of FLACs), r4 and xfs are almost
10 minutes slower than jfs.

One thing that surprised me was that once r4 does write out, it is very
fast. Fast enough that I wasn't sure it was actually writing whole files!
However, I did a umount; mount and ran cksum, and sure enough, the files
were good. 8)
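The dirty-page behaviour described above is governed by the VM writeback knobs exposed in procfs on 2.6 kernels, not by the filesystem itself; a small filesystem-agnostic sketch (not reiser4 code) that reads them:

```c
/* Read a VM writeback knob from /proc/sys/vm. vm.dirty_ratio is the
 * percentage of memory that may fill with dirty pages before writers are
 * throttled into doing write-out; vm.dirty_background_ratio is where
 * background write-out starts. Lowering them makes write-out begin
 * earlier, in smaller bursts, which can smooth pauses like the ones
 * described above at some cost in peak throughput. */
#include <stdio.h>

int read_vm_knob(const char *name)
{
	char path[128];
	FILE *f;
	int val = -1;

	snprintf(path, sizeof path, "/proc/sys/vm/%s", name);
	f = fopen(path, "r");
	if (f) {
		if (fscanf(f, "%d", &val) != 1)
			val = -1;
		fclose(f);
	}
	return val;	/* -1 if the knob is absent or unreadable */
}
```

Tuning is done by writing to the same files as root (e.g. echo 10 > /proc/sys/vm/dirty_ratio); the values that suit a given workload are a matter of experiment.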

--
Tom Vier [EMAIL PROTECTED]
DSA Key ID 0x15741ECE




--
Alexey Polyakov


Re: Kernel BUG at fs/reiser4/plugin/file/tail_conversion.c:80

2006-05-15 Thread Alexey Polyakov

Hi!

Did it reach the list?
I'm a bit worried because I'm stuck at 2.6.15 and I really want to
upgrade to .16. :)

Thanks.

On 5/12/06, Alexey Polyakov [EMAIL PROTECTED] wrote:

Hi,

after running for a couple of hours, the kernel reports a different bug.
Then processes again get stuck in D state, and only a hard reset helps.

Here's the new error message:

May 11 07:19:22 titanic kernel: 4reiser4[httpd(11176)]:
plugin_by_unsafe_id (fs/reiser4/plugin/plugin.c:296)[nikita-2913]:
May 11 07:19:22 titanic kernel: WARNING: Invalid plugin id: [2:235]
May 11 07:19:22 titanic kernel: Unable to handle kernel NULL pointer
dereference at 0004 RIP:
May 11 07:19:22 titanic kernel: 80229461{obtain_item_plugin+17}
May 11 07:19:22 titanic kernel: PGD 37d86067 PUD e1ba8067 PMD 0
May 11 07:19:22 titanic kernel: Oops:  [1] SMP
May 11 07:19:22 titanic kernel: CPU 3
May 11 07:19:22 titanic kernel: Modules linked in:
May 11 07:19:22 titanic kernel: Pid: 11176, comm: httpd Not tainted
2.6.16-cks9 #2
May 11 07:19:22 titanic kernel: RIP: 0010:[80229461]
80229461{obtain_item_plugin+17}
May 11 07:19:22 titanic kernel: RSP: 0018:8100edd39b48  EFLAGS: 00010292
May 11 07:19:22 titanic kernel: RAX:  RBX:
81016df732d0 RCX: 80406c68
May 11 07:19:22 titanic kernel: RDX:  RSI:
0292 RDI: 80406c60
May 11 07:19:22 titanic kernel: RBP: 81016df732d0 R08:
0003 R09: 0001
May 11 07:19:22 titanic kernel: R10:  R11:
80118cd0 R12: 81013e91f000
May 11 07:19:22 titanic kernel: R13: 80419000 R14:
81013e91f4f0 R15: 
May 11 07:19:22 titanic kernel: FS:  2ab7a85cc8e0()
GS:8101045370c0() knlGS:
May 11 07:19:22 titanic kernel: CS:  0010 DS:  ES:  CR0:
8005003b
May 11 07:19:22 titanic kernel: CR2: 0004 CR3:
d9e67000 CR4: 06e0
May 11 07:19:22 titanic kernel: Process httpd (pid: 11176, threadinfo
8100edd38000, task 810037d3f100)
May 11 07:19:23 titanic kernel: Stack: 81016df732d0
801f14d1 81016df732d0 801f17ad
May 11 07:19:23 titanic kernel:0001
801ed372 8100edd39b70 0001
May 11 07:19:23 titanic kernel: 
May 11 07:19:23 titanic kernel: Call Trace:
801f14d1{coord_num_units+17}
801f17ad{coord_init_after_item_end+13}
May 11 07:19:23 titanic kernel:
801ed372{carry_insert_flow+1106}
801eb9ab{carry+267}
May 11 07:19:23 titanic kernel:
801eac05{post_carry+85} 801ef267{insert_flow+263}
May 11 07:19:23 titanic kernel:
80221de5{write_tail+245} 801e8e35{jload_gfp+437}
May 11 07:19:23 titanic kernel:
80211bbe{extent2tail+942}
8020fd40{release_unix_file+192}
May 11 07:19:23 titanic kernel:80179d02{__fput+194}
80162a01{remove_vma+65}
May 11 07:19:23 titanic kernel:
8016417e{do_munmap+670} 801649e2{sys_munmap+82}
May 11 07:19:23 titanic kernel:8010aaf6{system_call+126}
May 11 07:19:23 titanic kernel:
May 11 07:19:23 titanic kernel: Code: 0f be 40 04 88 43 0c 5b c3 66 66
90 66 66 90 53 0f b6 47 0c
May 11 07:19:23 titanic kernel: RIP
80229461{obtain_item_plugin+17} RSP 8100edd39b48
May 11 07:19:23 titanic kernel: CR2: 0004


On 5/11/06, Alexander Zarochentsev [EMAIL PROTECTED] wrote:
 Hello.

 please apply the attached patch.


--
Alexey Polyakov




--
Alexey Polyakov


Re: Kernel BUG at fs/reiser4/plugin/file/tail_conversion.c:80

2006-05-11 Thread Alexey Polyakov

Hi,

after running for a couple of hours, the kernel reports a different bug.
Then processes again get stuck in D state, and only a hard reset helps.

Here's the new error message:

May 11 07:19:22 titanic kernel: 4reiser4[httpd(11176)]:
plugin_by_unsafe_id (fs/reiser4/plugin/plugin.c:296)[nikita-2913]:
May 11 07:19:22 titanic kernel: WARNING: Invalid plugin id: [2:235]
May 11 07:19:22 titanic kernel: Unable to handle kernel NULL pointer
dereference at 0004 RIP:
May 11 07:19:22 titanic kernel: 80229461{obtain_item_plugin+17}
May 11 07:19:22 titanic kernel: PGD 37d86067 PUD e1ba8067 PMD 0
May 11 07:19:22 titanic kernel: Oops:  [1] SMP
May 11 07:19:22 titanic kernel: CPU 3
May 11 07:19:22 titanic kernel: Modules linked in:
May 11 07:19:22 titanic kernel: Pid: 11176, comm: httpd Not tainted
2.6.16-cks9 #2
May 11 07:19:22 titanic kernel: RIP: 0010:[80229461]
80229461{obtain_item_plugin+17}
May 11 07:19:22 titanic kernel: RSP: 0018:8100edd39b48  EFLAGS: 00010292
May 11 07:19:22 titanic kernel: RAX:  RBX:
81016df732d0 RCX: 80406c68
May 11 07:19:22 titanic kernel: RDX:  RSI:
0292 RDI: 80406c60
May 11 07:19:22 titanic kernel: RBP: 81016df732d0 R08:
0003 R09: 0001
May 11 07:19:22 titanic kernel: R10:  R11:
80118cd0 R12: 81013e91f000
May 11 07:19:22 titanic kernel: R13: 80419000 R14:
81013e91f4f0 R15: 
May 11 07:19:22 titanic kernel: FS:  2ab7a85cc8e0()
GS:8101045370c0() knlGS:
May 11 07:19:22 titanic kernel: CS:  0010 DS:  ES:  CR0:
8005003b
May 11 07:19:22 titanic kernel: CR2: 0004 CR3:
d9e67000 CR4: 06e0
May 11 07:19:22 titanic kernel: Process httpd (pid: 11176, threadinfo
8100edd38000, task 810037d3f100)
May 11 07:19:23 titanic kernel: Stack: 81016df732d0
801f14d1 81016df732d0 801f17ad
May 11 07:19:23 titanic kernel:0001
801ed372 8100edd39b70 0001
May 11 07:19:23 titanic kernel: 
May 11 07:19:23 titanic kernel: Call Trace:
801f14d1{coord_num_units+17}
801f17ad{coord_init_after_item_end+13}
May 11 07:19:23 titanic kernel:
801ed372{carry_insert_flow+1106}
801eb9ab{carry+267}
May 11 07:19:23 titanic kernel:
801eac05{post_carry+85} 801ef267{insert_flow+263}
May 11 07:19:23 titanic kernel:
80221de5{write_tail+245} 801e8e35{jload_gfp+437}
May 11 07:19:23 titanic kernel:
80211bbe{extent2tail+942}
8020fd40{release_unix_file+192}
May 11 07:19:23 titanic kernel:80179d02{__fput+194}
80162a01{remove_vma+65}
May 11 07:19:23 titanic kernel:
8016417e{do_munmap+670} 801649e2{sys_munmap+82}
May 11 07:19:23 titanic kernel:8010aaf6{system_call+126}
May 11 07:19:23 titanic kernel:
May 11 07:19:23 titanic kernel: Code: 0f be 40 04 88 43 0c 5b c3 66 66
90 66 66 90 53 0f b6 47 0c
May 11 07:19:23 titanic kernel: RIP
80229461{obtain_item_plugin+17} RSP 8100edd39b48
May 11 07:19:23 titanic kernel: CR2: 0004


On 5/11/06, Alexander Zarochentsev [EMAIL PROTECTED] wrote:

Hello.

please apply the attached patch.



--
Alexey Polyakov


Kernel BUG at fs/reiser4/plugin/file/tail_conversion.c:80

2006-05-06 Thread Alexey Polyakov

Hi!
I have two servers running reiser4 as root and data partitions.
Both are using 2.6.15.x kernels with reiser4-for-2.6.15-1.patch.
One is i386 UP with md over SATA, another is x86_64 SMP with i2o over
hardware raid.
I tried upgrading the kernels on both servers to 2.6.16-cks9 (that's
basically 2.6.16.12 with some non-IO-related patches applied).
Both give me the same kind of error a few minutes after boot (during
intensive IO):

May  4 14:12:12 titanic kernel: --- [cut here ] -
[please bite here ] -
May  4 14:12:21 titanic kernel: Kernel BUG at
fs/reiser4/plugin/file/tail_conversion.c:80
May  4 14:12:21 titanic kernel: invalid opcode:  [1] SMP
May  4 14:12:21 titanic kernel: CPU 3
May  4 14:12:21 titanic kernel: Modules linked in:
May  4 14:12:21 titanic kernel: Pid: 2723, comm: ci Not tainted 2.6.16-cks9 #1
May  4 14:12:21 titanic kernel: RIP: 0010:[80210f73]
80210f73{get_nonexclusive_access+35}
May  4 14:12:22 titanic kernel: RSP: 0018:8100f0d6dc88  EFLAGS: 00010286
May  4 14:12:32 titanic kernel: RAX: 8101d106bd40 RBX:
8101c7c740f8 RCX: 2b4bc74f2000
May  4 14:12:49 titanic kernel: RDX:  RSI:
 RDI: 8101c7c740f8
May  4 14:12:49 titanic kernel: RBP:  R08:
810037e05680 R09: 
May  4 14:12:56 titanic kernel: R10:  R11:
 R12: 8101c4ca2080
May  4 14:13:03 titanic kernel: R13: 8100f0d6dde8 R14:
8101c7c74198 R15: 8101debd4200
May  4 14:13:03 titanic kernel: FS:  2b4bc73f1b00()
GS:8101045370c0() knlGS:
May  4 14:13:03 titanic kernel: CS:  0010 DS:  ES:  CR0:
8005003b
May  4 14:13:03 titanic kernel: CR2: 005caff0 CR3:
f25b1000 CR4: 06e0
May  4 14:13:14 titanic kernel: Process ci (pid: 2723, threadinfo
8100f0d6c000, task 81000c0ad080)
May  4 14:13:17 titanic kernel: Stack: 8101c7c740f8
8022704b 8101e07dd928 8101d106bd40
May  4 14:13:17 titanic kernel:8101fee3a8f0
8038fd89 0001 8101ffcbe020
May  4 14:13:17 titanic kernel:0023230d 8101ddabc6d0
May  4 14:13:17 titanic kernel: Call Trace:
8022704b{write_extent+1595}
802294b1{item_length_by_coord+17}
May  4 14:13:17 titanic kernel:   
802244a9{nr_units_extent+9}

80225df8{init_coord_extension_extent+120}
May  4 14:13:17 titanic kernel:   
8020deb6{find_file_item+182}

801f25eb{reiser4_grab+155}
May  4 14:13:20 titanic kernel:   
80226a10{write_extent+0} 8020f8e5{write_flow+709}
May  4 14:13:20 titanic kernel:   
8010b7f1{error_exit+0} 8038d4a2{__down_read+18}
May  4 14:13:20 titanic kernel:   
8021030b{write_unix_file+923}

80178dac{vfs_write+236}
May  4 14:13:20 titanic kernel:   
80178f53{sys_write+83} 8010aaf6{system_call+126}

May  4 14:13:20 titanic kernel:
May  4 14:13:20 titanic kernel: Code: 0f 0b 68 b0 c8 3b 80 c2 50 00 66
66 90 e8 3b b6 17 00 48 89
May  4 14:13:21 titanic kernel: RIP
80210f73{get_nonexclusive_access+35} RSP 8100f0d6dc88
May  4 14:13:21 titanic kernel:  44reiser4[ci(2723)]:
release_unix_file (fs/reiser4/plugin/file/file.c:2670)[vs-44]:
May  4 14:13:21 titanic kernel: WARNING: out of memory?
May  4 14:13:21 titanic kernel: 4reiser4[ci(2723)]:
release_unix_file (fs/reiser4/plugin/file/file.c:2670)[vs-44]:
May  4 14:13:21 titanic kernel: WARNING: out of memory?
May  4 14:13:21 titanic kernel: 4reiser4[ci(2723)]:
release_unix_file (fs/reiser4/plugin/file/file.c:2670)[vs-44]:
May  4 14:13:21 titanic kernel: WARNING: out of memory?
May  4 14:13:21 titanic kernel: 4reiser4[ci(2723)]:
release_unix_file (fs/reiser4/plugin/file/file.c:2670)[vs-44]:
May  4 14:13:22 titanic kernel: WARNING: out of memory?
May  4 14:13:23 titanic kernel: 4reiser4[ci(2723)]:
release_unix_file (fs/reiser4/plugin/file/file.c:2670)[vs-44]:
May  4 14:13:25 titanic kernel: WARNING: out of memory?
May  4 14:13:25 titanic kernel: 4reiser4[ci(2723)]:
release_unix_file (fs/reiser4/plugin/file/file.c:2670)[vs-44]:
May  4 14:13:25 titanic kernel: WARNING: out of memory?
May  4 14:13:25 titanic kernel: --- [cut here ] -
[please bite here ] -

After that happens, processes get stuck in D state, and only a hard reboot helps.
Are there any patches that might help with this issue? Should I provide
any additional information to help with this bug?

Thanks.

--
Alexey Polyakov