On Wed, 2005-02-09 at 08:37, Stephen C. Tweedie wrote:
> Hi,
> 
> On Wed, 2005-02-09 at 16:18, Badari Pulavarty wrote:
> 
> > I am trying to understand the journaling code in ext3.
> > Can someone enlighten me why we need journal start
> > and stop in ext3_writeback_writepage()? The block
> > allocation is already done in prepare_write().
> 
> prepare_write()/commit_write() are used for write(2) writes: the data is
> dirtied, but not immediately queued for IO (unless you're using O_SYNC).
>  
> writepage is used when you want to write the page's data to disk
> *immediately* --- it's used when the VM is swapping out an mmaped file,
> or for msync().
> 
> So when writepage comes in, there's no guarantee that we've had a
> previous prepare.  You can, for example, use ftruncate() to create a
> large hole in a file, and then mmap() it; if you then dirty a page, then
> the allocation occurs in the writepage().  So a transaction handle is
> necessary.
> 
> --Stephen
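
For reference, the case Stephen describes can be exercised from userspace
roughly like this (just a quick illustration; the path and sizes are
arbitrary, and error checking is omitted):

	#include <fcntl.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		int fd = open("/mnt/ext3/hole", O_CREAT | O_RDWR, 0644);
		char *p;

		ftruncate(fd, 1024 * 1024);	/* 1MB hole, no blocks allocated */
		p = mmap(NULL, 1024 * 1024, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
		memset(p + 512 * 1024, 0xaa, 4096);	/* dirty one page in the hole */
		msync(p, 1024 * 1024, MS_SYNC);	/* writepage must allocate here */
		munmap(p, 1024 * 1024);
		close(fd);
		return 0;
	}

No blocks exist for the dirtied page until writepage runs, so the
allocation has to happen there.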

Okay, I started hacking. I added ext3_writeback_writepages(), which calls
journal_start()/journal_stop() around mpage_writepages().
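
Roughly what it looks like (a sketch, not the exact diff; the credit count
from ext3_writepage_trans_blocks() in particular is a guess, and error
handling is minimal):

static int ext3_writeback_writepages(struct address_space *mapping,
				     struct writeback_control *wbc)
{
	struct inode *inode = mapping->host;
	handle_t *handle;
	int ret, err;

	/* open a handle so block allocation during writeback is journalled */
	handle = ext3_journal_start(inode, ext3_writepage_trans_blocks(inode));
	if (IS_ERR(handle))
		return PTR_ERR(handle);

	ret = mpage_writepages(mapping, wbc, ext3_get_block);

	err = ext3_journal_stop(handle);
	if (!ret)
		ret = err;
	return ret;
}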

I am getting an oops which puzzles me, for two reasons:

1) First of all, the oops is in __mod_timer(), which I have not touched.

2) journal_destroy() ends up calling journal_start() now. But even
with the original code, it would have been calling journal_start() via
ext3_writeback_writepage(). I am wondering why it's a problem only
now.

Ideas?


Thanks,
Badari


Unable to handle kernel NULL pointer dereference at 0000000000000018
RIP: <ffffffff8013fa5b>{__mod_timer+219}
PML4 19b4f4067 PGD 19eb3c067 PMD 0
Oops: 0002 [1] SMP
CPU 3
Modules linked in:
Pid: 12823, comm: umount Not tainted 2.6.10n
RIP: 0010:[<ffffffff8013fa5b>] <ffffffff8013fa5b>{__mod_timer+219}
RSP: 0018:000001017f6abae8  EFLAGS: 00010002
RAX: 0000000000000010 RBX: 00000101d4fd7f08 RCX: 0000000000000260
RDX: ffffffff8013a428 RSI: 0000000000000216 RDI: 00000101c0715aa0
RBP: 00000101c0715aa0 R08: 00000000000927c0 R09: 0000000000000720
R10: 00000000ffffffff R11: 0000000000000000 R12: 00000101d4fd7ed8
R13: 00000101d4fd7ef0 R14: 00000001000086e1 R15: 0000000000000216
FS:  0000002a9588e700(0000) GS:ffffffff80628900(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000000000000018 CR3: 00000001bffa4000 CR4: 00000000000006e0
Process umount (pid: 12823, threadinfo 000001017f6aa000, task 000001017ee935a0)
Stack: 000000000007a000 00000101d6ac42c8 0000000000000000 000001019fa93000
       000001017eece3c0 000000000000000e 00000101d6ac42c8 ffffffff801fa130
       000001019fa93024 000000007f6abb48
Call Trace:<ffffffff801fa130>{start_this_handle+608} <ffffffff80132580>{finish_task_switch+64}
       <ffffffff803eb330>{thread_return+80} <ffffffff801fa6d3>{journal_start+227}
       <ffffffff801ea1e6>{ext3_writeback_writepages+70} <ffffffff8015fcbc>{do_writepages+28}
       <ffffffff8019f50c>{__writeback_single_inode+492} <ffffffff803eb9e0>{__wait_on_bit+96}
       <ffffffff8017eed0>{sync_buffer+0} <ffffffff803ebac3>{out_of_line_wait_on_bit+195}
       <ffffffff8014bba0>{wake_bit_function+0} <ffffffff8019f7c6>{write_inode_now+102}
       <ffffffff80196f4e>{generic_drop_inode+174} <ffffffff80195b0e>{iput+126}
       <ffffffff801ff5ca>{journal_destroy+618} <ffffffff8014bb70>{autoremove_wake_function+0}
       <ffffffff8014bb70>{autoremove_wake_function+0} <ffffffff801aa28c>{mb_cache_shrink+188}
       <ffffffff801f06f9>{ext3_put_super+41} <ffffffff80182a37>{generic_shutdown_super+151}
       <ffffffff80182afd>{kill_block_super+45} <ffffffff80182bd1>{deactivate_super+81}
       <ffffffff8019974a>{sys_umount+666} <ffffffff8026bd40>{__up_write+48}
       <ffffffff8016d57a>{sys_munmap+90} <ffffffff8010e4ce>{system_call+126}
Code: 48 89 50 08 48 89 02 49 c7 44 24 08 00 02 20 00 49 c7 04 24
RIP <ffffffff8013fa5b>{__mod_timer+219} RSP <000001017f6abae8>
CR2: 0000000000000018

