GCS wrote:
> 
> On Mon, Apr 15, 2002 at 07:16:13AM +0100, Christoph Hellwig <[EMAIL PROTECTED]> wrote:
> > Could you please give me the context of that line for the kernels you use?
> > i.e. the function it is in and maybe the full function body?
> >
> > Having a kdb backtrace would also be nice.
>  Uf, I forgot about the function body, but details with kdb are coming:
> 
> bt gives:
> EBP,            EIP,            Function(args)
> 0xddeee000      0xc013cbfd      wait_for_buffers+0x4d(0,1,1,1,0)
> 0xddeeff64      0xc013ccc2      wait_for_locked_buffers+0x22(0,1,1,0,0)
> 0xddeeff94      0xc013cd10      sync_buffers+0x38(0,1,0xddeee000,1,0xbffffaf4)
>                 0xc0106da3      system_call+0x33
> 

That's a livelock in the lock-break patch.  Blame me for that.

It perhaps does imply that a large number of buffers in the
wrong dirty state are getting onto the wrong list.  JFS may
be missing a refile_buffer() call somewhere.

Robert, I've updated the ll-patch thusly (hack atop of hack):

  static int wait_for_buffers(kdev_t dev, int index, int refile)
  {
        struct buffer_head * next;
        int nr;
+       int nr_rescheds = 0;

        nr = nr_buffers_type[index];
  repeat:
        next = lru_list[index];
        while (next && --nr >= 0) {
                struct buffer_head *bh = next;
                next = bh->b_next_free;

+               if (conditional_schedule_needed() && nr_rescheds < 100) {
+                       nr_rescheds++;
-               if (conditional_schedule_needed()) {
                        spin_unlock(&lru_list_lock);
                        unconditional_schedule();
_______________________________________________
Jfs-discussion mailing list
[EMAIL PROTECTED]
http://www-124.ibm.com/developerworks/oss/mailman/listinfo/jfs-discussion