2011/2/28 Mitch Harder <mitch.har...@sabayonlinux.org>:
> 2011/2/28 Mitch Harder <mitch.har...@sabayonlinux.org>:
>> 2011/2/28 Maria Wikström <ma...@ponstudios.se>:
>>> mån 2011-02-28 klockan 11:10 -0500 skrev Josef Bacik:
>>>> On Mon, Feb 28, 2011 at 11:13:59AM +0100, Johannes Hirte wrote:
>>>> > On Monday 28 February 2011 02:46:05 Chris Mason wrote:
>>>> > > Excerpts from Mitch Harder's message of 2011-02-25 13:43:37 -0500:
>>>> > > > Some clarification on my previous message...
>>>> > > >
>>>> > > > After looking at my ftrace log more closely, I can see where Btrfs is
>>>> > > > trying to release the allocated pages.  However, the calculation for
>>>> > > > the number of dirty_pages is equal to 1 when "copied == 0".
>>>> > > >
>>>> > > > So I'm seeing at least two problems:
>>>> > > > (1)  It keeps looping when "copied == 0".
>>>> > > > (2)  One dirty page is not being released on every loop even though
>>>> > > > "copied == 0" (at least this problem keeps it from being an infinite
>>>> > > > loop by eventually exhausting reserveable space on the disk).
>>>> > >
>>>> > > Hi everyone,
>>>> > >
>>>> > > There are actually two bugs here.  First the one that Mitch hit, and a
>>>> > > second one that still results in bad file_write results with my
>>>> > > debugging hunks (the first two hunks below) in place.
>>>> > >
>>>> > > My patch fixes Mitch's bug by checking for copied == 0 after
>>>> > > btrfs_copy_from_user and doing the correct delalloc accounting.  This
>>>> > > one looks solved, but you'll notice the patch is bigger.
>>>> > >
>>>> > > First, I add some random failures to btrfs_copy_from_user() by making
>>>> > > it fail every once in a while.  This was much more reliable than
>>>> > > trying to use memory pressure to make copy_from_user fail.
>>>> > >
>>>> > > If copy_from_user fails and we partially update a page, we end up with
>>>> > > a page that may go away due to memory pressure.  But, btrfs_file_write
>>>> > > assumes that only the first and last page may have good data that needs
>>>> > > to be read off the disk.
>>>> > >
>>>> > > This patch ditches that code and puts it into prepare_pages instead.
>>>> > > But I'm still having some errors during long stress.sh runs.  Ideas are
>>>> > > more than welcome, hopefully some other timezones will kick in ideas
>>>> > > while I sleep.
>>>> >
>>>> > At least it doesn't fix the emerge problem for me.  The behavior is now
>>>> > the same as with 2.6.38-rc3.  It needs an 'emerge --oneshot
>>>> > dev-libs/libgcrypt' with no further interaction to get the emerge
>>>> > process to hang with an svn process consuming 100% CPU.  I can cancel
>>>> > the emerge process with ctrl-c, but the spawned svn process stays, and
>>>> > it needs a reboot to get rid of it.
>>>>
>>>> Can you cat /proc/$pid/wchan a few times so we can get an idea of where
>>>> it's looping?  Thanks,
>>>>
>>>> Josef
>>>
>>> It behaves the same way here with btrfs-unstable.
>>> The output of "cat /proc/$pid/wchan" is 0.
>>>
>>> // Maria
>>>
>>>> --
>>>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
>>>> the body of a message to majord...@vger.kernel.org
>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>>
>>>
>>>
>>>
>>
>> I've applied the patch at the head of this thread (with the jiffies
>> debugging commented out) and I'm attaching a ftrace using the
>> function_graph tracer when I'm stuck in the loop.  I've just snipped
>> out a couple of the loops (the full trace file is quite large, and
>> mostly repetitious).
>>
>> I'm going to try to modify file.c with some trace_printk debugging to
>> show the values of several of the relevant variables at various
>> stages.
>>
>> I'm going to try to exit the loop after 256 tries with an EFAULT so I
>> can stop the tracing at that point and capture a trace of the entry
>> into the problem (the ftrace ring buffer fills up too fast for me to
>> capture the entry point).
>>
>
> As promised, I've put together a modified file.c with many trace_printk
> debugging statements to augment the ftrace.
>
> The trace is ~128K compressed (about 31,600 lines or 2.6MB
> uncompressed), so I'm putting it up on my local server instead of
> attaching.  Let me know if it would be more appropriate to send to the
> list as an attachment.
>
> http://dontpanic.dyndns.org/ftrace-btrfs-file-write-debug-v2.gz
>
> I preface all my trace_printk comments with "TPK:" to make skipping
> through the trace easier.
>
> The trace contains the trace of about 3 or 4 successful passes through
> the btrfs_file_aio_write() function to show what a successful trace
> looks like.
>
> The pass through the btrfs_file_aio_write() that breaks begins on line 1088.
>
> I let it loop through the while (iov_iter_count(&i) > 0) {} loop 256
> times when copied==0 (otherwise it would loop infinitely), then exit
> out and stop the trace.
>
> For reference, here's a diff file for the debugging statements I've
> added to file.c:
>
> Let me know if you would like me to re-run this trial with different
> debugging lines.
>
>  fs/btrfs/file.c /usr/src/linux/fs/btrfs/file.c
> --- fs/btrfs/file.c     2011-02-28 10:13:45.000000000 -0600
> +++ /usr/src/linux/fs/btrfs/file.c      2011-02-28 13:07:11.000000000 -0600
> @@ -53,12 +53,14 @@
>        int offset = pos & (PAGE_CACHE_SIZE - 1);
>        int total_copied = 0;
>
> +       /***************************
>        if ((jiffies % 10) == 0)
>                return 0;
>
>        if ((jiffies % 25) == 0) {
>                write_bytes /= 2;
>        }
> +       **************************/
>
>        while (write_bytes > 0) {
>                size_t count = min_t(size_t,
> @@ -82,10 +84,13 @@
>
>                /* Return to btrfs_file_aio_write to fault page */
>                if (unlikely(copied == 0)) {
> +                       trace_printk("TPK: unlikely copied == 0 in btrfs_copy_from_user (total_copied=%i)\n",
> +                                    total_copied);
>                        break;
>                }
>
>                if (unlikely(copied < PAGE_CACHE_SIZE - offset)) {
> +                       trace_printk("TPK: unlikely copied < PAGE_CACHE_SIZE - offset in btrfs_copy_from_user\n");
>                        offset += copied;
>                } else {
>                        pg++;
> @@ -910,8 +915,13 @@
>        int will_write;
>        int buffered = 0;
>        int copied = 0;
> +       int copied_problem = 0;
> +       int copied_loop_count = 0;
> +       int stop_ftrace = 0;
>        int dirty_pages = 0;
>
> +       trace_printk("TPK: Entering btrfs_file_aio_write()\n");
> +
>        will_write = ((file->f_flags & O_DSYNC) || IS_SYNC(inode) ||
>                      (file->f_flags & O_DIRECT));
>
> @@ -953,6 +963,7 @@
>        BTRFS_I(inode)->sequence++;
>
>        if (unlikely(file->f_flags & O_DIRECT)) {
> +               trace_printk("TPK: transferred to unlikely(file->f_flags \& O_DIRECT)\n");
>                num_written = generic_file_direct_write(iocb, iov, &nr_segs,
>                                                        pos, ppos, count,
>                                                        ocount);
> @@ -984,6 +995,8 @@
>                 */
>                buffered = 1;
>                pos += num_written;
> +               trace_printk("TPK: end unlikely(file->f_flags \& O_DIRECT) with num_written=%i\n",
> +                            num_written);
>        }
>
>        iov_iter_init(&i, iov, nr_segs, count, num_written);
> @@ -1003,6 +1016,8 @@
>        last_index = (pos + iov_iter_count(&i)) >> PAGE_CACHE_SHIFT;
>
>        while (iov_iter_count(&i) > 0) {
> +               trace_printk("TPK: start section while (iov_iter_count() > 0)\n");
> +
>                size_t offset = pos & (PAGE_CACHE_SIZE - 1);
>                size_t write_bytes = min(iov_iter_count(&i),
>                                         nrptrs * (size_t)PAGE_CACHE_SIZE -
> @@ -1010,6 +1025,9 @@
>                size_t num_pages = (write_bytes + offset +
>                                    PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
>
> +               trace_printk("TPK: iov_iter_count()=%i || nr_segs=%lu || nrptrs=%i || offset=%lu || write_bytes=%lu || num_pages=%lu\n",
> +                                               iov_iter_count(&i), nr_segs, nrptrs, offset, write_bytes, num_pages);
> +
>                WARN_ON(num_pages > nrptrs);
>                memset(pages, 0, sizeof(struct page *) * nrptrs);
>
> @@ -1022,6 +1040,19 @@
>                        goto out;
>                }
>
> +               if (unlikely(copied_problem)) {
> +                       copied_loop_count++;
> +                       trace_printk("TPK: copied problem(%i)\n",
> +                                    copied_loop_count);
> +                       /* Give up if we've already tried 256 times */
> +                       if (copied_loop_count > 256) {
> +                               ret = -EFAULT;
> +                               stop_ftrace = 1;
> +                               trace_printk("TPK: copied loop count exceeded, returning EFAULT....\n");
> +                               goto out;
> +                       }
> +               }
> +
>                ret = btrfs_delalloc_reserve_space(inode,
>                                        num_pages << PAGE_CACHE_SHIFT);
>                if (ret)
> @@ -1045,6 +1076,10 @@
>                                       PAGE_CACHE_SIZE - 1) >>
>                                       PAGE_CACHE_SHIFT;
>
> +               if (copied == 0) {
> +                       copied_problem = 1;
> +               }
> +
>                if (num_pages > dirty_pages) {
>                        if (copied > 0)
>                                atomic_inc(
> @@ -1080,11 +1115,19 @@
>                num_written += copied;
>
>                cond_resched();
> +               trace_printk("TPK: end section while (iov_iter_count() > 0)\n");
> +               trace_printk(" copied=%i || dirty_pages=%i || num_written=%i || ending iov_iter_count=%i\n",
> +                                               copied, dirty_pages, num_written, iov_iter_count(&i) );
>        }
>  out:
> +       trace_printk("TPK: Reached: out:\n");
> +
>        mutex_unlock(&inode->i_mutex);
> -       if (ret)
> +       if (ret) {
>                err = ret;
> +               trace_printk("TPK: ret,err had value %i when out: was reached (num_written: %i)\n",
> +                            ret, num_written);
> +       }
>
>        kfree(pages);
>        *ppos = pos;
> @@ -1140,6 +1183,19 @@
>        }
>  done:
>        current->backing_dev_info = NULL;
> +       if (ret) {
> +               trace_printk("TPK: btrfs_file_aio_write exiting with non-zero ret value (%i)\n", ret);
> +               trace_printk("TPK: Returning num_written (%i) ? num_written (%i) : err (%i) = %i\n",
> +                               num_written, num_written, err, num_written ? num_written : err);
> +       } else {
> +               trace_printk("TPK: btrfs_file_aio_write exiting normally with (%i)",
> +                            num_written ? num_written : err);
> +       }
> +
> +       if (unlikely(stop_ftrace)) {
> +               tracing_off();
> +       }
> +
>        return num_written ? num_written : err;
>  }
>

I'm developing a hypothesis that something is going wrong when Btrfs
is trying to lock multiple pages.

Page faults are being encountered at the same spot, over and over
(BTW, I ran a memory check to rule out bad RAM as a possibility).

If I scan through my traces, it looks like most calls to write are
submitted at most 1 block (4096 bytes) at a time (many are also <
4096 bytes).  The portion that is causing a problem is trying to write
12288 bytes (3 pages).  For some reason, the first 24 bytes are written,
and the remainder of the loop is spent on the 12264 bytes that are
remaining.  But page faults are encountered on each loop, and no more
bytes are copied.

I modified file.c to cut back on the scope of the attempted write to
smaller chunks.

The following patch "fixes" my problem, and this build completes
without error.  I'm not submitting this patch as a solution.  It's
clearly papering over a deeper issue.  However, I think it gives
insight into the problem.

I wrote the patch to allow for sequentially smaller blocks if failures
continue.  But the blockage cleared once I limited the scope to 4096
bytes.  It never needed to try smaller data segments.

As hoped, limiting the scope of the write allowed the 12264 bytes to
be written in the next three subsequent loops after lowering the scope
of the write.

It was interesting to note that cutting the scope to 4096 bytes didn't
guarantee that the write was limited to one block.  There was
usually some overlap, and 2 dirty blocks needed to be written.  But I'm
still suspicious that there is a mismatch somewhere when trying to
lock multiple blocks.

Here's the debugging patch I constructed which actually allowed the
build to complete without error.  This was applied (for testing
purposes) on top of the previous debugging patch.  As noted earlier,
it never went lower than a 4096 byte window for write_bytes.

--- file.c.file-write-patch-v1  2011-02-28 13:06:37.000000000 -0600
+++ file.c      2011-02-28 19:23:21.000000000 -0600
@@ -908,6 +908,7 @@
        ssize_t err = 0;
        size_t count;
        size_t ocount;
+       size_t write_bytes = 0;
        int ret = 0;
        int nrptrs;
        unsigned long first_index;
@@ -963,7 +964,7 @@
        BTRFS_I(inode)->sequence++;

        if (unlikely(file->f_flags & O_DIRECT)) {
-               trace_printk("TPK: transferred to unlikely(file->f_flags \& O_DIRECT)\n");
+               trace_printk("TPK: transferred to unlikely(file->f_flags & O_DIRECT)\n");
                num_written = generic_file_direct_write(iocb, iov, &nr_segs,
                                                        pos, ppos, count,
                                                        ocount);
@@ -995,7 +996,7 @@
                 */
                buffered = 1;
                pos += num_written;
-               trace_printk("TPK: end unlikely(file->f_flags \& O_DIRECT) with num_written=%i\n",
+               trace_printk("TPK: end unlikely(file->f_flags & O_DIRECT) with num_written=%i\n",
                             num_written);
        }

@@ -1019,9 +1020,20 @@
                trace_printk("TPK: start section while (iov_iter_count() > 0)\n");

                size_t offset = pos & (PAGE_CACHE_SIZE - 1);
-               size_t write_bytes = min(iov_iter_count(&i),
-                                        nrptrs * (size_t)PAGE_CACHE_SIZE -
-                                        offset);
+               write_bytes = min(iov_iter_count(&i),
+                                 nrptrs * (size_t)PAGE_CACHE_SIZE -
+                                 offset);
+               if (unlikely(copied_problem)) {
+                       if (copied_loop_count > 224) {
+                               write_bytes = min_t(size_t, write_bytes, 512);
+                       } else if (copied_loop_count > 192) {
+                               write_bytes = min_t(size_t, write_bytes, 1024);
+                       } else if (copied_loop_count > 160) {
+                               write_bytes = min_t(size_t, write_bytes, 2048);
+                       } else if (copied_loop_count > 128) {
+                               write_bytes = min_t(size_t, write_bytes, 4096);
+                       }
+               }
                size_t num_pages = (write_bytes + offset +
                                    PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
--
