On 16.11.2017 01:47, Liu Bo wrote:
> This changes the test to use '_scratch_cycle_mount' to drop all caches btrfs
> could hold, in order to avoid an issue where drop_caches somehow doesn't work
> on Nikolay's box.
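
For reference, a minimal sketch of what _scratch_cycle_mount amounts to
(assuming the usual common/rc helpers; the real implementation also handles
mount options and special configs like overlay):

    _scratch_cycle_mount()
    {
        # Unmounting tears down the page cache and any fs-internal
        # caches for the scratch fs, which is a stronger invalidation
        # than 'echo 3 > /proc/sys/vm/drop_caches'.
        _scratch_unmount
        _scratch_mount
    }
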
> 
> Also use 'bash -c' to run the read only when the reader's pid is odd, so that
> we actually read from the faulty disk.
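
The parity trick as a minimal standalone sketch (names here are illustrative,
not the test's): $$ inside 'bash -c' expands to the pid of the child shell, so
the read runs only when that pid is odd, and the captured output tells the
caller whether the read happened:

    # empty $out means the child pid was even; a loop simply retries
    # until it lands on an odd pid and thus on the bad mirror
    out=$(bash -c 'if [ $(($$ % 2)) -eq 1 ]; then echo "read by odd pid $$"; fi')
    [ -n "$out" ] && echo "$out"
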
> 
> Reported-by: Nikolay Borisov <nbori...@suse.com>
> Signed-off-by: Liu Bo <bo.li....@oracle.com>
> ---
> v2: Change 'fadvise -d' to _scratch_cycle_mount.
> 
> To Nikolay,
> 
> I didn't add your Tested-by, but could you please verify whether this also
> works on your test box?

Just did around 50 runs and it didn't fail once. Without the patch it failed
on the second iteration, so:

Tested-by: Nikolay Borisov <nbori...@suse.com>

> 
> 
>  tests/btrfs/143 | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/tests/btrfs/143 b/tests/btrfs/143
> index da7bfd8..3875b6c 100755
> --- a/tests/btrfs/143
> +++ b/tests/btrfs/143
> @@ -127,16 +127,16 @@ echo "step 3......repair the bad copy" >>$seqres.full
>  # since raid1 consists of two copies, and the bad copy was put on stripe #1
>  # while the good copy lies on stripe #0, the bad copy only gets access when the
>  # reader's pid % 2 == 1 is true
> -while true; do
> -     # start_fail only fails the following buffered read so the repair is
> -     # supposed to work.
> -     echo 3 > /proc/sys/vm/drop_caches
> -     start_fail
> -     $XFS_IO_PROG -c "pread 0 4K" "$SCRATCH_MNT/foobar" > /dev/null &
> -     pid=$!
> -     wait
> -     stop_fail
> -     [ $((pid % 2)) == 1 ] && break
> +while [[ -z ${result} ]]; do
> +    # invalidate the page cache.
> +    _scratch_cycle_mount
> +
> +    start_fail
> +    result=$(bash -c "
> +        if [[ \$((\$\$ % 2)) -eq 1 ]]; then
> +            exec $XFS_IO_PROG -c \"pread 0 4K\" \"$SCRATCH_MNT/foobar\"
> +        fi")
> +    stop_fail
>  done
>  
>  _scratch_unmount
> 