On Fri, 27 May 2016 00:42:07 +0200
Diego Torres wrote:
> Btrfs is the only fs that can add drives one by one to an existing raid
> setup, and use the new space immediately, without replacing all the drives.
Ext4, XFS, JFS or pretty much any FS which can be resized
I was defragmenting a BTRFS filesystem recently and it was quite slow, so I
decided to pipe find output to xargs -P 2 to speed up the file listing
process, and found out it breaks BTRFS quite fast. In 2 minutes, I had
this in dmesg. As expected, the btrfs command is deadlocked.
btrfs-progs v4.2.2
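The parallel pipeline presumably looked something like the sketch below; the mount point, -P level, and batching are my assumptions reconstructed from the description, not taken from the report. The xargs -P fan-out mechanics can be seen with a harmless stand-in command:

```shell
# Reported pattern (sketch, names assumed):
#   find /mnt -type f -print0 | xargs -0 -P 2 btrfs filesystem defragment
# Same fan-out mechanics with a harmless stand-in for the defrag command:
printf '%s\0' a b c d | xargs -0 -P 2 -n 1 echo got | sort
```

With -P 2 two `echo` invocations run concurrently, so the output order is nondeterministic; the trailing `sort` makes it stable.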
Make sure that we can handle multiple bmbt records mapping to a
single rmapbt record. This can happen if you fallocate more than
2^21 contiguous blocks to a file.
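For scale, 2^21 blocks at the common 4 KiB block size works out to 8 GiB; the block size here is an assumption, and the test's actual geometry may differ:

```shell
blocks=$((1 << 21))                # 2^21 contiguous blocks
echo "$((blocks * 4096)) bytes"    # bytes in the extent at a 4 KiB block size
```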
(Also add some helpers that can create huge devices with some dm-zero
and dm-snapshot fakery.)
v2: remove irrelevant t_immutable
If we're doing write/overwrite/snapshot/resource exhaustion tests on
the scratch device, use the test directory to hold the loop
termination signal files. This way we don't run infinitely because we
can't create the flag due to ENOSPC.
v2: put the control files in /tmp, not $TEST_DIR
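The flag-file pattern being described can be sketched like this; every name here is hypothetical, and the only point is that the control file lives outside the filesystem being filled, so ENOSPC on the scratch device cannot block the stop signal:

```shell
stopfile=/tmp/loop-stop.$$            # control file outside the scratch fs
rm -f "$stopfile"
i=0
while [ ! -e "$stopfile" ] && [ "$i" -lt 10 ]; do
    i=$((i + 1))
    if [ "$i" -eq 3 ]; then           # stand-in: something signals "stop"
        touch "$stopfile"
    fi
done
echo "stopped after $i iterations"
rm -f "$stopfile"
```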
I've been running a newer version of just-backup-btrfs, which was configured to
remove snapshots in batches ~ at least 3x100 at once (this is what I typically
have in 1.5-2 days).
Snapshot transfers became much faster; however, when I delete 300 snapshots
at once, well... you can imagine
Any comment?
This patch does not fix the submitted generic/352[1] and generic/353[2]
test cases, but it does introduce a much better structure and design for
later backref walk use.
Instead of keeping a list and doing an O(n^3)~O(n^4) iteration for the fiemap
ioctl on a reflinked (deduped) file, it's now only
Hi Liu,
Since btrfs-convert has been reworked to use a completely new method for
such allocation, it no longer needs any custom_alloc_extent() function.
Its new chunk layout is designed to put metadata chunks only into
large enough unused space, and data chunks will cover all used
Diego Torres posted on Fri, 27 May 2016 00:42:07 +0200 as excerpted:
> I've been using btrfs with a raid5 configuration with 3 disks for 6
> months, and then with 4 disks for a couple of months more. I run a
> weekly scrub, and a monthly balance. Btrfs is the only fs that can add
> drives one by
During btrfs-convert, space can be allocated from a METADATA block
group for data, which is not supposed to happen, although it
doesn't cause any serious problem beyond eating METADATA space
more quickly.
Signed-off-by: Liu Bo
---
btrfs-convert.c | 6 ++
1 file
Without a proper hint, btrfs-convert always starts searching
from the very first available block, which usually belongs
to the SYSTEM block group, but we're not allowed to use any
block in the SYSTEM block group for metadata/data.
This adds a hint to make convert smarter.
Signed-off-by: Liu Bo
On Thu, May 26, 2016 at 05:04:01PM -0700, Mark Fasheh wrote:
> On Fri, May 20, 2016 at 05:45:12AM +0200, Adam Borowski wrote:
> > (Only btrfs currently implements dedupe_file_range.)
> >
> > Instead of checking the mode of the file descriptor, let's check whether
> > it could have been opened rw.
kreij...@inwind.it wrote on 2016/05/26 13:08 +0200:
Original message
From: "Qu Wenruo"
Date: 26/05/2016 2.38
To:
Subject: Re: [PATCH 1/2] btrfs-progs: utils: Introduce new pseudo random API
Goffredo Baroncelli wrote on 2016/05/25 18:15
The btrfs balance operation is significantly slower when qgroups are
enabled. To the best of my knowledge, a balance shouldn't have an effect on
qgroup counts (extents are not changing between subvolumes), so we don't
need to actually run the qgroup code when we balance.
Since there's only one
On Thu, May 26, 2016 at 11:27:06AM +0200, David Sterba wrote:
> Hi,
>
> please pull a few more patches that did not go to pull #1 for 4.7, minor
> cleanups and fixes. Thanks.
Thanks Dave! Trying to figure out why we're failing btrfs/011, but I
don't see how it could be related to this bunch.
On Fri, May 20, 2016 at 05:45:12AM +0200, Adam Borowski wrote:
> (Only btrfs currently implements dedupe_file_range.)
>
> Instead of checking the mode of the file descriptor, let's check whether
> it could have been opened rw. This allows fixing failures when deduping
> a live system: anyone
Hi there,
I've been using btrfs with a raid5 configuration with 3 disks for 6
months, and then with 4 disks for a couple of months more. I run a
weekly scrub, and a monthly balance. Btrfs is the only fs that can add
drives one by one to an existing raid setup, and use the new space
immediately,
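For context on what adding a disk buys: in raid5, one disk's worth of capacity goes to parity. A sketch of the arithmetic for the 3-to-4 disk move described above (the per-disk size is a made-up assumption):

```shell
disk_tb=4                          # hypothetical per-disk size in TB
for n in 3 4; do                   # disk counts from the report above
    echo "$n disks: $(( (n - 1) * disk_tb )) TB usable"
done
```

The grow itself is typically `btrfs device add` followed by a balance to restripe onto the new device.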
On 19/05/16 02:33, Qu Wenruo wrote:
>
>
> Graham Cobb wrote on 2016/05/18 14:29 +0100:
>> A while ago I had a "no space" problem (despite fi df, fi show and fi
>> usage all agreeing I had over 1TB free). But this email isn't about
>> that.
>>
>> As part of fixing that problem, I tried to do a
On Wed, 11 May 2016, Filipe Manana wrote:
I've noticed some time ago that our device replace implementation is
unreliable. Basically under several situations it ends up not copying
extents (both data and metadata) from the old device to the new device
(I briefly talked about some of the
Under the last several kernel versions (4.6, and I believe 4.4 and 4.5), btrfs
scrub aborts before completing.
If I boot back into an older kernel (4.1 or 4.3, not sure about 4.2) then it
runs to completion without any issues.
Steps to reproduce:
1 - make a raid1 system
2 - run with only one
On Thu, May 26, 2016 at 12:19:52PM -0400, Zygo Blaxell wrote:
> I frequently see these in /etc/lvm/backup/*. Something that LVM does
> when it writes these files triggers the problem. This problem occurs
> in kernels 3.18..4.4.11 (i.e. all the kernels I've tested).
>
> btrfs-debug-tree finds
I frequently see these in /etc/lvm/backup/*. Something that LVM does
when it writes these files triggers the problem. This problem occurs
in kernels 3.18..4.4.11 (i.e. all the kernels I've tested).
btrfs-debug-tree finds this:
item 26 key (2702988 INODE_ITEM 0) itemoff 12632 itemsize
On Thu, May 26, 2016 at 01:04:24AM -0700, Christoph Hellwig wrote:
> > --- a/src/t_immutable.c
> > +++ b/src/t_immutable.c
> > @@ -38,10 +38,10 @@
> > #include
> > #include
> > #include
> > +#include
> > #include
> > #include
> > #include
> > -#include
> > #include
> >
> >
On Thu, May 26, 2016 at 01:09:48AM -0700, Christoph Hellwig wrote:
> Shouldn't these tests also add a _require_test now? But we probably
> should just move them to /tmp instead?
Yes. I'll move 'em to /tmp.
--D
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the
>Original message
>From: "Qu Wenruo"
>Date: 26/05/2016 2.38
>To:
>Subject: Re: [PATCH 1/2] btrfs-progs: utils: Introduce new pseudo random API
>
>
>
>Goffredo Baroncelli wrote on 2016/05/25 18:15 +0200:
>> On 2016-05-25 06:14, Qu Wenruo wrote:
Previously, btrfs-image restore would set the chunk items to have 1 stripe,
even if the chunk is dup. If you use btrfsck on the restored file system,
some dev_extents will not find any corresponding chunk stripe, and the
bytes_used of the dev_item will not equal the dev_extents' total_bytes.
This patch
Hi,
please pull a few more patches that did not go to pull #1 for 4.7, minor
cleanups and fixes. Thanks.
The following changes since commit c315ef8d9db7f1a0ebd023a395ebdfde1c68057e:
Merge branch 'for-chris-4.7' of
git://git.kernel.org/pub/scm/linux/kernel/git/fdmanana/linux into
On Thu, May 26, 2016 at 08:47:39AM +, sri wrote:
> In traditional file systems such as ext3/ext4, when a snapshot (say with
> LVM2) is taken, all I/O is frozen and the entire file system, including
> metadata, is part of the snapshot.
>
> A btrfs snapshot is also managed by the file system itself, and btrfs
In traditional file systems such as ext3/ext4, when a snapshot (say with
LVM2) is taken, all I/O is frozen and the entire file system, including
metadata, is part of the snapshot.
A btrfs snapshot is also managed by the file system itself, and a btrfs
snapshot actually does a subvolume snapshot, which makes a
Looks fine,
Reviewed-by: Christoph Hellwig
On Wed, May 25, 2016 at 10:57:36PM -0700, Darrick J. Wong wrote:
> Since none of the current filesystems support reflinked swap files,
> make sure that we prohibit reflinking of swapfiles and swapon of
> reflinked files.
Ah, thanks. I've actually prepared a patch to fix swapon in the kernel
and
Looks fine,
Reviewed-by: Christoph Hellwig
Looks fine,
Reviewed-by: Christoph Hellwig
Shouldn't these tests also add a _require_test now? But we probably
should just move them to /tmp instead?
> +for i in `seq 125 -1 90`; do
> + fillsize=`expr $i \* 1048576`
> + out="$(_fill_scratch $fillsize 2>&1)"
> + echo "$out" | grep -q 'No space left on device' && continue
> + test -n "${out}" && echo "$out"
> + break
> +done
That's a bit of an odd loop, and it would seem an
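The quoted loop's intent, in isolation: count fill sizes down from 125 MiB until one fits. A stand-in sketch of that control flow (the 100 MiB limit and the size test replace _fill_scratch and its ENOSPC grep; both are assumptions for illustration):

```shell
limit=$((100 * 1048576))                 # pretend the device holds 100 MiB
for i in $(seq 125 -1 90); do
    fillsize=$((i * 1048576))
    if [ "$fillsize" -le "$limit" ]; then   # stand-in for a successful fill
        break
    fi
done
echo "settled on $((fillsize / 1048576)) MiB"
```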
> --- a/src/t_immutable.c
> +++ b/src/t_immutable.c
> @@ -38,10 +38,10 @@
> #include
> #include
> #include
> +#include
> #include
> #include
> #include
> -#include
> #include
>
> #ifndef XFS_SUPER_MAGIC
How does this belong in the patch?
> diff --git a/tests/xfs/group
Since we are using atomics and a wait queue for block group
reservations, and this is not controlled by lockdep, we need to pay much more
attention to any modification to the write path.
Otherwise it's very easy to underflow block group reservations and cause a lock
imbalance.
Add warning on for