On Mon, Jun 03, 2019 at 02:40:09PM +0800, Qu Wenruo wrote:
> If a filesystem doesn't map its logical address space (normally the
> bytenr/blocknr returned by fiemap) directly to its device(s), the
> following assumptions used in the test case are no longer true:
> - trim range start beyond the end
There is a long-lived bug where btrfs waits indefinitely for readahead to
finish when a readahead zone is inserted into seed devices.
The current write size to the file "foobar" is too small to trigger
readahead before the replace on the seed device. So, increase the write
size to reproduce the issue.
Followin
I forgot to update the expected output. Ignore this.
On 2019/06/07 11:34, Naohiro Aota wrote:
> There is a long-lived bug where btrfs waits indefinitely for readahead
> to finish when a readahead zone is inserted into seed devices.
>
> The current write size to the file "foobar" is too small to trigger reada
There is a long-lived bug where btrfs waits indefinitely for readahead to
finish when a readahead zone is inserted into seed devices.
The current write size to the file "foobar" is too small to trigger
readahead before the replace on the seed device. So, increase the write
size to reproduce the issue.
Followin
On 2019/06/06 20:14, Filipe Manana wrote:
> On Thu, Jun 6, 2019 at 8:56 AM Naohiro Aota wrote:
>>
>> Currently, btrfs does not consult seed devices to start readahead. As a
>> result, if a readahead zone is added to the seed devices, btrfs_reada_wait()
>> waits indefinitely for the reada_ctl to finis
> On Jun 6, 2019, at 7:10 AM, Vaneet Narang wrote:
>
> Hi Andrew / David,
>
>
>>> > > -ZSTD_parameters params = ZSTD_getParams(level, src_len, 0);
>>> > > +static ZSTD_parameters params;
>>> >
>>> > > +
>>> > > +params = ZSTD_getParams(level, src_len, 0);
>>>
On Thu, Jun 06, 2019 at 01:39:07PM +0200, Johannes Thumshirn wrote:
[...]
> +echo -e "\nTesting argument validation with options"
> +$BTRFS_UTIL_PROG property set $SCRATCH_MNT compression 'lzo:9'
Please don't take this patch yet; setting lzo with a compression level should
fail but doesn't. I'll
On Thu, Jun 6, 2019 at 2:54 PM Nikolay Borisov wrote:
>
> This patch removes all the haphazard code implementing nocow writer
> exclusion from pending snapshot creation and switches to using the drw
> lock to ensure this invariant still holds. "Readers" are snapshot
> creators from create_snapshot an
On Thu, Jun 6, 2019 at 2:55 PM Nikolay Borisov wrote:
>
> A (D)ouble (R)eader (W)riter lock is a locking primitive that allows
> either multiple readers or multiple writers, but not both, to hold it
> concurrently. The code is factored out from
> the existing open-coded lock
I have a btrfs filesystem which I want to scrub. This is a multi-TB
filesystem and will take well over 24 hours to scrub.
Unfortunately, the scrub turns out to be quite intrusive on the system
(even when making sure it runs at very low priority via ionice and nice).
Operations on other disks run exce
Hi Andrew / David,
>> > > -ZSTD_parameters params = ZSTD_getParams(level, src_len, 0);
>> > > +static ZSTD_parameters params;
>> >
>> > > +
>> > > +params = ZSTD_getParams(level, src_len, 0);
>> >
>> > No, that's broken; the params can't be static as they depend on the level
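To make the objection concrete: ZSTD_getParams() computes the parameters from
the compression level and the estimated source size, so the result is
inherently per-call; a single static copy would be stale for any other
(level, src_len) pair and racy between concurrent compressors. A minimal
userspace sketch of the correct shape (assuming zstd's advanced API, where
ZSTD_parameters is only visible under ZSTD_STATIC_LINKING_ONLY; this is not
the kernel wrapper code):

#define ZSTD_STATIC_LINKING_ONLY  /* for ZSTD_parameters / ZSTD_getParams */
#include <zstd.h>

void compress_one(int level, size_t src_len)
{
        /* Correct: computed per call, since it depends on both the level
         * and the input size. Making this static would break any caller
         * using a different (level, src_len) pair. */
        ZSTD_parameters params = ZSTD_getParams(level, src_len, 0);

        /* ... use params for this compression call only ... */
        (void)params;
}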
A (D)ouble (R)eader (W)riter lock is a locking primitive that allows
either multiple readers or multiple writers, but not both, to hold it
concurrently. The code is factored out from
the existing open-coded locking scheme used to exclude pending
snapshots from nocow writers a
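To illustrate those semantics only, here is a userspace sketch with POSIX
threads, not the kernel implementation: each side shares the lock with its
own kind and excludes the other.

#include <pthread.h>

/* Userspace sketch of a DRW lock: any number of "readers" OR any
 * number of "writers", never both at once. Not the btrfs code. */
struct drw_lock {
        pthread_mutex_t mutex;
        pthread_cond_t cond;
        int readers;  /* e.g. snapshot creators */
        int writers;  /* e.g. nocow writers */
};

void drw_read_lock(struct drw_lock *d)
{
        pthread_mutex_lock(&d->mutex);
        while (d->writers > 0)           /* wait out the other side */
                pthread_cond_wait(&d->cond, &d->mutex);
        d->readers++;                    /* share with fellow readers */
        pthread_mutex_unlock(&d->mutex);
}

void drw_read_unlock(struct drw_lock *d)
{
        pthread_mutex_lock(&d->mutex);
        if (--d->readers == 0)
                pthread_cond_broadcast(&d->cond);  /* let writers in */
        pthread_mutex_unlock(&d->mutex);
}

/* drw_write_lock()/drw_write_unlock() are the mirror image, waiting on
 * d->readers and bumping d->writers. */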
This patch removes all the haphazard code implementing nocow writer
exclusion from pending snapshot creation and switches to using the drw
lock to ensure this invariant still holds. "Readers" are snapshot
creators from create_snapshot and "writers" are nocow writers from the
buffered write path or btrfs_s
This patchset first factors out the open-coded locking, which essentially
implements a lock that allows either multiple readers or multiple writers
but not both. Then patch 2 just converts the code to using the newly
introduced lock. The individual patch descriptions contain more information about
On Mon, May 13, 2019 at 6:59 PM David Sterba wrote:
>
> On Mon, May 13, 2019 at 05:43:55PM +0100, Filipe Manana wrote:
> > On Mon, May 13, 2019 at 5:31 PM David Sterba wrote:
> > >
> > > On Mon, Apr 22, 2019 at 04:44:09PM +0100, fdman...@kernel.org wrote:
> > > > From: Filipe Manana
> > > > ---
From: Filipe Manana
When we log an inode, regardless of logging it completely or only that it
exists, we always update it as logged (logged_trans and last_log_commit
fields of the inode are updated). This is generally fine and saves future
attempts to log it from having to do repeated work that
On 6.06.19 г. 13:07 ч., Johannes Thumshirn wrote:
> Nikolay reported the following KASAN splat when running btrfs/048:
>
> [ 1843.470920]
> ==
> [ 1843.471971] BUG: KASAN: slab-out-of-bounds in strncmp+0x66/0xb0
> [ 1843.472775] R
The current btrfs/048 test case did not check the behavior of properties
that take options, like compression with the compression level supplied.
Add test cases for compression with a compression level as well, so we can
be sure we don't regress there.
Signed-off-by: Johannes Thumshirn
Reviewed-by: F
On Thu, Jun 6, 2019 at 11:10 AM Johannes Thumshirn wrote:
>
> The current btrfs/048 test case did not check the behavior of properties
> that take options, like compression with the compression level supplied.
>
> Add test cases for compression with a compression level as well, so we can be
> sure we d
On Thu, Jun 6, 2019 at 8:56 AM Naohiro Aota wrote:
>
> Currently, btrfs does not consult seed devices to start readahead. As a
> result, if a readahead zone is added to the seed devices, btrfs_reada_wait()
> waits indefinitely for the reada_ctl to finish.
>
> You can reproduce the hang by modifying b
From: Filipe Manana
Check that if we write some data to a file, its inode gets evicted (while
its parent directory's inode is not evicted, due to being in use), and we
then rename the file and fsync it, the file's data is not lost after a
power failure.
This currently passes on xfs, ext4 and f2fs but f
Currently we are doing a pretty slow search for system chunks before
restoring real data.
The current behavior is to search all clusters for the chunk tree root
first, then search all clusters again and again for every chunk tree
block.
This causes recursive calls and a pretty slow start-up; the only go
Introduce a new helper function, is_in_sys_chunks(), to determine if an
item is in the range of system chunks.
Since btrfs-image will merge adjacent extents of the same type into one
item, this function is designed to return true for any byte in a system
chunk range.
Signed-off-by: Qu Wenruo
---
image/
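A hedged sketch of what such a range test could look like, assuming the
system chunk ranges get collected into an array beforehand; the names and
layout are illustrative, not the actual btrfs-image code:

#include <stdbool.h>
#include <stdint.h>

struct range { uint64_t start; uint64_t len; };

/* Filled in while scanning the chunk tree; the size is illustrative. */
static struct range sys_chunk_ranges[512];
static int nr_sys_chunk_ranges;

static bool is_in_sys_chunks(uint64_t bytenr, uint64_t len)
{
        int i;

        for (i = 0; i < nr_sys_chunk_ranges; i++) {
                const struct range *r = &sys_chunk_ranges[i];

                /* Overlap test: true if any byte of [bytenr, bytenr + len)
                 * falls inside this system chunk range. */
                if (bytenr < r->start + r->len && bytenr + len > r->start)
                        return true;
        }
        return false;
}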
From: Filipe Manana
When we log an inode, regardless of logging it completely or only that it
exists, we always update it as logged (logged_trans and last_log_commit
fields of the inode are updated). This is generally fine and saves future
attempts to log it from having to do repeated work that
Before this patch, we were using a very inefficient way to search
chunks:
We iterate through all clusters to find the chunk root tree block first,
then re-iterate all clusters to find every child tree block.
Every time, we need to iterate all clusters just to find a chunk tree
block.
This i
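A rough, hypothetical sketch of the single-pass idea: remember where each
chunk tree block lives while walking the clusters once, so later lookups
search a small array instead of rescanning everything. All names here are
made up for illustration:

#include <stdint.h>
#include <stdlib.h>

/* Hypothetical record of a tree block found during the single scan. */
struct block_loc {
        uint64_t bytenr;        /* logical address of the tree block */
        uint64_t image_offset;  /* where its copy sits in the image file */
};

static struct block_loc *chunk_blocks;
static size_t nr_chunk_blocks;

/* Called once per chunk tree block seen while walking all clusters a
 * single time; later lookups search this array instead of re-reading
 * every cluster. */
static int record_chunk_block(uint64_t bytenr, uint64_t image_offset)
{
        struct block_loc *tmp;

        tmp = realloc(chunk_blocks, (nr_chunk_blocks + 1) * sizeof(*tmp));
        if (!tmp)
                return -1;
        chunk_blocks = tmp;
        chunk_blocks[nr_chunk_blocks].bytenr = bytenr;
        chunk_blocks[nr_chunk_blocks].image_offset = image_offset;
        nr_chunk_blocks++;
        return 0;
}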
Signed-off-by: Qu Wenruo
---
image/main.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/image/main.c b/image/main.c
index 4fba8283..fb9fc48c 100644
--- a/image/main.c
+++ b/image/main.c
@@ -2702,7 +2702,7 @@ int main(int argc, char *argv[])
create = 0
This patch will export disk-io.c::check_super() as btrfs_check_super()
and use it in btrfs-image for extra verification.
Signed-off-by: Qu Wenruo
---
disk-io.c    | 6 +++---
disk-io.h    | 1 +
image/main.c | 5 +++++
3 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/disk-io.c b/di
[BUG]
When there are over 32 (in my example, 35) online CPUs, btrfs-image -c9
will just hang.
[CAUSE]
Btrfs-image has a hard-coded limit (32) on how many threads we can use.
For the "-t" option we check this upper limit.
But when we don't specify the "-t" option and do specify the "-c" option,
btrfs-
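A minimal sketch of the clamp this implies, assuming the thread count falls
back to the online CPU count when -t is absent; the function and macro names
are illustrative, not the btrfs-image source:

#include <unistd.h>

#define MAX_WORKER_THREADS 32  /* the hard-coded limit mentioned above */

/* Pick a worker count: honor an explicit -t value, otherwise derive it
 * from the online CPUs, but never exceed the compiled-in maximum. */
static int pick_num_threads(int requested)
{
        long n = requested > 0 ? requested
                               : sysconf(_SC_NPROCESSORS_ONLN);

        if (n < 1)
                n = 1;
        if (n > MAX_WORKER_THREADS)
                n = MAX_WORKER_THREADS;
        return (int)n;
}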
Signed-off-by: Qu Wenruo
---
image/metadump.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/image/metadump.h b/image/metadump.h
index 8ace60f5..f85c9bcf 100644
--- a/image/metadump.h
+++ b/image/metadump.h
@@ -23,8 +23,8 @@
#include "ctree.h"
#define HEADER_MAGIC
This patchset can be fetched from github:
https://github.com/adam900710/btrfs-progs/tree/image_data_dump
It is based on the v5.1 tag.
This patchset contains the following main features:
- various small fixes for btrfs-image, from indentation misalignment and
  SZ_* cleanups to too many CPU cores causing btrfs
The original dump format only contains a @magic member to verify the
format. This means that if we want to introduce a new on-disk format or
change certain size limits, we can only introduce a new magic as a kind
of version.
This patch introduces the framework to allow multiple magics to
co-exist for furth
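As an illustration of such a framework, a sketch of a multi-magic version
check; both magic values below are made up and the function is hypothetical:

#include <stdint.h>

/* Hypothetical magic values; the real ones live in image/metadump.h. */
#define DUMP_MAGIC_V1 0x0123456789abcdefULL
#define DUMP_MAGIC_V2 0xfedcba9876543210ULL

/* Map a header magic to a dump format version, or -1 if unknown, so new
 * formats can be added without breaking readers of the old one. */
static int dump_version_from_magic(uint64_t magic)
{
        switch (magic) {
        case DUMP_MAGIC_V1:
                return 1;
        case DUMP_MAGIC_V2:
                return 2;
        default:
                return -1;
        }
}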
This new data dump feature will dump the whole image, not only the
existing tree blocks but also all of its data extents(*).
This feature relies on the new dump format (_DUmP_v1), as it needs an
extra-large extent size limit, and older btrfs-image dumps can't handle
such a large item/cluster size.
Sinc
The current btrfs/048 test case did not check the behavior of properties
that take options, like compression with the compression level supplied.
Add test cases for compression with a compression level as well, so we can
be sure we don't regress there.
Signed-off-by: Johannes Thumshirn
---
tests/btr
Nikolay reported the following KASAN splat when running btrfs/048:
[ 1843.470920]
==
[ 1843.471971] BUG: KASAN: slab-out-of-bounds in strncmp+0x66/0xb0
[ 1843.472775] Read of size 1 at addr 888111e369e2 by task btrfs/3979
[ 1843
On Thu, Jun 06, 2019 at 08:43:34AM +, Naohiro Aota wrote:
[...]
> > +bool btrfs_compress_is_valid_type(const char *str, size_t len)
> > +{
> > + int i;
> > +
> > + for (i = 1; i < ARRAY_SIZE(btrfs_compress_types); i++) {
> > + size_t comp_len = strlen(btrfs_compress_types[i]);
> >
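For reference, a completed sketch of the helper being quoted, under the
assumption that the table holds the textual type names; the length guard
before strncmp() is the point of the discussion:

#include <stdbool.h>
#include <string.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Assumed table layout: index 0 is "no compression". */
static const char * const btrfs_compress_types[] = { "", "zlib", "lzo", "zstd" };

bool btrfs_compress_is_valid_type(const char *str, size_t len)
{
        size_t i;

        for (i = 1; i < ARRAY_SIZE(btrfs_compress_types); i++) {
                size_t comp_len = strlen(btrfs_compress_types[i]);

                /* Skip candidates longer than the (unterminated) value
                 * so strncmp() never reads past the buffer. */
                if (len < comp_len)
                        continue;
                if (!strncmp(btrfs_compress_types[i], str, comp_len))
                        return true;
        }
        return false;
}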
On 2019/06/06 17:01, Johannes Thumshirn wrote:
> Nikolay reported the following KASAN splat when running btrfs/048:
>
(snip)
>
> This is caused by supplying a too-short compression value ('lz') in the
> test case and comparing it to 'lzo' with strncmp() and a length of 3.
> strncmp() read past th
On 2019/06/06 17:05, Nikolay Borisov wrote:
>
>
> On 6.06.19 г. 10:49 ч., Naohiro Aota wrote:
>> The xattr value is not a NULL-terminated string. When you specify "lz" as the
>> property value, strncmp("lzo", value, 3) will try to read one byte after
>> the value buffer, causing the following OOB acces
On 6.06.19 г. 10:49 ч., Naohiro Aota wrote:
> The xattr value is not a NULL-terminated string. When you specify "lz" as the
> property value, strncmp("lzo", value, 3) will try to read one byte after
> the value buffer, causing the following OOB access. Fix this out-of-bounds
> access by explicitly checking the r
On 6.06.19 г. 11:01 ч., Johannes Thumshirn wrote:
> Nikolay reported the following KASAN splat when running btrfs/048:
>
> [ 1843.470920]
> ==
> [ 1843.471971] BUG: KASAN: slab-out-of-bounds in strncmp+0x66/0xb0
> [ 1843.472775] R
Nikolay reported the following KASAN splat when running btrfs/048:
[ 1843.470920]
==
[ 1843.471971] BUG: KASAN: slab-out-of-bounds in strncmp+0x66/0xb0
[ 1843.472775] Read of size 1 at addr 888111e369e2 by task btrfs/3979
[ 1843
Currently, btrfs does not consult seed devices to start readahead. As a
result, if a readahead zone is added to the seed devices, btrfs_reada_wait()
waits indefinitely for the reada_ctl to finish.
You can reproduce the hang by modifying btrfs/163 to have a larger initial
file size (e.g. xfs_io pwrite 4
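Conceptually, the fix needs the readahead start path to walk the seed device
lists chained behind the main one, not just the main list. A schematic
sketch, with simplified stand-ins for the kernel structures (not the actual
patch):

/* Simplified stand-ins for the kernel structures involved. */
struct device_list;

struct fs_devices {
        struct fs_devices *seed;      /* next seed filesystem, if any */
        struct device_list *devices;  /* devices in this filesystem */
};

static void reada_start_on_list(struct device_list *devices)
{
        /* kick readahead for each device on this list ... */
}

/* Walk the main device list and every chained seed list, so zones
 * inserted on seed devices are serviced instead of waiting forever. */
static void reada_start_all(struct fs_devices *fs_devices)
{
        for (; fs_devices; fs_devices = fs_devices->seed)
                reada_start_on_list(fs_devices->devices);
}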
The xattr value is not a NULL-terminated string. When you specify "lz" as
the property value, strncmp("lzo", value, 3) will try to read one byte
after the value buffer, causing the following OOB access. Fix this
out-of-bounds access by explicitly checking the required length.
[ 1519.998589]
==
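A minimal sketch of the fix's shape: compare an unterminated xattr value
against a known name only after checking the lengths, so the comparison
never reads past the buffer. The helper name is made up:

#include <stdbool.h>
#include <string.h>

/* value points at a buffer of exactly value_len bytes, with no
 * terminating NUL, so plain strncmp(want, value, 3) can overrun it. */
static bool prop_value_is(const char *want, const char *value,
                          size_t value_len)
{
        size_t want_len = strlen(want);

        return value_len == want_len && !memcmp(want, value, want_len);
}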