We have just encountered the same bug on 4.9.0-rc2. Is there a fix yet?
> kernel BUG at fs/btrfs/ctree.c:3172!
> invalid opcode: [#1] PREEMPT SMP DEBUG_PAGEALLOC
> CPU: 0 PID: 22702 Comm: trinity-c40 Not tainted 4.9.0-rc4-think+ #1
> task: 8804ffde37c0 task.stack: c90002188000
> RIP:
On Wed, Nov 16, 2016 at 04:29:34PM -0800, Omar Sandoval wrote:
> From: Omar Sandoval
>
> There have been a couple of logic bugs in `btrfs_get_extent()` which
> could lead to spurious -EEXIST errors from read or write. This test
> exercises those conditions by having two threads
Options -f, -F and --sort don't work because a conditional expression
in an ASSERT is wrong.
Signed-off-by: Tsutomu Itoh
---
qgroup.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/qgroup.c b/qgroup.c
index 9d10cb8..071d15e 100644
--- a/qgroup.c
+++
On Wed, Nov 16, 2016 at 11:24:33PM +0100, Niccolò Belli wrote:
> On Tuesday, 15 November 2016 at 18:52:01 CET, Zygo Blaxell wrote:
> >Like I said, millions of extents per week...
> >
> >64K is an enormous dedup block size, especially if it comes with a 64K
> >alignment constraint as well.
> >
> >These
For filesystems that support reflink, some of them (OK, btrfs again) don't
split the SHARED flag when reporting extents via fiemap.
For example:
        0                4K               8K
File1:  |<---------- Extent 0 ---------->|
                 \                /
                  |<- On-disk extent ->|
                 /                \
File2:  |<---------- Extent 0 ---------->|
Fs supports explicit SHARED extent
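The SHARED flag in question is reported per extent by the FIEMAP ioctl. As a rough userspace sketch (not taken from the test case; the helper name and the 32-extent cap are my own choices), checking whether any extent of an open file carries FIEMAP_EXTENT_SHARED looks like this:

```c
#include <assert.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

/* Return 1 if any extent of the file is SHARED, 0 if none,
 * -1 on allocation failure or if the fs does not support fiemap. */
int has_shared_extent(int fd)
{
    size_t sz = sizeof(struct fiemap) + 32 * sizeof(struct fiemap_extent);
    struct fiemap *fm = calloc(1, sz);
    unsigned int i;
    int ret = 0;

    if (!fm)
        return -1;
    fm->fm_length = ~0ULL;           /* map the whole file */
    fm->fm_flags = FIEMAP_FLAG_SYNC; /* flush delalloc so extents exist */
    fm->fm_extent_count = 32;        /* arbitrary cap for this sketch */

    if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0)
        ret = -1;
    else
        for (i = 0; i < fm->fm_mapped_extents; i++)
            if (fm->fm_extents[i].fe_flags & FIEMAP_EXTENT_SHARED)
                ret = 1;
    free(fm);
    return ret;
}
```

On btrfs, this is exactly where the whole-extent SHARED reporting shows up: a partially shared extent still comes back with the flag set.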
At 11/17/2016 04:30 AM, Hans van Kranenburg wrote:
In the last two days I've added the --blockgroup option to btrfs heatmap
to let it create pictures of block group internals.
Examples and more instructions are to be found in the README at:
At 11/17/2016 05:12 AM, Dave Chinner wrote:
(Did you forget to cc fste...@vger.kernel.org?)
On Tue, Nov 15, 2016 at 04:13:32PM +0800, Qu Wenruo wrote:
Since btrfs always returns the whole extent even if part of it is shared
with other files, the hole/extent counts differ for "file1" in this
On Thu, Nov 10, 2016 at 02:45:36PM -0800, Omar Sandoval wrote:
> On Thu, Nov 10, 2016 at 02:38:14PM -0800, Liu Bo wrote:
> > On Thu, Nov 10, 2016 at 12:24:13PM -0800, Omar Sandoval wrote:
> > > On Thu, Nov 10, 2016 at 12:09:06PM -0800, Omar Sandoval wrote:
> > > > On Thu, Nov 10, 2016 at
From: Omar Sandoval
There have been a couple of logic bugs in `btrfs_get_extent()` which
could lead to spurious -EEXIST errors from read or write. This test
exercises those conditions by having two threads race to add an extent
to the extent map.
This is fixed by Linux commit
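The race itself needs two concurrent lookups, which is what the test arranges; the decision the fix changes can be modeled in a few lines of userspace C (the toy structure and names are mine, not the btrfs extent map):

```c
#include <assert.h>
#include <errno.h>

/* Toy stand-in for a cached extent mapping; not the btrfs structure. */
struct toy_map {
    int have;
    long start, len;
};

/* Insert a mapping. The point of the fix: when a concurrent lookup
 * already inserted the *same* mapping, that is a benign race and the
 * caller should see success, not -EEXIST. */
int toy_insert(struct toy_map *m, long start, long len)
{
    if (m->have) {
        if (m->start == start && m->len == len)
            return 0;        /* identical insert lost the race: fine */
        return -EEXIST;      /* genuinely conflicting mapping */
    }
    m->have = 1;
    m->start = start;
    m->len = len;
    return 0;
}
```

The spurious -EEXIST the test hunts for is what you get when the "identical insert" branch is missing or its condition is wrong.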
On Tuesday, 15 November 2016 at 18:52:01 CET, Zygo Blaxell wrote:
Like I said, millions of extents per week...
64K is an enormous dedup block size, especially if it comes with a 64K
alignment constraint as well.
These are the top ten duplicate block sizes from a sample of 95251
dedup ops on a
(Did you forget to cc fste...@vger.kernel.org?)
On Tue, Nov 15, 2016 at 04:13:32PM +0800, Qu Wenruo wrote:
> Since btrfs always returns the whole extent even if part of it is shared
> with other files, the hole/extent counts differ for "file1" in this
> test case.
>
> For example:
>
> /--
From: Omar Sandoval
Also, the other progress messages go to stderr, so "checking extents"
probably should, as well.
Fixes: c7a1f66a205f ("btrfs-progs: check: switch some messages to common helpers")
Signed-off-by: Omar Sandoval
---
As a side note, it seems
On 11/02/2016 05:13 PM, Piotr Pawłow wrote:
> On 02.11.2016 15:23, René Bühlmann wrote:
>> Origin: S2 S3
>>
>> USB: S1 S2
>>
>> SSH: S1
>>
>> Transferring S3 to USB is no problem as S2 is on both btrfs drives. But
>> how can I transfer S3 to SSH?
> If I understand correctly how send / receive
In the last two days I've added the --blockgroup option to btrfs heatmap
to let it create pictures of block group internals.
Examples and more instructions are to be found in the README at:
https://github.com/knorrie/btrfs-heatmap/blob/master/README.md
To use the new functionality it needs a
This updates generic/098 by adding a sync option, i.e. running 'sync' after the
second write; even with that sync, btrfs's NO_HOLES feature could still leave a
wrong isize after remount.
This gets fixed by the patch
'Btrfs: fix truncate down when no_holes feature is enabled'
Signed-off-by: Liu Bo
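The btrfs NO_HOLES bug itself needs a btrfs mount and a remount to show, but the invariant generic/098 now checks can be sketched in userspace C (path and sizes are arbitrary; fsync plus reopen stands in for the test's sync plus remount):

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Write `len` bytes, sync, truncate down to `newsize`, sync again,
 * then reopen the path and return the size the filesystem reports. */
long write_sync_truncate(const char *path, size_t len, off_t newsize)
{
    char buf[4096];
    struct stat st;
    int fd;

    if (len > sizeof(buf))
        return -1;
    memset(buf, 0xaa, len);

    fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0600);
    if (fd < 0)
        return -1;
    if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0 ||
        ftruncate(fd, newsize) != 0 || fsync(fd) != 0) {
        close(fd);
        return -1;
    }
    close(fd);

    if (stat(path, &st) != 0)
        return -1;
    return (long)st.st_size;   /* must equal newsize on a correct fs */
}
```

On the buggy NO_HOLES setup, the size reported after the remount could exceed the truncated size; that is what the test asserts against.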
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at
On Tue, Nov 15, 2016 at 02:53:12PM +0800, Eryu Guan wrote:
> On Fri, Nov 11, 2016 at 02:30:04PM -0800, Liu Bo wrote:
> > This updates generic/098 by adding a sync option, i.e. 'sync' after the
> > second
> > write, and with btrfs's NO_HOLES, we could still get wrong isize after
> > remount.
> >
On Wednesday, 16 November 2016 at 07:57:08 CET, Austin S. Hemmelgarn wrote:
> On 2016-11-16 06:04, Martin Steigerwald wrote:
> > On Wednesday, 16 November 2016 at 16:00:31 CET, Roman Mamedov wrote:
> >> On Wed, 16 Nov 2016 11:55:32 +0100
> >>
> >> Martin Steigerwald
On Mon, Oct 31, 2016 at 05:47:24PM +0100, David Sterba wrote:
> On Thu, Oct 27, 2016 at 08:52:33AM +0100, Domagoj Tršan wrote:
> > The csum member of struct btrfs_super_block is an array of u8. It makes sense
> > that the function btrfs_csum_final should also be declared to accept u8 *. I
> > changed
On Wed, Nov 16, 2016 at 01:52:07PM +0100, Christoph Hellwig wrote:
> this series has a few patches that switch btrfs to use the proper helpers for
> accessing bio internals. This helps to prepare for supporting multi-page
> bio_vecs, which are currently under development.
Looks good to me,
On Thu, Nov 10, 2016 at 03:17:41PM +0530, Shailendra Verma wrote:
> From: "Shailendra Verma"
>
> There is no need to call kfree() if memdup_user() fails, as no memory
> was allocated and the error in the error-valued pointer should be returned.
>
> Signed-off-by:
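The kernel rule here: memdup_user() allocates nothing when it fails (it returns an error-valued pointer), so the error path must not kfree(). A userspace analogue of the correct pattern (helper names are mine, not kernel API):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Userspace stand-in for memdup_user(): duplicate a buffer, returning
 * NULL on failure. On failure nothing was allocated, so the caller
 * must NOT free the result. */
void *dup_buf(const void *src, size_t len)
{
    void *p = malloc(len);
    if (!p)
        return NULL;
    return memcpy(p, src, len);
}

/* Correct caller: propagate the error; no free() on the failure path. */
int consume(const void *src, size_t len)
{
    void *copy = dup_buf(src, len);
    if (!copy)
        return -ENOMEM;   /* nothing to free here */
    /* ... use the copy ... */
    free(copy);
    return 0;
}
```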
On Mon, Nov 14, 2016 at 09:55:34AM +0800, Qu Wenruo wrote:
> At 11/12/2016 04:22 AM, Liu Bo wrote:
> > On Tue, Oct 11, 2016 at 02:47:42PM +0800, Wang Xiaoguang wrote:
> >> If we use mount option "-o max_inline=sectorsize", say 4096, indeed
> >> even for a fresh fs, say nodesize is 16k, we can not
On 11/14/2016 06:11 PM, Liu Bo wrote:
On Mon, Nov 14, 2016 at 02:06:21PM -0500, Josef Bacik wrote:
In order to do hole punching we have a block reserve to hold the reservation we
need to drop the extents in our range. Since we could end up dropping a lot of
extents we set rsv->failfast so we
In order to do hole punching we have a block reserve to hold the reservation we
need to drop the extents in our range. Since we could end up dropping a lot of
extents we set rsv->failfast so we can just loop around again and drop the
remaining of the range. Unfortunately we unconditionally fill
System panic'd overnight running 4.9rc5 & rsync. Attached a photo of
the stack trace, and the 38 call traces in a 2 minute window shortly
before, to the bugzilla case for those not on its e-mail list:
https://bugzilla.kernel.org/show_bug.cgi?id=186671
On Mon, Nov 14, 2016 at 3:56 PM, E V
On 2016-11-16 06:04, Martin Steigerwald wrote:
On Wednesday, 16 November 2016 at 16:00:31 CET, Roman Mamedov wrote:
On Wed, 16 Nov 2016 11:55:32 +0100
Martin Steigerwald wrote:
I do think that the above kernel messages invite such an interpretation,
though. I
Instead of using bi_vcnt to calculate it.
Signed-off-by: Christoph Hellwig
---
fs/btrfs/compression.c | 7 ++-
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 12a631d..8618ac3 100644
---
Use the bvec offset and len members to prepare for multipage bvecs.
Signed-off-by: Christoph Hellwig
---
fs/btrfs/compression.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 8618ac3..27e9feb
The number of pages in a bio is a bad indicator for the number of
splits lower levels could do, and with the multipage bio_vec work even
that measure goes away and will become a number of segments of physically
contiguous areas instead. Check the total bio size vs the sector size
instead, which
Rework the loop a little bit to use the generic bio_for_each_segment_all
helper for iterating over the bio.
Signed-off-by: Christoph Hellwig
---
fs/btrfs/file-item.c | 31 +++
1 file changed, 11 insertions(+), 20 deletions(-)
diff --git
And remove the bogus check for a NULL return value from kmap, which
can't happen. While we're at it: I don't think that kmapping up to 256
will work without deadlocks on highmem machines, a better idea would
be to use vm_map_ram to map all of them into a single virtual address
range.
Just use bio_for_each_segment_all to iterate over all segments.
Signed-off-by: Christoph Hellwig
---
fs/btrfs/raid56.c | 16 ++--
1 file changed, 6 insertions(+), 10 deletions(-)
diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index d016d4a..da941fb 100644
---
Use bio_for_each_segment_all to iterate over the segments instead.
This requires a bit of reshuffling so that we only look up the ordered
item once inside the bio_for_each_segment_all loop.
Signed-off-by: Christoph Hellwig
---
fs/btrfs/file-item.c | 21 ++---
1
Just use bio_for_each_segment_all to iterate over all segments.
Signed-off-by: Christoph Hellwig
---
fs/btrfs/inode.c | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 147df4c..3f09cb6 100644
--- a/fs/btrfs/inode.c
Pass the full bio to the decompression routines and use bio iterators
to iterate over the data in the bio.
Signed-off-by: Christoph Hellwig
---
fs/btrfs/compression.c | 122 +
fs/btrfs/compression.h | 12 ++---
fs/btrfs/lzo.c
Hi all,
this series has a few patches that switch btrfs to use the proper helpers for
accessing bio internals. This helps to prepare for supporting multi-page
bio_vecs, which are currently under development.
On 2016-11-16 05:55, Martin Steigerwald wrote:
On Wednesday, 16 November 2016 at 15:43:36 CET, Roman Mamedov wrote:
On Wed, 16 Nov 2016 11:25:00 +0100
Martin Steigerwald wrote:
merkaba:~> mount -o degraded,clear_cache /dev/satafp1/backup /mnt/zeit
mount:
On Wednesday, 16 November 2016 at 11:55:32 CET, you wrote:
> So mounting work although for some reason scrubbing is aborted (I had this
> issue a long time ago on my laptop as well). After removing /var/lib/btrfs
> scrub status file for the filesystem:
>
> merkaba:~> btrfs scrub start
On Wednesday, 16 November 2016 at 16:00:31 CET, Roman Mamedov wrote:
> On Wed, 16 Nov 2016 11:55:32 +0100
>
> Martin Steigerwald wrote:
> > I do think that the above kernel messages invite such an interpretation,
> > though. I took the "BTRFS: open_ctree failed"
On Wed, 16 Nov 2016 11:55:32 +0100
Martin Steigerwald wrote:
> I do think that the above kernel messages invite such an interpretation,
> though. I took the "BTRFS: open_ctree failed" message as indicating some
> structural issue with the filesystem.
For the
On Wednesday, 16 November 2016 at 15:43:36 CET, Roman Mamedov wrote:
> On Wed, 16 Nov 2016 11:25:00 +0100
>
> Martin Steigerwald wrote:
> > merkaba:~> mount -o degraded,clear_cache /dev/satafp1/backup /mnt/zeit
> > mount: Wrong filesystem type, invalid
On Wed, 16 Nov 2016 11:25:00 +0100
Martin Steigerwald wrote:
> merkaba:~> mount -o degraded,clear_cache /dev/satafp1/backup /mnt/zeit
> mount: Wrong filesystem type, invalid options, the
> superblock of /dev/mapper/satafp1-backup is damaged,
Hello!
A degraded BTRFS RAID 1 from one 3TB SATA HDD of my former workstation is not
mountable.
Debian 4.8 kernel + btrfs-tools 4.7.3.
A btrfs restore seems to work well enough, so on one hand there is no
urgency. But on the other hand I want to repurpose the hard disk and I
think I want to do