On Thursday, July 14, 2016 07:47:04 PM Chris Mason wrote:
> On 07/14/2016 07:31 PM, Omar Sandoval wrote:
> > From: Omar Sandoval
> >
> > So it turns out that the free space tree bitmap handling has always been
> > broken on big-endian systems. Totally my bad.
> >
> > Patch 1 fixes this. Technicall
On Fri, Jul 15, 2016 at 11:47:07AM +0530, Chandan Rajendra wrote:
> On Thursday, July 14, 2016 02:29:32 PM David Sterba wrote:
> > The calculation of extent_buffer::pages size was done for 4k PAGE_SIZE,
> > but this wastes 15 unused pointers on arches with large page size. E.g.
> > on ppc64 this giv
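(A minimal userspace sketch of the sizing arithmetic under discussion; the 64K
BTRFS_MAX_METADATA_BLOCKSIZE constant matches the kernel, everything else here
is only illustrative:

    #include <stdio.h>

    #define BTRFS_MAX_METADATA_BLOCKSIZE (64 * 1024)
    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
            unsigned long page_sizes[] = { 4096, 16384, 65536 };
            unsigned i;

            for (i = 0; i < 3; i++) {
                    unsigned long ps = page_sizes[i];
                    /* old: pages[] sized for 4K pages, always 16 slots;
                     * new: sized from the actual PAGE_SIZE */
                    printf("PAGE_SIZE %6lu: old = %d slots, new = %lu slots\n",
                           ps, BTRFS_MAX_METADATA_BLOCKSIZE / 4096,
                           DIV_ROUND_UP(BTRFS_MAX_METADATA_BLOCKSIZE, ps));
            }
            return 0;
    }

On a 64K-page arch such as ppc64 that is 1 slot instead of 16, i.e. the 15
wasted pointers mentioned above.)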
On Fri, Jul 15, 2016 at 10:22:52AM +0800, Qu Wenruo wrote:
>
>
> At 07/15/2016 09:40 AM, Liu Bo wrote:
> > I have a valid btrfs image which contains,
> > ...
> > item 10 key (1103101952 BLOCK_GROUP_ITEM 1288372224) itemoff 15947
> > itemsize 24
> > block group used 655360
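(For readers decoding the dump: a hedged sketch of how those numbers map onto
the on-disk block group item. The three-u64 layout, and hence itemsize 24,
matches fs/btrfs/ctree.h; the values are copied from the dump above.

    #include <stdint.h>
    #include <stdio.h>

    /* On-disk layout: three little-endian u64s -> itemsize 24. */
    struct btrfs_block_group_item {
            uint64_t used;           /* "block group used 655360" */
            uint64_t chunk_objectid;
            uint64_t flags;
    } __attribute__((__packed__));

    int main(void)
    {
            /* key (1103101952 BLOCK_GROUP_ITEM 1288372224):
             * objectid = start of the block group, offset = its length */
            uint64_t start = 1103101952ULL, length = 1288372224ULL;

            printf("block group [%llu, %llu), itemsize %zu\n",
                   (unsigned long long)start,
                   (unsigned long long)(start + length),
                   sizeof(struct btrfs_block_group_item));
            return 0;
    }
)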
Hello
I glued together 6 disks in linear LVM fashion (no RAID) to obtain one large
file system (see below). One of the 6 disks failed. What is the best way to
recover from this?
Thanks to RAID1 metadata I can still access the data residing on the
remaining 5 disks after mounting ro,forc
Hey Qu, all
On 07/15/2016 05:56 AM, Qu Wenruo wrote:
>
> The good news is, we have a patch to slightly speed up the mount by
> avoiding reading out unrelated tree blocks.
>
> In our test environment, it takes 15% less time to mount a fs filled
> with 16K files (2T used space).
>
> https://patchwor
On Friday, July 15, 2016 11:44:06 AM David Sterba wrote:
> On Fri, Jul 15, 2016 at 11:47:07AM +0530, Chandan Rajendra wrote:
> > On Thursday, July 14, 2016 02:29:32 PM David Sterba wrote:
> > > The calculation of extent_buffer::pages size was done for 4k PAGE_SIZE,
> > > but this wastes 15 unused pointers on arches with large page size.
On 2016-07-15 05:51, Matt wrote:
Hello
I glued together 6 disks in linear LVM fashion (no RAID) to obtain one large
file system (see below). One of the 6 disks failed. What is the best way to
recover from this?
Thanks to RAID1 metadata I can still access the data residing on the
remaining 5 disks after mounting ro,forc
On 07/15/2016 12:39 AM, Andrei Borzenkov wrote:
15.07.2016 00:20, Chris Mason wrote:
On 07/12/2016 05:50 PM, Goffredo Baroncelli wrote:
Hi All,
I developed a new btrfs command "btrfs insp phy"[1] to further
investigate this bug [2]. Using "btrfs insp phy" I developed a script
to trigger the bug.
Signed-off-by: David Sterba
---
fs/btrfs/ctree.h | 2 ++
fs/btrfs/extent-tree.c | 2 ++
2 files changed, 4 insertions(+)
diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 4274a7bfdaed..47ad088cfa00 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -1179,8 +1179,10 @@ struct btrf
The mixed block group reporting has been fixed by commit
ae02d1bd070767e109f4a6f1bb1f466e9698a355
"btrfs: fix mixed block count of available space"
Signed-off-by: David Sterba
---
fs/btrfs/super.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index 60e7
Add a missing comparison to op in an expression, which was forgotten when doing
the REQ_OP transition.
Fixes: b3d3fa519905 ("btrfs: update __btrfs_map_block for REQ_OP transition")
Signed-off-by: Vincent Stehlé
Cc: Mike Christie
Cc: Jens Axboe
---
Hi,
I saw that issue in linux-next.
Not sure if
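(The bug class, for anyone skimming: in C, "op == A || B" compares against A
but then truth-tests the bare constant B, so the condition is always true
whenever B is nonzero. A self-contained illustration with made-up enum names,
not the actual __btrfs_map_block hunk:

    #include <stdio.h>

    enum my_op { MY_OP_READ = 0, MY_OP_WRITE = 1, MY_OP_DISCARD = 3 };

    int main(void)
    {
            enum my_op op = MY_OP_READ;

            /* Buggy: "|| MY_OP_DISCARD" tests the constant 3, which is
             * always true, so this fires even for a read. */
            if (op == MY_OP_WRITE || MY_OP_DISCARD)
                    printf("buggy condition fires for op=%d\n", op);

            /* Fixed: both operands actually compare against op. */
            if (op == MY_OP_WRITE || op == MY_OP_DISCARD)
                    printf("fixed condition fires for op=%d\n", op);
            else
                    printf("fixed condition does not fire for op=%d\n", op);
            return 0;
    }
)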
15.07.2016 16:20, Chris Mason wrote:
>>>
>>> Interesting, thanks for taking the time to write this up. Is the
>>> failure specific to scrub? Or is parity rebuild in general also failing
>>> in this case?
>>>
>>
>> How do you rebuild parity without scrub as long as all devices appear to
>> be pres
On 07/15/2016 11:10 AM, Andrei Borzenkov wrote:
15.07.2016 16:20, Chris Mason wrote:
Interesting, thanks for taking the time to write this up. Is the
failure specific to scrub? Or is parity rebuild in general also failing
in this case?
How do you rebuild parity without scrub as long as all devices appear to be pres
On Tue, Jul 12, 2016 at 11:24:21AM -0700, Liu Bo wrote:
> Mounting a btrfs can resume previous balance operations asynchronously.
> A user got a crash when one drive had some corrupt sectors.
>
> Since balance can cancel itself in case of any error, we can gracefully
> return errors to upper laye
On 2016-07-14 23:45, Chris Mason wrote:
>
>
> On 07/12/2016 05:40 PM, Goffredo Baroncelli wrote:
>> Hi All,
>>
>> the enclosed patch adds a new btrfs sub command: "btrfs inspect
>> physical-find". The aim of this new command is to show the physical
>> placement on the disk of a file. Currently i
On 2016-07-14 23:20, Chris Mason wrote:
>
>
> On 07/12/2016 05:50 PM, Goffredo Baroncelli wrote:
>> Hi All,
>>
>> I developed a new btrfs command "btrfs insp phy"[1] to further
>> investigate this bug [2]. Using "btrfs insp phy" I developed a
>> script to trigger the bug. The bug is not always t
On 2016-07-15 06:39, Andrei Borzenkov wrote:
> 15.07.2016 00:20, Chris Mason wrote:
>>
>>
>> On 07/12/2016 05:50 PM, Goffredo Baroncelli wrote:
>>> Hi All,
>>>
>>> I developed a new btrfs command "btrfs insp phy"[1] to further
>>> investigate this bug [2]. Using "btrfs insp phy" I developed a script
>>> to trigger the bug.
On 07/15/2016 12:28 PM, Goffredo Baroncelli wrote:
On 2016-07-14 23:20, Chris Mason wrote:
On 07/12/2016 05:50 PM, Goffredo Baroncelli wrote:
Hi All,
I developed a new btrfs command "btrfs insp phy"[1] to further
investigate this bug [2]. Using "btrfs insp phy" I developed a
script to trigger the bug.
15.07.2016 19:29, Chris Mason wrote:
>
>> However I have to point out that this kind of test is very
>> difficult to do: the file cache could lead to reading old data, so
>> suggestions about how to flush the cache are welcome (I do some syncs,
>> unmount the filesystem and perform "echo 3 >/proc/sy
On Thu, Jul 14, 2016 at 11:16:47AM -0700, Omar Sandoval wrote:
> On Thu, Jul 14, 2016 at 02:12:58PM -0400, Chris Mason wrote:
> >
> >
> > On 07/14/2016 02:06 PM, Darrick J. Wong wrote:
> > > On Wed, Jul 13, 2016 at 03:19:38PM +0200, David Sterba wrote:
> > > > On Tue, Jul 12, 2016 at 10:26:43PM -
Hello,
I have a 5TB Seagate drive that uses SMR.
I was wondering if BTRFS is usable with this hard drive technology. So
first I searched the BTRFS wiki: nothing. Then Google.
* I found this: https://bbs.archlinux.org/viewtopic.php?id=203696
But this turned out to be an issue not related to B
> On 15 Jul 2016, at 14:10, Austin S. Hemmelgarn wrote:
>
> On 2016-07-15 05:51, Matt wrote:
>> Hello
>>
>> I glued together 6 disks in linear LVM fashion (no RAID) to obtain one large
>> file system (see below). One of the 6 disks failed. What is the best way to
>> recover from this?
>>
> T
On 2016-07-15 14:45, Matt wrote:
On 15 Jul 2016, at 14:10, Austin S. Hemmelgarn wrote:
On 2016-07-15 05:51, Matt wrote:
Hello
I glued together 6 disks in linear LVM fashion (no RAID) to obtain one large
file system (see below). One of the 6 disks failed. What is the best way to
recover from this?
From: Omar Sandoval
Copy le_test_bit() from the kernel and use that for the free space tree
bitmaps.
Signed-off-by: Omar Sandoval
---
Same sort of mistake as in the kernel. Applies to v4.6.1.
extent_io.c | 2 +-
extent_io.h | 19 +++
kerncompat.h | 3 ++-
3 files changed,
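(A hedged sketch of the idea, mirroring the kernel's little-endian bitops
rather than reproducing the btrfs-progs hunk: indexing the bitmap by byte
instead of by unsigned long yields the same bit numbering on big- and
little-endian hosts.

    #include <stdint.h>
    #include <stdio.h>

    #define BITS_PER_BYTE 8
    #define BIT_BYTE(nr) ((nr) / BITS_PER_BYTE)

    /* Test bit 'nr' of a little-endian bitmap, one byte at a time.
     * Byte-granular access has no endianness, which is the point. */
    static inline int le_test_bit(int nr, const uint8_t *addr)
    {
            return 1U & (addr[BIT_BYTE(nr)] >> (nr & (BITS_PER_BYTE - 1)));
    }

    int main(void)
    {
            uint8_t bitmap[2] = { 0x01, 0x80 };  /* bits 0 and 15 set */

            printf("bit 0: %d, bit 8: %d, bit 15: %d\n",
                   le_test_bit(0, bitmap), le_test_bit(8, bitmap),
                   le_test_bit(15, bitmap));
            return 0;
    }
)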
On Fri, Jul 15, 2016 at 12:34:10PM +0530, Chandan Rajendra wrote:
> On Thursday, July 14, 2016 07:47:04 PM Chris Mason wrote:
> > On 07/14/2016 07:31 PM, Omar Sandoval wrote:
> > > From: Omar Sandoval
> > >
> > > So it turns out that the free space tree bitmap handling has always been
> > > broken on big-endian systems. Totally my bad.
On 07/07/2016 06:24 AM, Gabriel C wrote:
Hi,
while running Thunderbird on Linux 4.6.3 and 4.7.0-rc6 (didn't test
other versions)
I triggered the following:
[ 6393.305675] WARNING: CPU: 6 PID: 5870 at fs/btrfs/inode.c:9306
btrfs_destroy_inode+0x22e/0x2a0 [btrfs]
Every time I've reproduce
On 07/15/2016 03:35 PM, Chris Mason wrote:
On 07/07/2016 06:24 AM, Gabriel C wrote:
Hi,
while running Thunderbird on Linux 4.6.3 and 4.7.0-rc6 (didn't test
other versions)
I triggered the following:
[ 6393.305675] WARNING: CPU: 6 PID: 5870 at fs/btrfs/inode.c:9306
btrfs_destroy_inode+0x2
Though I’m not a hardcore storage system professional:
What disk are you using? There are two types:
1. SMR managed by the device firmware. BTRFS sees that as a normal block device …
the problems you get are not related to BTRFS itself …
2. SMR managed by the host system; BTRFS still sees this as a block
Hello all,
If I create three subvolumes like so:
# btrfs subvolume create a
# btrfs subvolume snapshot a b
# btrfs subvolume snapshot b c
I get a parent-child relationship which can be determined like so:
# btrfs subvolume list -uq /home/ |grep [abc]$
parent_uuid - uuid 0e5f473a-d9e5-144a-8f49-
Hello all,
We do btrfs subvolume snapshots over time for backups. I would like to
traverse the files in the subvolumes and find the total unique chunk count
to calculate total space for a set of subvolumes.
This sounds kind of like the beginning of what a deduplicator would do,
but I just wan
No answer here, but mate, if you are involved in anything that will provide a
more automated backup tool for btrfs, you've got a lot of silent people rooting
for you.
> On 16 Jul 2016, at 00:21, Eric Wheeler wrote:
>
> Hello all,
>
> We do btrfs subvolume snapshots over time for backups. I wou
On Fri, Jul 15, 2016 at 04:21:31PM -0700, Eric Wheeler wrote:
> We do btrfs subvolume snapshots over time for backups. I would like to
> traverse the files in the subvolumes and find the total unique chunk count
> to calculate total space for a set of subvolumes.
>
> This sounds kind of like th
On Fri, Jul 15, 2016 at 04:21:31PM -0700, Eric Wheeler wrote:
> Hello all,
>
> We do btrfs subvolume snapshots over time for backups. I would like to
> traverse the files in the subvolumes and find the total unique chunk count
> to calculate total space for a set of subvolumes.
btrfs fi du
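(That is, "btrfs filesystem du"; with -s it prints one summary row per
argument, e.g., with illustrative paths:

# btrfs filesystem du -s /backups/snap-*

The "Exclusive" column is space unique to each subvolume and "Set shared" is
data shared within the set, which is the unique-versus-shared split asked
about above.)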