On 03/01/2018 12:42 PM, kbuild test robot wrote:
Hi Anand,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on btrfs/next]
[also build test ERROR on v4.16-rc3 next-20180228]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
https://github.com/0day-ci/linux
On 03/01/2018 10:47 AM, Qu Wenruo wrote:
Signed-off-by: Qu Wenruo
Looks good to me.
Reviewed-by: Su Yue
---
check/mode-lowmem.c | 8
1 file changed, 8 insertions(+)
diff --git a/check/mode-lowmem.c b/check/mode-lowmem.c
index 62bcf3d2e126..44c58163f8f7 100644
--- a/check/mode
On 03/01/2018 10:47 AM, Qu Wenruo wrote:
The kernel doesn't allow inline extents equal to or larger than 4K.
For inline extents larger than 4K, __btrfs_drop_extents() can return
-EOPNOTSUPP and cause an unexpected error.
Check it in original mode.
Signed-off-by: Qu Wenruo
Looks good to me.
Review
Signed-off-by: Qu Wenruo
---
.../016-invalid-large-inline-extent/test.sh | 22 ++
1 file changed, 22 insertions(+)
create mode 100755 tests/convert-tests/016-invalid-large-inline-extent/test.sh
diff --git a/tests/convert-tests/016-invalid-large-inline-extent/test.sh
Signed-off-by: Qu Wenruo
---
check/mode-lowmem.c | 8
1 file changed, 8 insertions(+)
diff --git a/check/mode-lowmem.c b/check/mode-lowmem.c
index 62bcf3d2e126..44c58163f8f7 100644
--- a/check/mode-lowmem.c
+++ b/check/mode-lowmem.c
@@ -1417,6 +1417,7 @@ static int check_file_extent(str
The kernel doesn't support dropping a range inside an inline extent, and
prevents that from happening by limiting the max inline extent size to
min(max_inline, sectorsize - 1) in cow_file_range_inline().
However, btrfs-progs only inherits the BTRFS_MAX_INLINE_DATA_SIZE() macro,
which doesn't have the sectorsize che
The kernel doesn't allow inline extents equal to or larger than 4K.
For inline extents larger than 4K, __btrfs_drop_extents() can return
-EOPNOTSUPP and cause an unexpected error.
Check it in original mode.
Signed-off-by: Qu Wenruo
---
check/main.c | 4
check/mode-original.h | 1 +
2 file
The kernel doesn't support dropping an extent inside an inline extent.
And the kernel limits inline extents to just below sectorsize, so also
limit them in btrfs-progs.
This fixes unexpected -EOPNOTSUPP error from __btrfs_drop_extents() on
converted btrfs.
Fixes: 806528b8755f ("Add Yan Zheng's ext3->btrfs
On 2018-03-01 02:36, Filipe Manana wrote:
> On Wed, Feb 28, 2018 at 5:50 PM, David Sterba wrote:
>> On Wed, Feb 28, 2018 at 05:43:40PM +0100, peteryuchu...@gmail.com wrote:
>>> On my laptop, which has just been switched to BTRFS, the root partition
>>> (a BTRFS partition inside an encrypted LVM
On 2018-02-28 23:50, Christoph Anton Mitterer wrote:
> Hey Qu
>
> Thanks for still looking into this.
> I'm still in the recovery process (and there are other troubles at the
> university where I work, so everything will take me some time), but I
> have made a dd image of the broken fs, before
On Wed, Feb 28, 2018 at 04:06:40PM +, Filipe Manana wrote:
> On Thu, Jan 25, 2018 at 6:02 PM, Liu Bo wrote:
> > The highest objectid, which is assigned to a new inode, is decided at
> > the time of initializing fs roots. However, in cases where log replay
> > gets processed, the btree which fs
On 2018-02-28 14:54, Duncan wrote:
Austin S. Hemmelgarn posted on Wed, 28 Feb 2018 14:24:40 -0500 as
excerpted:
I believe this effect is what Austin was referencing when he suggested
the defrag, tho defrag won't necessarily /entirely/ clear it up. One
way to be /sure/ it's cleared up would be
Austin S. Hemmelgarn posted on Wed, 28 Feb 2018 14:24:40 -0500 as
excerpted:
>> I believe this effect is what Austin was referencing when he suggested
>> the defrag, tho defrag won't necessarily /entirely/ clear it up. One
>> way to be /sure/ it's cleared up would be to rewrite the entire file,
>
On 2018-02-28 14:09, Duncan wrote:
vinayak hegde posted on Tue, 27 Feb 2018 18:39:51 +0530 as excerpted:
I am using btrfs, but I am seeing du -sh and df -h showing a huge size
difference on ssd.
mount:
/dev/drbd1 on /dc/fileunifier.datacache type btrfs
(rw,noatime,nodiratime,flushoncommit,disc
vinayak hegde posted on Tue, 27 Feb 2018 18:39:51 +0530 as excerpted:
> I am using btrfs, but I am seeing du -sh and df -h showing a huge size
> difference on ssd.
>
> mount:
> /dev/drbd1 on /dc/fileunifier.datacache type btrfs
>
(rw,noatime,nodiratime,flushoncommit,discard,nospace_cache,recovery,
On Wed, 2018-02-28 at 18:36 +, Filipe Manana wrote:
> On Wed, Feb 28, 2018 at 5:50 PM, David Sterba
> wrote:
> > On Wed, Feb 28, 2018 at 05:43:40PM +0100, peteryuchu...@gmail.com
> > wrote:
> > > On my laptop, which has just been switched to BTRFS, the root
> > > partition
> > > (a BTRFS parti
On Wed, Feb 28, 2018 at 5:50 PM, David Sterba wrote:
> On Wed, Feb 28, 2018 at 05:43:40PM +0100, peteryuchu...@gmail.com wrote:
>> On my laptop, which has just been switched to BTRFS, the root partition
>> (a BTRFS partition inside an encrypted LVM. The drive is an NVMe) is
>> re-mounted as read-o
On Wed, Feb 28, 2018 at 05:43:40PM +0100, peteryuchu...@gmail.com wrote:
> On my laptop, which has just been switched to BTRFS, the root partition
> (a BTRFS partition inside an encrypted LVM. The drive is an NVMe) is
> re-mounted as read-only a few minutes after boot.
>
> Trace:
By any chance, ar
On Wed, Feb 28, 2018 at 9:01 AM, vinayak hegde wrote:
> I ran both full defragment and balance, but it didn't help.
Showing the same information immediately after full defragment would be helpful.
> My created and accounting usage files match the du -sh output.
> But I am not getting why btr
From: Filipe Manana
Test that when a fsync journal/log exists, if we rename a special file
(fifo, symbolic link or device), create a hard link for it with its old
name and then commit the journal/log, if a power loss happens the
filesystem will not fail to replay the journal/log when it is mounte
From: Filipe Manana
Test that if we have a file with two hard links in the same parent
directory, then remove one of the links, create a new file in the same
parent directory with the name of the removed link, fsync the new
file and have a power loss, mounting the filesystem succeeds.
This test
Hi,
On my laptop, which has just been switched to BTRFS, the root partition
(a BTRFS partition inside an encrypted LVM. The drive is an NVMe) is
re-mounted as read-only a few minutes after boot.
Trace:
[ 199.974591] [ cut here ]
[ 199.974593] BTRFS: Transaction aborted (
On Thu, Jan 25, 2018 at 6:02 PM, Liu Bo wrote:
> The highest objectid, which is assigned to a new inode, is decided at
> the time of initializing fs roots. However, in cases where log replay
> gets processed, the btree which fs root owns might be changed, so we
> have to search it again for the hig
From: Filipe Manana
If in the same transaction we rename a special file (fifo, character/block
device or symbolic link), create a hard link for it with its old name,
then sync the log, we will end up with a log that cannot be replayed, and
when attempting to replay it, an EEXIST error is retu
From: Filipe Manana
If we have a file with 2 (or more) hard links in the same directory,
remove one of the hard links, create a new file (or link an existing file)
in the same directory with the name of the removed hard link, and then
finally fsync the new file, we end up with a log that fails to
On Wed, Feb 28, 2018 at 2:26 PM, Shyam Prasad N wrote:
> Hi,
>
> Thanks for the reply.
>
>> * `df` calls `statvfs` to get its data, which tries to count physical
>> allocation accounting for replication profiles. In other words, data in
>> chunks with the dup, raid1, and raid10 profiles gets cou
Hi,
Thanks for the reply.
> * `df` calls `statvfs` to get its data, which tries to count physical
> allocation accounting for replication profiles. In other words, data in
> chunks with the dup, raid1, and raid10 profiles gets counted twice, data in
> raid5 and raid6 chunks gets counted with a
Add a testcase for the false alert of data extent backref lost with the
extent offset.
The image can be reproduced by the following commands:
--
dev=~/test.img
mnt=/mnt/btrfs
umount $mnt &> /dev/null
fallocate -l 128M $dev
mkfs.btrfs $dev
mount $dev $mnt
for i in `seq 1 10`; do
xfs_io
Btrfs lowmem check reports the following false alert:
--
ERROR: file extent[267 2162688] root 256 owner 5 backref lost
--
The file extent is in the leaf which is shared by file tree 256 and fs
tree.
--
leaf 30605312 items 46 free space 4353 generation 7 owner 5
..
item 45 k
Instead of the disk_bytenr and disk_num_bytes of the extent_item which the
file extent references, we should output the objectid and offset of the
file extent. Also, since the leaf may be shared by file trees, we should
print the objectid of the root and the owner of the leaf.
Fixes: b0d360b541f0 ("bt
Hi Christoph,
Since I'm still digging into the unexpected corruption (although without
much progress yet), would you please describe how the corruption happened?
In my current investigation, btrfs is indeed bullet-proof (unlike my
original assumption), using the newer dm-log-writes tool.
But free space ca