An unknown kernel bug can leave an inode's nbytes out of sync with its
file extent updates.
But it's quite easy to fix in btrfs-progs anyway.
So fix it by adding a new function, repair_inode_nbytes(), which uses
the found_size recorded in inode_record.
Reported-by: Christian cdys...@gmail.com
Add repair function for I_ERR_FILE_WRONG_NBYTES and a test case for it.
Rebased to devel branch.
The second patch contains binary data, so I created a pull request for it.
https://github.com/kdave/btrfs-progs/pull/7
Qu Wenruo (2):
btrfs-progs: fsck: Add repair function for
On Tue, Jun 30, 2015 at 10:26:18AM +0800, Qu Wenruo wrote:
To Chris:
Would you consider merging this patchset for the late 4.2 merge window?
If it's OK to merge it into a late 4.2 rc, we'll start our tests and send a
pull request afterwards, ETA this Friday or next Monday.
I know normally we
Hello,
At the bottom of this email are the results of the latest
chunk-recover. I only included one example of the output that was
printed prior to the summary information, but it went up to the end of
my screen buffer and beyond.
So it looks like the command executed properly when none of the
On Wed, Apr 29, 2015 at 10:29:04AM +0800, Qu Wenruo wrote:
The new btrfs_qgroup_account_extents() function should be called in
btrfs_commit_transaction(), and it will update all the qgroups according
to delayed_ref_root->dirty_extent_root.
The new function can handle both normal operation
Chris Mason wrote on 2015/07/02 08:42 -0400:
On Tue, Jun 30, 2015 at 10:26:18AM +0800, Qu Wenruo wrote:
To Chris:
Would you consider merging this patchset for the late 4.2 merge window?
If it's OK to merge it into a late 4.2 rc, we'll start our tests and send a
pull request afterwards, ETA this
The empty file was introduced by a careless 'git add'; remove it.
Reported-by: David Sterba dste...@suse.cz
Signed-off-by: Qu Wenruo quwen...@cn.fujitsu.com
---
fs/btrfs/extent-tree.h | 0
1 file changed, 0 insertions(+), 0 deletions(-)
delete mode 100644 fs/btrfs/extent-tree.h
diff --git
Have you seen this article?
I think the interesting part for you is the "balance cannot run
because the filesystem is full" heading.
http://marc.merlins.org/perso/btrfs/post_2014-05-04_Fixing-Btrfs-Filesystem-Full-Problems.html
On Fri, Jul 3, 2015 at 12:32 AM, Rich Rauenzahn rraue...@gmail.com
Yes, I tried that -- and adding the loopback device.
# btrfs device add /dev/loop1 /
Performing full device TRIM (5.00GiB) ...
# btrfs fi show /
Label: 'centos7' uuid: 35f0ce3f-0902-47a3-8ad8-86179d1f3e3a
Total devices 3 FS bytes used 17.13GiB
devid 1 size 111.11GiB used
David Sterba wrote on 2015/07/02 16:43 +0200:
On Wed, Apr 29, 2015 at 10:29:04AM +0800, Qu Wenruo wrote:
The new btrfs_qgroup_account_extents() function should be called in
btrfs_commit_transaction(), and it will update all the qgroups according
to delayed_ref_root->dirty_extent_root.
The new
Running on CentOS 7 ... / got full; I removed the files, but it still
thinks it is full. I've tried following the FAQ, even adding a
loopback device during the rebalance.
# btrfs fi show /
Label: 'centos7' uuid: 35f0ce3f-0902-47a3-8ad8-86179d1f3e3a
Total devices 2 FS bytes used 24.27GiB
On Thu, Jul 2, 2015 at 8:49 AM, Donald Pearson
donaldwhpear...@gmail.com wrote:
Which is curious, because this is device id 2, where previously the
complaint was about device id 1. So can I believe dmesg about which
drive is actually the issue, or is the drive that's printed in dmesg
just
On Thu, Jul 2, 2015 at 8:49 AM, Donald Pearson
donaldwhpear...@gmail.com wrote:
I do see plenty of complaints about the sdg drive (previously sde) in
/var/log/messages from the 28th which is when I started noticing
issues. Nothing is jumping out at me claiming the btrfs is taking
action but
Hi.
This is on a btrfs created and used with a 4.0 kernel.
Not much was done on it, apart from send/receive snapshots from another
btrfs (with -p).
Some of the older snapshots (that were used as parents before) have
been removed in the meantime.
Now a btrfs check gives this:
# btrfs check
I'll preface this with the fact that I'm just a user and am only posing a
question for a possible enhancement to btrfs.
I'm quite sure it isn't currently allowed, but would it be possible to set a
failing device as a seed instead of kicking it out of a multi-device
filesystem? This would make
Unfortunately btrfs-image fails with "couldn't read chunk tree".
btrfs restore complains that every device is missing except the one
that you specify when executing the command. Multiple devices as a
parameter aren't an option. Specifying /dev/disk/by-uuid/<uuid> claims
that all devices are missing.
I
On Thu, Jul 2, 2015 at 12:19 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
Unfortunately btrfs-image fails with "couldn't read chunk tree".
btrfs restore complains that every device is missing except the one
that you specify when executing the command. Multiple devices as a
parameter aren't
I think it is. I have another raid5 pool that I've created to test
the restore function on, and it worked.
On Thu, Jul 2, 2015 at 1:26 PM, Chris Murphy li...@colorremedies.com wrote:
On Thu, Jul 2, 2015 at 12:19 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
Unfortunately btrfs-image
That is correct. I'm going to rebalance my raid5 pool as raid6 and
re-test just because.
On Thu, Jul 2, 2015 at 1:37 PM, Chris Murphy li...@colorremedies.com wrote:
On Thu, Jul 2, 2015 at 12:32 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
I think it is. I have another raid5 pool that
Yes it works with raid6 as well.
[root@san01 btrfs-progs]# ./btrfs fi show
Label: 'rockstor_rockstor' uuid: 08d14b6f-18df-4b1b-a91e-4b33e7c90c29
Total devices 1 FS bytes used 19.25GiB
devid 1 size 457.40GiB used 457.40GiB path /dev/sdt3
warning, device 4 is missing
warning,
On Thu, Jul 2, 2015 at 12:32 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
I think it is. I have another raid5 pool that I've created to test
the restore function on, and it worked.
So you have all devices for this raid6 available, and yet when you use
restore, you get missing device