On Tue, Aug 25, 2015 at 02:00:32PM +0800, Qu Wenruo wrote:
> Thanks for all your work and patience, Marc,
Haha, no problem, you're doing a lot more work than I am :)
> Good to know there is backup.
> But as there is no higher generation one, I'd assume that's not a
> normal transaction id failure
We forgot to free raid_map in raid56's map_bio.
This patch adds it.
Signed-off-by: Zhao Lei
---
disk-io.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/disk-io.c b/disk-io.c
index fdcfd6d..46a5a46 100644
--- a/disk-io.c
+++ b/disk-io.c
@@ -428,6 +428,7 @@ int write_and_map_eb(struct btrfs_tra
Marc MERLIN wrote on 2015/08/24 22:28 -0700:
On Tue, Aug 25, 2015 at 10:51:00AM +0800, Qu Wenruo wrote:
Patches sent and CCed to you.
Please try the two patches and see what's new.
This time, I think the output will be much larger.
Indeed.
However the bad news is that gen 39538 is the highest.
On Tue, Aug 25, 2015 at 10:51:00AM +0800, Qu Wenruo wrote:
> Patches sent and CCed to you.
>
> Please try the two patches and see what's new.
> This time, I think the output will be much larger.
Indeed.
However the bad news is that gen 39538 is the highest.
Should I force btrfsck to work with an
From: Anand Jain
This test case tests whether device delete works with a
failed (EIO) source device. EIO errors are achieved
using a DM device.
This test needs the following btrfs-progs and btrfs
kernel patches:
btrfs-progs: device delete to accept devid
Btrfs: device delete by devid
How
From: Anand Jain
This test case confirms that replace works with a
failed (EIO) replacing source device. The EIO condition is
achieved using a DM device.
Signed-off-by: Anand Jain
Reviewed-by: Filipe Manana
---
v7->v8:
. In line with the patch 1/3 v8 changes, use dmerror_mount()
v6
From: Anand Jain
Controlled EIO from the device is achieved using a DM device.
Helper functions are in common/dmerror.
Broadly, the steps include calling _dmerror_init().
_dmerror_init() uses SCRATCH_DEV to create a dm linear device and assigns
DMERROR_DEV to /dev/mapper/error-test.
When tes
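The flow described above can be sketched with plain dmsetup tables. This is a minimal illustration, not the actual xfstests helper code; the device path and sector count are placeholder assumptions:

```shell
# Sketch of the dm tables _dmerror_init() would hand to dmsetup.
# SCRATCH_DEV and SECTORS are placeholder assumptions.
SCRATCH_DEV=/dev/sdX
SECTORS=2097152   # device size in 512-byte sectors

# Normal pass-through table: all I/O goes to SCRATCH_DEV.
linear_table="0 $SECTORS linear $SCRATCH_DEV 0"
# Error table: every I/O to the mapped device fails with EIO.
error_table="0 $SECTORS error"

echo "$linear_table"
echo "$error_table"

# Real usage (needs root), roughly:
#   dmsetup create error-test --table "$linear_table"
#   ... run the test against /dev/mapper/error-test ...
#   dmsetup suspend error-test
#   dmsetup load error-test --table "$error_table"
#   dmsetup resume error-test   # now I/O returns EIO
```

Switching the loaded table between `linear` and `error` is what lets the test flip a healthy device into a failing one without touching the real hardware.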
Dave,
On 08/21/2015 05:45 AM, Dave Chinner wrote:
On Thu, Aug 20, 2015 at 03:09:05PM +0800, Anand Jain wrote:
(thanks for the off-ML emails from the people who helped me
to understand).
Dave,
looks like you are suggesting something like..
---
+_dmerror_mount_options()
+{
+
Patch 1/3 provides a framework to use a dmerror device to
test volume manager operations.
Patch 2/3 adds a test case for replacing a dmerror device.
Patch 3/3 adds a test case for deleting a dmerror device.
This is v8 of this patch set.
This version mainly addresses Dave's latest review
Marc MERLIN wrote on 2015/08/24 07:10 -0700:
On Mon, Aug 24, 2015 at 01:11:26PM +0800, Qu Wenruo wrote:
So, my last bet will be, using "btrfs-find-root -a" to find the root
with highest generation, and use the new root to exec "btrfsck -b
".
The latest btrfs-find-root would output possible
[Bug]
When given the '-a' option, btrfs-find-root outputs all possible tree
roots except the exact match.
[Reason]
Result printing skips the exact match, as it is normally shown
before the alternative ones.
But when '-a' is given, that's not the case.
[Fix]
Just show the exact match
[BUG]
btrfs-find-root may not output the desired result: since
search_extent_cache() may return a result which doesn't cover the desired
range, the generation cache can be screwed up if a higher generation tree
root is found before a lower generation tree root.
For example:
===
./btrfs-find-root /dev/sd
old_len is used to store the return value of btrfs_item_size_nr().
The return value of btrfs_item_size_nr() is of type u32.
To improve code correctness and avoid mixing signed and unsigned
integers I've changed old_len to be of type u32 as well.
Signed-off-by: Alexandru Moise <00moses.alexande...@
It is useless.
Signed-off-by: Zhao Lei
---
fs/btrfs/scrub.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index bdf44c9..daf75ac 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -2459,8 +2459,7 @@ static void scrub_block_com
We don't need to pass so many arguments for rechecking sblock now;
this patch cleans them up.
Signed-off-by: Zhao Lei
---
fs/btrfs/scrub.c | 28
1 file changed, 8 insertions(+), 20 deletions(-)
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 05dab4a..bdf44c9 100644
--
scrub_setup_recheck_block() doesn't set up all necessary fields of
sblock_to_check, for historical reasons.
So the current code needs extra arguments in several functions,
and extra local variables, just to pass these missing values to
where they are needed.
This patch sets up the above fields of sblock_to_check in
We can use the existing scrub_checksum_data() and scrub_checksum_tree_block()
for scrub_recheck_block_checksum(), instead of writing duplicated code.
It is based on the patch:
setup all fields for sblock_to_check
which makes this merge possible.
Signed-off-by: Zhao Lei
---
fs/btrfs/scrub.c | 126 +
We should reset sblock->xxx_error stats before calling
scrub_recheck_block_checksum().
The current code runs correctly only because all sblocks are allocated by
k[cz]alloc(), so the error stats start out zeroed.
Signed-off-by: Zhao Lei
---
fs/btrfs/scrub.c | 4 ++++
1 file changed, 4 insertions(+)
di
scrub_setup_recheck_block() doesn't set up all necessary fields of
sblock_to_check, for historical reasons.
So the current code needs extra arguments in several functions,
and extra local variables, just to pass these missing values to
where they are needed.
[PATCH 1/5] sets up the above fields of sblock_to_check,
On Mon, Aug 24, 2015 at 01:11:26PM +0800, Qu Wenruo wrote:
> So, my last bet will be, using "btrfs-find-root -a" to find the root
> with highest generation, and use the new root to exec "btrfsck -b
> ".
> The latest btrfs-find-root would output possible tree roots in
> descending order of their generation
FYI, unmounted the filesystem and it checked clean:
# btrfs check /dev/sdd
Checking filesystem on /dev/sdd
UUID: 550c0f77-8f75-40f3-a64e-42c87a0c8e8d
checking extents
checking free space cache
checking fs roots
checking csums
checking root refs
found 116808409085477 bytes used err is 0
total csum b
Hi Ted,
On Sat 15-08-15 09:54:22, Theodore Ts'o wrote:
> On Wed, Aug 12, 2015 at 11:14:11AM +0200, Michal Hocko wrote:
> > > Is this "if (!committed_data) {" check now dead code?
> > >
> > > I also see other similar suspected dead sites in the rest of the series.
> >
> > You are absolutely right
The following call trace is seen when generic/095 test is executed,
WARNING: CPU: 3 PID: 2769 at
/home/chandan/code/repos/linux/fs/btrfs/inode.c:8967
btrfs_destroy_inode+0x284/0x2a0()
Modules linked in:
CPU: 3 PID: 2769 Comm: umount Not tainted 4.2.0-rc5+ #31
Hardware name: QEMU Standard PC (i44
Hi Guys,
I am running a raid1 btrfs with 4 disks. One of my disks died the other
day, so I replaced it with a new one. After that I tried to delete the
failed (now missing) disk. This resulted in some, but not much, IO and
messages like these:
kernel: BTRFS info (device sdd): found 9 extent
If there is more than one fs_devices in the fs_uuids list (as with
mkfs.btrfs), we need to close them all before exiting.
This function is for the above purpose.
Signed-off-by: Zhao Lei
---
volumes.c | 11 +++
volumes.h | 1 +
2 files changed, 12 insertions(+)
diff --git a/volumes.c b/volumes.c
index f7462c5..ca50
mkfs creates more than one fs_devices in fs_uuids:
1: one is for the file system being created
2: others are created in test_dev_for_mkfs() to check the mount point:
test_dev_for_mkfs() -> ... -> btrfs_scan_one_device()
The current code only closes 1 above; this patch closes 2 above.
A similar problem exists i
Ivan posted on Mon, 24 Aug 2015 11:52:08 +0800 as excerpted:
> I'm trying out RAID5 to understand its space usage. First off, I've 3
> devices of 2GB each, in RAID5. Old school RAID5 tells me I've 4GB of
> usable space. Actual fact: I've about 3.5GB, until it tells me I'm out
> of space. This is u