Hi Mark,
Label: 'Root'  uuid: d71404d4-468e-47d5-8f06-3b65fa7776aa
        Total devices 2 FS bytes used 7.46GiB
        devid    1 size 9.31GiB used 8.06GiB path /dev/sdh6
        devid    3 size 9.31GiB used 8.06GiB path /dev/disk/by-uuid/d71404d4-468e-47d5-8f06-3b65fa7776aa
I hope thi
Hi Xavier,
Thanks for the report.
I got this reproduced: it's a very corner case. It depends on the
device path given in the subsequent subvol mounts. The fix appears
to be outside of this patch at the moment, and I am digging into
whether we need to normalize the device path before using it
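To make that concrete, here is a minimal user-space sketch of what
normalizing a device path could mean: resolving a /dev/disk/by-uuid
symlink to its canonical node with realpath(3). This is only an
illustration of the idea, not the actual fix.

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* Canonicalize a device path so that /dev/disk/by-uuid/<uuid> and
 * /dev/sdh6 compare equal when they point at the same device. */
int main(int argc, char **argv)
{
        char resolved[PATH_MAX];

        if (argc != 2) {
                fprintf(stderr, "usage: %s <device-path>\n", argv[0]);
                return 1;
        }
        /* realpath() follows the by-uuid symlink and returns the
         * canonical device node, e.g. /dev/sdh6. */
        if (!realpath(argv[1], resolved)) {
                perror("realpath");
                return 1;
        }
        printf("%s -> %s\n", argv[1], resolved);
        return 0;
}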
Summary: When a btrfs subvolume is mounted with -o subvol, and a nested ro
subvol/snapshot is created, btrfs send returns with an error. If the top level
(id 5) is mounted instead, the send command succeeds.
3.17.0-0.rc4.git0.1.fc22.i686
Btrfs v3.16
This may also be happening on x86_64, and thi
Hi,
On a standard Ubuntu 14.04 with an encrypted (cryptsetup) /home as a btrfs
subvolume we have the following results:
3.17-rc2: OK.
3.17-rc3 and 3.17-rc4: /home fails to mount on boot. If one tries mount
-a, the system reports that the partition is already mounted according
to mtab.
On
On 09/12/2014 03:18 PM, Josef Bacik wrote:
> One problem that has plagued us is that a user will use up all of his
> space with data, remove a bunch of that data, and then try to create a
> bunch of small files and run out of space. This happens because all the
> chunks were allocated for
One problem that has plagued us is that a user will use up all of his space with
data, remove a bunch of that data, and then try to create a bunch of small files
and run out of space. This happens because all the chunks were allocated for
data since the metadata requirements were so low. But now
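For what it's worth, a toy model of that pattern (made-up chunk counts,
not btrfs's real allocator) shows why deleting data does not help a later
metadata allocation:

#include <stdio.h>

int main(void)
{
        int total_chunks = 10;          /* say, 1GiB chunks */
        int meta_chunks = 1;
        int data_chunks;

        /* Filling the disk with data claims every remaining chunk. */
        data_chunks = total_chunks - meta_chunks;

        /* Deleting files frees space *inside* the data chunks, but the
         * chunks themselves stay allocated to data. */
        int unallocated = total_chunks - data_chunks - meta_chunks;

        /* Creating many small files now needs a new metadata chunk. */
        if (unallocated == 0)
                printf("no chunk left for metadata -> ENOSPC\n");
        return 0;
}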
Hi Linus,
My for-linus branch has some fixes for the next rc:
git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git for-linus
Filipe is doing a careful pass through fsync problems, and these are the
fixes so far. I'll have one more for rc6 that we're still testing.
My big commit
On 09/12/2014 06:43 AM, Miao Xie wrote:
> This patchset implements the data repair function for the direct read; it
> is implemented like buffered read:
> 1. When we find the data is not right, we try to read the data from the
> other mirror.
> 2. When the io on the mirror ends, we will insert the
Hello Gui,
On Thursday, 4 September 2014 at 11:50:14, Marc Dietrich wrote:
> On Thursday, 4 September 2014 at 11:00:55, Gui Hecheng wrote:
> > Hi Zooko, Marc,
> >
> > Firstly, thanks for your backtrace info, Marc.
> > Sorry for the late reply; I have been offline these days.
> > For the restore prob
Dear List,
I tried to remove a device from a 12-disk RAID10 array, but it failed with a
"no space left" error and the system crashed. After a reset I could only
mount the array in degraded mode because the device was marked as missing.
I've tried a replace command, but it said that it does not support
shane-kernel posted on Fri, 12 Sep 2014 01:57:37 -0700 as excerpted:
[Last question first as it's easy to answer...]
> Finally for those using this sort of setup in production, is running
> btrfs on top of mdraid the way to go at this point?
While the latest kernel and btrfs-tools have removed t
On Fri, Sep 12, 2014 at 01:57:37AM -0700, shane-ker...@csy.ca wrote:
> Hi,
> I am testing BTRFS in a simple RAID1 environment. Default mount
> options and data and metadata are mirrored between sda2 and sdb2. I
> have a few questions and a potential bug report. I don't normally
> have console acce
We need the real mirror number for RAID0/5/6 when reading data; otherwise,
when a read error happens, we would pass 0 as the number of the mirror on
which the io error happened. That is wrong and would cause the filesystem
to read the data from the corrupted mirror again.
Signed-off-by: Miao Xie
---
Changelog v1 -> v4
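For readers following along, a toy illustration of why the real mirror
number matters; the next-mirror formula below is a simplification, not
the kernel's exact retry logic.

#include <stdio.h>

#define NUM_MIRRORS 2   /* mirrors are numbered from 1, as in btrfs */

/* Pick the mirror to try after a failure on 'failed_mirror'. */
static int next_mirror(int failed_mirror)
{
        return failed_mirror % NUM_MIRRORS + 1;
}

int main(void)
{
        /* Correct: a failure on mirror 1 retries mirror 2. */
        printf("failed on mirror 1 -> retry mirror %d\n", next_mirror(1));
        /* Wrong: reporting 0 retries mirror 1, which may be the very
         * copy that is corrupted. */
        printf("reported as 0      -> retry mirror %d\n", next_mirror(0));
        return 0;
}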
The current code loads the checksum data several times when we split a
whole direct read io because of the limit of the raid stripe, which makes
us search the csum tree several times. In fact, this just wastes time and
makes the contention on the csum tree root more serious. This patch
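A compilable sketch of the idea with made-up types (the kernel interfaces
look nothing like this): load the checksums once for the whole direct read
and hand each split a slice of the array.

#include <stdint.h>
#include <stdio.h>

struct dio_read {
        const uint32_t *csums;  /* one csum per sector, loaded once */
        unsigned int nr_sectors;
};

/* A sub-bio covering sectors [first, first + n) just points into the
 * shared array: no repeated csum tree search, no extra contention. */
static const uint32_t *sub_bio_csums(const struct dio_read *dio,
                                     unsigned int first)
{
        return dio->csums + first;
}

int main(void)
{
        const uint32_t csums[4] = { 0xaa, 0xbb, 0xcc, 0xdd };
        struct dio_read dio = { csums, 4 };

        /* The second split starts at sector 2 of the original read. */
        printf("first csum of split: 0x%x\n",
               (unsigned)sub_bio_csums(&dio, 2)[0]);
        return 0;
}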
We could not use clean_io_failure in the direct IO path because it got the
filesystem information from the page structure, but the pages in a direct
IO bio don't have the filesystem information in their structure. So we need
to modify it and pass all the information it needs by parameters.
Signed-off-
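An illustrative before/after of that interface change; the names and
signatures here are hypothetical, not the exact kernel ones.

/* Before: the helper digs the filesystem information out of the page,
 * which only works for pages that live in the page cache. */
struct page;
int clean_io_failure_from_page(struct page *page, unsigned long long start);

/* After: the caller passes everything the helper needs explicitly,
 * so the pages of a direct IO bio (never in the page cache) work too. */
struct inode;
struct io_failure_record;
int clean_io_failure(struct inode *inode, struct io_failure_record *rec,
                     unsigned long long start);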
After the data is written successfully, we should clean up the read failure
record in that range, because:
- If we set data COW for the file, the range that the failure record pointed
  to is mapped to a new place, so it is invalid.
- If we set no data COW for the file, and if there is no error during
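A minimal user-space sketch of that cleanup (hypothetical types, not the
kernel's failure tree): once a write over a range succeeds, every
overlapping read-failure record is dropped.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct failure_rec {
        uint64_t start, end;            /* range the failure covered */
        struct failure_rec *next;
};

/* Drop every record overlapping a range that has just been written
 * successfully: the record describes data that either moved (COW) or
 * was overwritten with good data (no-COW), so it is stale. */
static void clean_failure_records(struct failure_rec **head,
                                  uint64_t start, uint64_t end)
{
        struct failure_rec **p = head;

        while (*p) {
                struct failure_rec *rec = *p;

                if (rec->start < end && rec->end > start) {
                        *p = rec->next;         /* unlink and free */
                        free(rec);
                } else {
                        p = &rec->next;
                }
        }
}

int main(void)
{
        struct failure_rec *head = malloc(sizeof(*head));

        head->start = 4096;
        head->end = 8192;
        head->next = NULL;

        clean_failure_records(&head, 0, 16384); /* write covers it */
        printf("records left: %s\n", head ? "some" : "none");
        return 0;
}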
The data repair function for direct read will be implemented later, and
some code in bio_readpage_error will be reused, so split bio_readpage_error
into several functions that can be used in the direct read repair later.
Signed-off-by: Miao Xie
---
Changelog v1 -> v4:
- None
---
fs/btrfs/extent_io.
Signed-off-by: Miao Xie
---
Changelog v1 -> v4:
- None
---
fs/btrfs/inode.c | 102 +--
1 file changed, 47 insertions(+), 55 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index af304e1..e8139c6 100644
--- a/fs/btrfs/inode.c
+++ b
This patch implements the data repair function for when a direct read fails.
The detail of the implementation is:
- When we find the data is not right, we try to read the data from the other
  mirror.
- When the io on the mirror ends, we will insert the endio work into the
  dedicated btrfs workqueue, not co
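A schematic of that workqueue step using the stock Linux workqueue API
(a kernel-context sketch of the idea only, not the actual btrfs repair
code):

/* Completions of the mirror re-read are deferred to a dedicated
 * workqueue, created once with e.g. alloc_workqueue(), so they can
 * never wait behind the original endio work that is still blocked
 * on this very read. */
#include <linux/workqueue.h>

static struct workqueue_struct *repair_wq;      /* dedicated queue */

struct repair_work {
        struct work_struct work;
        /* failed range, mirror number, ... */
};

static void repair_fn(struct work_struct *w)
{
        /* verify the re-read data and finish the repair here */
}

/* Called from the mirror read's bio endio. */
static void queue_repair(struct repair_work *rw)
{
        INIT_WORK(&rw->work, repair_fn);
        queue_work(repair_wq, &rw->work);       /* not the common endio wq */
}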
Direct IO splits the original bio into several sub-bios because of the limit
of the raid stripe, and the filesystem will wait for all sub-bios and then
run the final end io process.
But it is very hard to implement the data repair when a dio read failure
happens, because at the final end io function, we didn't
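As a rough sketch of the missing piece (illustrative types, nothing from
the kernel), per-sub-bio bookkeeping that survives until repair time might
look like this:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical context kept by each sub-bio: with this, a failure in
 * one split can be retried with the right offset and mirror right
 * away, instead of at the final end io, where the information about
 * the individual splits is gone. */
struct sub_read {
        uint64_t start;         /* logical offset this sub-bio covers */
        uint32_t len;           /* length of the sub-bio */
        int mirror;             /* mirror this sub-bio was read from */
};

int main(void)
{
        struct sub_read sub = { .start = 0, .len = 65536, .mirror = 1 };

        printf("repair [%llu, %llu) avoiding mirror %d\n",
               (unsigned long long)sub.start,
               (unsigned long long)(sub.start + sub.len), sub.mirror);
        return 0;
}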
Signed-off-by: Miao Xie
---
Changelog v1 -> v4:
- None
---
fs/btrfs/extent_io.c | 26 ++
1 file changed, 10 insertions(+), 16 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index f8dda46..154cb8e 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/exte
We forgot to free the failure record and the bio when submitting the
re-read bio failed; fix it.
Signed-off-by: Miao Xie
---
Changelog v1 -> v4:
- None
---
fs/btrfs/extent_io.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 92a6d9f..f8dda46 100644
This patchset implements the data repair function for the direct read; it
is implemented like buffered read:
1. When we find the data is not right, we try to read the data from the
other mirror.
2. When the io on the mirror ends, we will insert the endio work into the
dedicated btrfs workqueue, no
The original code of repair_io_failure was used just for buffered read;
because it got some filesystem data from the page structure, it was only
safe for pages in the page cache. But when we do a direct read, the pages
in the bio are not in the page cache, that is, there is no filesystem data
in the page structure
Hi,
I am testing BTRFS in a simple RAID1 environment: default mount options,
with data and metadata mirrored between sda2 and sdb2. I have a few questions
and a potential bug report. I don't normally have console access to the server
so when the server boots with 1 of 2 disks, the mount will
On Fri, 2014-09-12 at 14:56 +0900, Satoru Takeuchi wrote:
> Hi Gui,
>
> (2014/09/12 10:15), Gui Hecheng wrote:
> > For btrfs fi show, -d|--all-devices & -m|--mounted will
> > overwrite each other, so if both are specified, let the user
> > know that he should not use them at the same time.
> >
> > Si