From: Kent Overstreet
Btrfs has been doing bio splitting from btrfs_map_bio() by checking
device limits as well as calling ->merge_bvec_fn() etc. That is no
longer necessary, because generic_make_request() is now able to
handle arbitrarily sized bios. So clean up the unnecessary code paths.
On Mon, Jun 01, 2015 at 08:52:48PM +0530, Chandan Rajendra wrote:
> In the subpagesize-blocksize scenario, extent allocations for only some of the
> dirty blocks of a page can succeed, while allocation for the rest of the blocks
> can fail. This patch allows I/O against such partially allocated ordered
> e
On Monday 06 Jul 2015 11:17:38 Liu Bo wrote:
> On Fri, Jul 03, 2015 at 03:38:00PM +0530, Chandan Rajendra wrote:
> > On Wednesday 01 Jul 2015 22:47:10 Liu Bo wrote:
> > > On Mon, Jun 01, 2015 at 08:52:47PM +0530, Chandan Rajendra wrote:
> > > > In the subpagesize-blocksize scenario it is not sufficient
On 2015-07-03 13:51, Chris Murphy wrote:
On Fri, Jul 3, 2015 at 9:05 AM, Donald Pearson wrote:
I did some more digging and found that I had a lot of errors on basically
every drive.
Ick. Sucks for you, but that makes this less of a Btrfs problem, because
it can really only do so much if more than
From: Filipe Manana
Currently there is no way for a user to know the minimum size a
device of a btrfs filesystem can be resized to. Sometimes the value of
total allocated space (sum of all allocated chunks/device extents), which
can be parsed from 'btrfs filesystem show' and 'btrfs files
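For illustration, that rough lower bound can be read from the same command
today; the mount point and the figures below are hypothetical:

btrfs filesystem show /mnt
#   devid 1 size 2.73TiB used 1.60TiB path /dev/sdb
# Each per-device 'used' value is that device's total allocated space,
# which only approximates the true minimum resize size.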
Signed-off-by: Geert Uytterhoeven
---
fs/btrfs/qgroup.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index d5f1f033b7a00f3c..bf3c3fbed4b691f7 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -376,7 +376,7 @@ int btrfs_read_qgro
Cross-posting my unix.stackexchange.com question[1] to the btrfs list
(slightly modified):
[1]
https://unix.stackexchange.com/questions/214009/btrfs-distribute-files-equally-across-multiple-devices
-
I have a btrfs v
On Mon, 6 Jul 2015 18:22:52 +0200
Johannes Pfrang wrote:
> The simplest implementation would probably be something like: Always
> write files to the disk with the least amount of space used. I think
> this may be a valid software-raid use-case, as it combines RAID 0 (w/o
> some of the performance
Thanks for this, Filipe,
On Fri, Jul 03, 2015 at 11:36:49AM +0100, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> We were allocating memory with memdup_user() but we were never releasing
> that memory. This affected pretty much every call to the ioctl, whether
> it deduplicated extents or n
That looks quite interesting!
Unfortunately this removes the ability to specify different RAID-levels
for metadata vs data and actually behaves more like btrfs "single" mode.
According to your link it fills drive by drive instead of distributing
files equally across them:
"When you create a new fi
On Mon, Jul 06, 2015 at 06:22:52PM +0200, Johannes Pfrang wrote:
> Cross-posting my unix.stackexchange.com question[1] to the btrfs list
> (slightly modified):
>
> [1]
> https://unix.stackexchange.com/questions/214009/btrfs-distribute-files-equally-across-multiple-devices
>
>
Thank you. That's a very helpful explanation. I just did balance
start -dconvert=single ;)
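For anyone copying this, the full invocation takes the mount point;
/mnt here is a placeholder:

btrfs balance start -dconvert=single /mnt
btrfs filesystem df /mnt    # confirm the data profile afterwards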
Fwiw, the best explanation about "single" I could find was in the
Glossary[1].
I don't have an account on the wiki, but your first paragraph would fit
great there!
[1] https://btrfs.wiki.kernel.org/index
After removing some of the snapshots that were received, the errors at
btrfs check went away.
Is there some list of features in btrfs which are considered stable?
Cause I thought send/receive and subvolumes would be, but apparently
that's not the case :-/
Cheers,
Chris.
Hello,
I started with a raid1:
devid 1 size 2.73TiB used 2.67TiB path /dev/sdd
devid 2 size 2.73TiB used 2.67TiB path /dev/sdb
Then I added a third device, /dev/sdc1 and a balance
btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt/__Complete_Disk/
Now the file-system
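Such a conversion can be watched while it runs, using the same mount
point as the balance command above:

btrfs balance status /mnt/__Complete_Disk/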
Hello,
ok, sdc seems to have failed (sorry, I checked only the sdd and sdb SMART
values, as sdc is brand new; maybe a bad assumption on my side).
I have mounted the device
mount -o recovery,ro
So, what should I do now:
btrfs device delete /dev/sdc /mnt
or
mount -o degraded /dev/sdb /mnt
btrfs
On Mon, Jul 06, 2015 at 09:44:53PM +0200, Hendrik Friedel wrote:
> Hello,
>
> ok, sdc seems to have failed (sorry, I checked only the sdd and sdb
> SMART values, as sdc is brand new; maybe a bad assumption on my
> side).
>
> I have mounted the device
> mount -o recovery,ro
>
> So, what should I do
Based on my experience, Hugo's advice is critical: get the bad drive
out of the pool when in raid56, and do not try to replace or delete it
while it's still attached and recognized.
If you add a new device, mount degraded and rebalance. If you don't,
mount degraded then device delete missing.
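A sketch of those two paths; the device names are illustrative only:

# with a replacement disk at hand:
mount -o degraded /dev/sdb /mnt
btrfs device add /dev/sde /mnt
btrfs balance start /mnt

# without a replacement:
mount -o degraded /dev/sdb /mnt
btrfs device delete missing /mnt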
On M
On 07/06/2015 01:01 PM, Donald Pearson wrote:
> Based on my experience, Hugo's advice is critical: get the bad drive
> out of the pool when in raid56, and do not try to replace or delete it
> while it's still attached and recognized.
>
> If you add a new device, mount degraded and rebalance. If you
Hello,
oh dear, I fear I am in trouble:
recovery-mounted, I tried to save some data, but the system hung.
So I rebooted, and sdc is now physically disconnected.
Label: none uuid: b4a6cce6-dc9c-4a13-80a4-ed6bc5b40bb8
Total devices 3 FS bytes used 4.67TiB
devid 1 size 2.73TiB u
myth:~# btrfs check --repair /dev/mapper/crypt_sdd1
enabling repair mode
Checking filesystem on /dev/mapper/crypt_sdd1
UUID: 024ba4d0-dacb-438d-9f1b-eeb34083fe49
checking extents
cmds-check.c:4486: add_data_backref: Assertion `back->bytes != max_size` failed.
btrfs[0x8066a73]
btrfs[0x8066aa4]
btr
B.H.
Hello.
I have a btrfs volume which is used as a backup, using rsync from the
main servers. It contains many duplicate files across different
subvolumes, and I have some read-only snapshots of each subvolume,
which are created every time after the backup completes.
I was trying to gain some
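The tool that comes up later in this thread appears to be duperemove;
a minimal, hedged example with a hypothetical path:

duperemove -dr /backup
# -r: recurse into subdirectories
# -d: actually submit dedupe requests rather than only reporting duplicates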
If you can mount it RO, the first thing to do is back up any data that
you care about.
According to the bug that Omar posted, you should not try a device
replace, and you should not try a scrub with a missing device.
You may be able to just do a device delete missing, then separately do
a device add of
On Tue, Jul 07, 2015 at 12:54:01AM +0300, Mordechay Kaganer wrote:
> I have a btrfs volume which is used as a backup using rsync from the
> main servers. It contains many duplicate files across different
> subvolumes, and I have some read-only snapshots of each subvolume,
> which are created every t
Anything in dmesg?
On Mon, Jul 6, 2015 at 5:07 PM, hend...@friedels.name wrote:
> Hello,
>
> It seems that mounting works, but the system locks up completely soon
> after I start backing up.
>
>
> Greetings,
>
> Hendrik
>
>
> -- Original message --
>
> From: Donald Pearson
>
> Date: Mon., 6 July
B.H.
On Tue, Jul 7, 2015 at 1:34 AM, Mark Fasheh wrote:
>>
>> It runs successfully for several hours and prints out many files which
>> are indeed duplicate like this:
>>
>> Showing 4 identical extents with id 5164bb47
>> Start           Length          Filename
>> 0.0             4.8M            ""
>> 0.0
On Tue, Jul 07, 2015 at 02:03:06AM +0300, Mordechay Kaganer wrote:
>
> Checked some more pairs; most extents appear as "shared". In some
> cases there is a "last encoded" non-shared extent of length 4096.
>
> Since I use snapshots, could "shared" also mean "shared between snapshots"?
Yes I forgot ab
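Extent sharing can also be inspected per file; a sketch using filefrag
from e2fsprogs, with a hypothetical path:

filefrag -v /backup/somefile
# extents flagged 'shared' are referenced more than once, whether by
# reflinked copies, deduplication, or snapshots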
Christoph Anton Mitterer posted on Mon, 06 Jul 2015 20:40:23 +0200 as
excerpted:
> After removing some of the snapshots that were received, the errors at
> btrfs check went away.
>
> Is there some list of features in btrfs which are considered stable?
> Cause I thought send/receive and the subvolu
On Tue, 2015-07-07 at 00:47 +, Duncan wrote:
> The interaction between send/receive and subvolumes/snapshots
> is also a problem, but again, not so much on the subvolume/snapshot
> side, as on the send/receive side.
Well I haven't looked into any code, so the following is just
perception:
It
Christoph Anton Mitterer posted on Tue, 07 Jul 2015 03:03:25 +0200 as
excerpted:
> Well I haven't looked into any code, so the following is just
> perception: It seemed that send/receive itself has always worked
> correctly for me so far.
> I.e. I ran some complete diff -qr over the source and tar
The man page needs to be updated, since RAID5/6 is now supported
by btrfs-replace.
Signed-off-by: Wang Yanfeng
---
Documentation/btrfs-replace.asciidoc | 5 -----
1 file changed, 5 deletions(-)
diff --git a/Documentation/btrfs-replace.asciidoc b/Documentation/btrfs-replace.asciidoc
index 774d850
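For reference, the command this man page documents has the following
form; the devices and mount point here are illustrative:

btrfs replace start /dev/sdc /dev/sde /mnt
btrfs replace status /mnt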
Hello,
while mounting works with the recovery option, the system locks up after
reading.
dmesg shows:
[ 684.258246] ata6.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
[ 684.258249] ata6.00: irq_stat 0x4001
[ 684.258252] ata6.00: failed command: DATA SET MANAGEMENT
[ 684.258255] ata6