btrfs_release_extent_buffer_page() can't handle a dummy extent buffer
allocated by btrfs_clone_extent_buffer() properly. That is because the
reference count of pages allocated by btrfs_clone_extent_buffer()
is 2: 1 from alloc_page(), and another from attach_extent_buffer_page().
Running the following
Removing a large number of block groups in a transaction may hit the
BUG_ON() in btrfs_orphan_add(). That is because btrfs_orphan_reserve_metadata()
will grab a metadata reservation from the transaction handle, and
btrfs_delete_unused_bgs() didn't reserve metadata for the transaction handle when
delete
If the device tree has a hole, find_free_dev_extent() cannot find an
available address properly.
The problem can be reproduced by the following script.
mntpath=/btrfs
loopdev=/dev/loop0
filepath=/home/forrest/image
umount $mntpath
losetup -d $loopdev
truncate --size 100g $filepath
On Mon, 09 Feb 2015 10:26:33 -0500
Devon B. devo...@virtualcomplete.com wrote:
If you don't mind me asking, what version kernel are you running and are
you using any special mount options?
Well actually I did not claim I have working discard through 'loop', but your
post made me curious.
$
On Mon, 9 Feb 2015 20:42:56 +0500
Roman Mamedov r...@romanrm.net wrote:
On Mon, 09 Feb 2015 10:26:33 -0500
Devon B. devo...@virtualcomplete.com wrote:
If you don't mind me asking, what version kernel are you running and are
you using any special mount options?
Well actually I did not
Tobias Holst posted on Mon, 09 Feb 2015 23:45:21 +0100 as excerpted:
So a short summary:
- btrfs raid6 on 3.19.0 with btrfs-progs 3.19-rc2
- does not mount at boot up, open_ctree failed (disk 3)
- mounts successfully after bootup
- randomly checksum verify failed (disk 5)
- balance and
constantine posted on Tue, 10 Feb 2015 00:54:56 + as excerpted:
Could you please answer two questions?:
1. I am testing various files and all seem readable. Is there a way to
list every file that resides on a particular device (like /dev/sdc1?) so
as to check them?
I don't know of
Hi
I am just looking at the features enabled on my btrfs volume.
ls /sys/fs/btrfs/[UUID]/features/
shows the following output:
big_metadata compress_lzo extended_iref mixed_backref raid56
So big_metadata means I am not using skinny-metadata,
compress_lzo means I am using compression.
On Mon, Feb 9, 2015 at 12:21 PM, Filipe Manana fdman...@suse.com wrote:
There's a short time window where a race can happen between two or more
tasks that hold a transaction handle for the same transaction and where
one starts the transaction commit before the other tasks attempt to
split
We can have multiple fsync operations against the same file during the
same transaction and they can collect the same ordered extents while they
don't complete (still accessible from the inode's ordered tree). If this
happens, those ordered extents will never get their reference counts
decremented
For some reason we only allow btrfs-image restore to have one thread, which is
incredibly slow with large images. So allow us to do work with more than just
one thread. This made my restore go from 16 minutes to 3 minutes. Thanks,
Signed-off-by: Josef Bacik jba...@fb.com
---
btrfs-image.c |
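The general pattern behind that speedup is simply fanning independent clusters of the image out to a pool of workers instead of processing them serially. A minimal sketch in Python (the actual implementation is C in btrfs-image.c; decompress_cluster here is a stand-in for the per-cluster restore work):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def decompress_cluster(cluster: bytes) -> bytes:
    # Stand-in for the per-cluster work (decompress and write out).
    return zlib.decompress(cluster)

def restore(clusters, num_threads=4):
    # map() preserves input order, so results can still land at the
    # right offsets even though the work happens concurrently.
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        return list(pool.map(decompress_cluster, clusters))

clusters = [zlib.compress(b"metadata block %d" % i) for i in range(8)]
print(restore(clusters)[3])  # → b'metadata block 3'
```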
We hold a transaction open for the entirety of fixing extent refs. This works
out ok most of the time but we can be tight on space and run out of space when
fixing things. To get around this just push down the transaction starting dance
into the functions that actually fix things. This keeps us
We don't want to keep extent records pinned down if we fix stuff as we may need
the space and we can be pretty sure that these records are correct. Thanks,
Signed-off-by: Josef Bacik jba...@fb.com
---
cmds-check.c | 12 +++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git
Currently btrfs-debug-tree ignores the FULL_BACKREF flag which makes it hard to
figure out problems related to FULL_BACKREF. Thanks,
Signed-off-by: Josef Bacik jba...@fb.com
---
print-tree.c | 4
1 file changed, 4 insertions(+)
diff --git a/print-tree.c b/print-tree.c
index
The METADUMP super flag makes us skip doing the chunk tree reading which isn't
helpful for the new restore since we have a valid chunk tree. But we still want
to have a way for the kernel to know that this is a metadump restore so it
doesn't do things like verify data checksums. We also want to
When we go to fixup the dev items after a restore we scan all existing devices.
If you happen to be a btrfs developer you could possibly open up some random
device that you didn't just restore onto, which gives you weird errors and makes
you super cranky and waste a day trying to figure out what
When we restore a multi disk image onto a single disk we need to update the dev
items used and total bytes so that fsck doesn't freak out and that we get normal
results from stuff like btrfs fi show. Thanks,
Signed-off-by: Josef Bacik jba...@fb.com
---
btrfs-image.c | 150
We have logic to fix the root locations for roots in response to a corruption
bug we had earlier. However this work doesn't apply to reloc roots and can
screw things up worse, so make sure we skip any reloc roots that we find.
Thanks,
Signed-off-by: Josef Bacik jba...@fb.com
---
cmds-check.c |
Hitting enospc problems with a really corrupt fs uncovered the fact that we
match any flag in a block group when creating space infos. This is a problem
if we have a raid level set, we'll end up with only one space info that covers
metadata and data because they share a raid level. We don't
The data reloc root is weird with its csums. It'll copy an entire extent and
then log any csums it finds, which makes it look weird when it comes to prealloc
extents. So just skip the data reloc tree, it's special and we just don't need
to worry about it. Thanks,
Signed-off-by: Josef Bacik
P. Remek p.rem...@googlemail.com wrote:
Hello,
I am benchmarking Btrfs and when benchmarking random writes with the fio
utility, I noticed the following two things:
1) On the first run, when the target file doesn't exist yet, performance is
about 8000 IOPs. On the second, and every other run, performance
From: Anand Jain anand.j...@oracle.com
Theoretically we need to remove the device link attributes, but since the entire
device kobject was removed, there wasn't any issue about it. Just do it nicely.
Signed-off-by: Anand Jain anand.j...@oracle.com
---
fs/btrfs/sysfs.c | 17
From: Anand Jain anand.j...@oracle.com
As of now, the order in which the kobjects are created
in btrfs_sysfs_add_one() is:
fsid
features
unknown features (dynamic features)
devices.
Since we would move fsid and device kobject to fs_devices
from fs_info structure, this patch will reorder in
From: Anand Jain anand.j...@oracle.com
Since the failure code in btrfs_sysfs_add_one() can
call btrfs_sysfs_remove_one() even before device_dir_kobj
has been created, we need to check whether it is NULL.
Signed-off-by: Anand Jain anand.j...@oracle.com
---
fs/btrfs/sysfs.c | 10 ++
1 file
On Mon, Feb 9, 2015 at 5:54 PM, constantine costas.magn...@gmail.com wrote:
1. I am testing various files and all seem readable. Is there a way
to list every file that resides on a particular device (like
/dev/sdc1?) so as to check them? There are a handful of files that
seem corrupted,
On Mon, 09 Feb 2015 12:07:18 -0500
Devon B. devo...@virtualcomplete.com wrote:
Thanks for your testing. I haven't tried 3.14. I tried on CentOS 6 box
(2.6.32 - which is experimental) and Ubuntu 14.04 (3.13) and neither
worked. So the question remains, what is the difference? Possibly a
Hello,
I am benchmarking Btrfs and when benchmarking random writes with the fio
utility, I noticed the following two things:
1) On the first run, when the target file doesn't exist yet, performance is
about 8000 IOPs. On the second, and every other run, performance goes up
to 7 IOPs. It's a massive difference. The
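For context, a random-write fio job of the kind described might look like the sketch below. The poster's actual job file is not shown, so the ioengine, path, block size, and file size are all placeholders, not taken from the original post:

```ini
; Hypothetical fio job for random 4k writes against a file on a btrfs
; mount. All values here are illustrative assumptions.
[global]
ioengine=libaio
direct=1
bs=4k
size=1g
runtime=60
time_based

[randwrite-test]
rw=randwrite
directory=/mnt/btrfs
iodepth=32
```

Whether the file is freshly allocated or already exists (and is being overwritten CoW-style) is exactly the difference between the two runs described above.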
Thanks for your testing. I haven't tried 3.14. I tried on CentOS 6 box
(2.6.32 - which is experimental) and Ubuntu 14.04 (3.13) and neither
worked. So the question remains, what is the difference? Possibly a
small difference between the 3.13 and 3.14 kernels, I don't think it is
any of
Thank you everybody for your support, care, cheerful comments and
understandable criticism. I am in the process of backing up every
file.
Could you please answer two questions?:
1. I am testing various files and all seem readable. Is there a way
to list every file that resides on a particular
My previous patch Btrfs: fix scrub race leading to use-after-free
introduced the possibility to sleep in an atomic context, which happens
when the scrub_lock mutex is held at the time scrub_pending_bio_dec()
is called - this function can be called under an atomic context.
Chris ran into this in a
Brendan Hide bren...@swiftspirit.co.za wrote:
I have the following two lines in
/etc/udev/rules.d/61-persistent-storage.rules for two old 250GB
spindles. It sets the timeout to 120 seconds because these two disks
don't support SCT ERC. This may very well apply without modification to
other
How does btrfs raid5 handle mixed-size disks? The docs weren't
terribly clear on this.
Suppose I have 4x3TB and 1x1TB disks. Using conventional lvm+mdadm in
raid5 mode I'd expect to be able to fit about 10TB of space on those
(2TB striped across 4 disks plus 1TB striped across 5 disks after
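The capacity arithmetic behind that estimate can be sketched as a greedy simulation: repeatedly stripe across every device that still has free space, limited by the smallest member, with one device's worth per stripe going to parity. This models the lvm+mdadm layering described above as an illustration; it is not btrfs's actual chunk allocator:

```python
# Rough usable-capacity estimate for parity raid over mixed-size disks.
def raid5_usable(sizes):
    free = sorted(sizes, reverse=True)   # free space per device (TB)
    usable = 0
    while len(free) >= 3:                # keep at least 3 devices per stripe
        chunk = free[-1]                 # limited by the smallest member
        usable += chunk * (len(free) - 1)  # one device's worth is parity
        # Smallest member is now full; drop any devices that hit zero.
        free = [f - chunk for f in free[:-1] if f - chunk > 0]
    return usable

# 4x3TB + 1x1TB: 1TB striped across 5 disks (4TB usable), then 2TB
# striped across 4 disks (6TB usable) = 10TB, matching the estimate.
print(raid5_usable([3, 3, 3, 3, 1]))  # → 10
```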
Not sure if it helps, but here it is:
root@lab1:/mnt/vol1# btrfs filesystem df /mnt/vol1/
Data, RAID10: total=116.00GiB, used=110.03GiB
Data, single: total=8.00MiB, used=0.00
System, RAID1: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, RAID1: total=2.00GiB,
On Tue, Jan 27, 2015 at 11:11 AM, Filipe Manana fdman...@suse.com
wrote:
While running a scrub on a kernel with CONFIG_DEBUG_PAGEALLOC=y, I got
the following trace:
This actually trades one bug for another:
[ 1928.950319] BUG: sleeping function called from invalid context at
This series of patches fixes up btrfsck in lots of ways and adds some new
functionality. These patches were required to fix Hugo's broken multi-disk fs
as well as fix fsck so it would actually pass all of the fsck tests. This also
fixes a long standing btrfs-image problem where it wouldn't
If we fix bad blocks during run_next_block we will return -EAGAIN to loop around
and start again. The deal_with_roots work messed up this handling, this patch
fixes it. With this patch we can properly deal with broken tree blocks.
Thanks,
Signed-off-by: Josef Bacik jba...@fb.com
---
From: Zhao Lei zhao...@cn.fujitsu.com
These functions have included an unused chunk_tree argument
since the beginning; it is time to remove it and clean up the
related code that prepares the value of this argument in callers.
Signed-off-by: Zhao Lei zhao...@cn.fujitsu.com
---
fs/btrfs/volumes.c | 20
There's a short time window where a race can happen between two or more
tasks that hold a transaction handle for the same transaction and where
one starts the transaction commit before the other tasks attempt to
split their pending ordered extents list into the transaction's pending
ordered
Hi
I'm having some trouble with my six-drive btrfs raid6 (each drive
encrypted with LUKS). At first: Yes, I do have backups, but it may
take at least days, maybe weeks or even some months to restore
everything from the (offsite) backups. So it is not essential to
recover the data, but would be
On Mon, Feb 09, 2015 at 05:24:42PM -0500, Rich Freeman wrote:
How does btrfs raid5 handle mixed-size disks? The docs weren't
terribly clear on this.
Suppose I have 4x3TB and 1x1TB disks. Using conventional lvm+mdadm in
raid5 mode I'd expect to be able to fit about 10TB of space on those
On 2015/02/09 10:30 PM, Kai Krakow wrote:
Brendan Hide bren...@swiftspirit.co.za wrote:
I have the following two lines in
/etc/udev/rules.d/61-persistent-storage.rules for two old 250GB
[snip]
Wouldn't it be easier and more efficient to use
P. Remek posted on Mon, 09 Feb 2015 18:26:49 +0100 as excerpted:
Hello,
I am benchmarking Btrfs and when benchmarking random writes with the fio
utility, I noticed the following two things:
1) On the first run, when the target file doesn't exist yet, performance is about
8000 IOPs. On the second, and every