I found that the following patch is insufficient.
===
commit 6e6b32ddf58db54f714d0f263c2589f4859e8b5e
Author: Adam Buchbinder
Date: Fri Jun 13 16:43:56 2014 -0700
btrfs-progs: Fix a use-after-free in the volumes code.
=
The reproducer is
$ mkfs.btrfs D1 D2 D3 -mraid5
$ mkfs.ext4 D2 && mkfs.ext4 D3
$ mount D1 /btrfs -odegraded
---
[ 87.672992] ------------[ cut here ]------------
[ 87.673845] kernel BUG at fs/btrfs/raid56.c:1828!
...
[ 87.673845] RIP: 0010:[] [] __raid_recover_end_io+0x4a
Hi,
I want to expand a btrfs filesystem. At the moment I have four 2 TB HDDs
in one btrfs pool.
I want to replace two of the HDDs with 3 TB devices. To that end I first
added one 3 TB device. After that, I started the btrfs delete job.
This job has been running for 4 days now and it seems to hang.
I
On 06/22/2014 07:10 PM, Tamas Papp wrote:
On 06/20/2014 02:04 AM, George Mitchell wrote:
Hello Tamas,
I think it would help to provide more information than what you have
posted. "open_ctree" can cover a lot of territory.
1) I may be missing something, but I see no attachment. I am not
On Tue, Jun 24, 2014 at 3:34 AM, Satoru Takeuchi
wrote:
> Hi Filipe,
>
> (2014/06/23 20:58), Filipe David Borba Manana wrote:
>> Often when starting a transaction we commit the currently running
>> transaction,
>> which can end up writing block group caches when the current process has its
>> jou
On Tue, Jun 24, 2014 at 11:16:45AM +0800, Gui Hecheng wrote:
> When btrfs-image failed to create an image, the invalid output file
> had better be deleted to prevent being used mistakenly in the future.
>
> Signed-off-by: Gui Hecheng
> ---
> changelog
> v1->v2: use a new local variable to
Revert kernel commit 667e7d94a1683661cff5fe9a0fa0d7f8fdd2c007.
(Btrfs: allow superblock mismatch from older mkfs by Chris Mason)
The above commit will cause disaster if someone tries to mount a newly created
but later corrupted btrfs filesystem.
And before btrfs entered mainline, btrfs-progs has alread
On Tue, Jun 24, 2014 at 08:12:30AM +0900, Satoru Takeuchi wrote:
> (2014/06/23 22:44), David Sterba wrote:
> >On Wed, Jun 18, 2014 at 03:01:32PM +0900, Satoru Takeuchi wrote:
> >>(2014/06/13 7:57), Adam Buchbinder wrote:
> >>>It's 32 bits as defined in ctree.h, but the struct had it as 64 bits.
> >
Hi Filipe,
(2014/06/24 17:29), Filipe David Manana wrote:
On Tue, Jun 24, 2014 at 3:34 AM, Satoru Takeuchi
wrote:
Hi Filipe,
(2014/06/23 20:58), Filipe David Borba Manana wrote:
Often when starting a transaction we commit the currently running transaction,
which can end up writing block grou
Chris Murphy posted on Mon, 23 Jun 2014 23:19:37 -0600 as excerpted:
>> I zeroed out the drive and ran every smartctl test on it I could find
>> and it never threw any more errors.
>
> Zeroing SSDs isn't a good way to do it. Use ATA Secure Erase instead.
> The drive is overprovisioned, so there a
CC Josef
On Mon, 23 Jun 2014 12:58:59 +0100, Filipe David Borba Manana wrote:
> Often when starting a transaction we commit the currently running transaction,
> which can end up writing block group caches when the current process has its
> journal_info set to NULL (and not to a transaction). This
On Tue, Jun 24, 2014 at 11:22 AM, Miao Xie wrote:
> CC Josef
>
> On Mon, 23 Jun 2014 12:58:59 +0100, Filipe David Borba Manana wrote:
>> Often when starting a transaction we commit the currently running
>> transaction,
>> which can end up writing block group caches when the current process has it
Dear btrfs-developers,
thank you for making such a nice and innovative filesystem. I do have a
small complaint however :-)
I read the documentation and liked the idea of having a multiple-device
filesystem with mirrored metadata while having data in "single" mode.
This would be perfect for m
On Tue, 24 Jun 2014 12:42:00 +0200
Gerald Hopf wrote:
> The "-d single" allocator is useless (or broken?).
It's just not designed with your use case in mind. It operates on the level of
allocation extents (if I'm not mistaken), not of whole files.
If you want to join multiple devices with a per
Gerald Hopf posted on Tue, 24 Jun 2014 12:42:00 +0200 as excerpted:
> After copying, I then unmounted the filesystem, switched off one of the
> two 3TB USB disks and mounted the remaining 3TB disk in recovery mode
> (-o degraded,ro) and proceeded to check whether any data was still left
> alive.
>
I do recall that the issue was more specifically happening with larger
files.
In another scenario I had two files, one small or even empty, and one
large that filled the subvolume quota. If I recall correctly, I was able
to remove the smaller file without exceeding the quota limit, and then
remove
reproducer:
mkfs.btrfs -f -draid1 -mraid1 /dev/sdf /dev/sdd
modprobe -r btrfs && modprobe btrfs
mount -o degraded /dev/sdd /btrfs <-- calls add_missing_dev() to add missing btrfs_device
umount /btrfs
btrfs dev ready /dev/sdd
echo $?
0
mount /dev/sdd /btrfs
mount: wrong fs type, bad option, bad s
Reproducer 1:
modprobe -r btrfs; modprobe btrfs
while true ; do mkfs.btrfs -f /dev/sde > /dev/null 2>&1; done
CTRL-C
we keep stale FSIDs.
btrfs-devlist | egrep "/dev/sde" | wc -l
41
Reproducer 2:
mkfs.btrfs -d raid1 -m raid1 /dev/sdd /dev/sdf
mkfs.btrfs -f /dev/sdf
btrfs dev ready /dev/sdd
echo
Hi all,
I wondered whether
$ sudo btrfs rescue chunk-recover -y /dev/loop2p1
btrfs: chunk-recover.c:124: process_extent_buffer: Assertion
`!(exist->nmirrors >= 2)' failed.
$ echo $?
134
is a bug in btrfs or an error message from (the correctly working)
btrfs. Any ideas what
Checksums are applicable to sectorsize units. The current code uses
bio->bv_len units to compute and look up checksums. This works on machines
where sectorsize == PAGE_CACHE_SIZE. This patch makes the checksum
computation and lookup code work with sectorsize units.
Signed-off-by: Chandan Rajen
This commit brings back functions that set/clear EXTENT_WRITEBACK bits. These
are required to reliably clear PG_writeback page flag.
Signed-off-by: Chandan Rajendra
---
fs/btrfs/extent_io.c | 149 +++
fs/btrfs/extent_io.h | 2 +-
fs/btrfs/inode.c
Based on original patch from Aneesh Kumar K.V
bio_vec->{bv_offset, bv_len} cannot be relied upon by the end bio functions
to track the file offset range operated on by the bio. Hence this patch adds
two new members to 'struct btrfs_io_bio' to track the file offset range.
This patch also brings b
In the case of subpagesize-blocksize, this patch makes it possible to read
only a single metadata block from the disk instead of all the metadata blocks
that map into a page.
Signed-off-by: Chandan Rajendra
---
fs/btrfs/disk-io.c | 45 -
fs/btrfs/disk-io.h | 3 ++
fs/btrfs
This patchset continues with the work posted earlier at
http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg34527.html.
Changes from V1:
1. Remove usage of bio_vec->bv_{len,offset} in end_bio_extent_readpage()
and end_bio_extent_writepage().
Changes from V2:
1. Get __extent_writepage()
The code now loops across 'ordered extents' instead of 'extent maps' to figure
out the dirty blocks of the page to be submitted for a write operation.
Signed-off-by: Chandan Rajendra
---
fs/btrfs/extent_io.c | 66 +---
1 file changed, 27 insertions
From: Chandra Seetharaman
In order to handle multiple extent buffers per page, first we need to create a
way to handle all the extent buffers that are attached to a page.
This patch creates a new data structure 'struct extent_buffer_head', and moves
fields that are common to all extent buffers i
Currently, the code reserves/releases extents in multiples of PAGE_CACHE_SIZE
units. Fix this.
Signed-off-by: Chandan Rajendra
---
fs/btrfs/file.c | 32
1 file changed, 20 insertions(+), 12 deletions(-)
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 006af2
For the subpagesize-blocksize scenario, this patch adds the ability to write a
single extent buffer to the disk.
Signed-off-by: Chandan Rajendra
---
fs/btrfs/disk-io.c | 20 ++--
fs/btrfs/extent_io.c | 279 ++-
2 files changed, 244 insertions(+)
From: Chandra Seetharaman
This patch allows mounting filesystems with blocksize smaller than the
PAGE_SIZE.
Signed-off-by: Chandra Seetharaman
Signed-off-by: Chandan Rajendra
---
fs/btrfs/disk-io.c | 6 --
1 file changed, 6 deletions(-)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.
In free_workspace(), add the compression `workspace' to the head of the
`idle_workspace' list instead of the tail, so we have a better chance of
reusing the most recently used `workspace'.
Signed-off-by: Sergey Senozhatsky
---
fs/btrfs/compression.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/b
Hello,
Whenever possible, find_workspace() returns the first idle workspace,
and free_workspace() puts the workspace at the tail of the idle list. Put
the workspace at the head of the idle list instead. Hopefully, this will
let us reuse the most recently used workspace and avoid workspace->mem, ->buf, ->cbuf address translation
On 06/24/2014 03:29 AM, Filipe David Manana wrote:
On Tue, Jun 24, 2014 at 11:22 AM, Miao Xie wrote:
CC Josef
On Mon, 23 Jun 2014 12:58:59 +0100, Filipe David Borba Manana wrote:
Often when starting a transaction we commit the currently running transaction,
which can end up writing block grou
On Jun 23, 2014, at 11:39 PM, Mike Hartman wrote:
>>
>> https://github.com/kdave/btrfs-progs.git integration-20140619
>
> Thanks. I pulled that version and retried everything in my original
> transcript, including the "btrfs check --init-csum-tree
> --init-extent-tree". Results are identical a
Often when starting a transaction we commit the currently running transaction,
which can end up writing block group caches when the current process has its
journal_info set to NULL (and not to a transaction). This makes our assertion
at btrfs_check_data_free_space() (current_journal != NULL) fail,
When starting a transaction just assert that current->journal_info
doesn't contain a send transaction stub, since send isn't supposed
to start transactions and when it finishes (either successfully or
not) it's supposed to set current->journal_info to NULL.
This is motivated by the change titled:
On Jun 23, 2014, at 11:49 PM, Mike Hartman wrote:
>>> I zeroed out the drive and ran every smartctl test on it I could find
>>> and it never threw any more errors.
>>
>> Zeroing SSDs isn't a good way to do it. Use ATA Secure Erase instead. The
>> drive is overprovisioned, so there are pages wi
On Jun 24, 2014, at 1:52 AM, Tamas Papp wrote:
>
> On 06/22/2014 07:10 PM, Tamas Papp wrote:
>>
>> On 06/20/2014 02:04 AM, George Mitchell wrote:
>>> Hello Tamas,
>>>
>>> I think it would help to provide more information than what you have
>>> posted. "open_ctree" can cover a lot of territo
On 24.06.2014 13:02, Roman Mamedov wrote:
If you want to join multiple devices with a per-file granularity (so
that a single file is wholely stored on one given device), check out
the FUSE filesystem called mhddfs; I wrote an article about it some
time ago: https://romanrm.net/mhddfs
Thank yo
On 24.06.2014 13:45, Duncan wrote:
- not a single one (!) of the big files (3GB-15GB) survived
A little familiarity with btrfs' chunk allocator and it's obvious what
happened. The critical point is that btrfs data chunks are 1 GiB in
size, so files over a GiB will require multiple data chunks.
On Tue, 2014-06-24 at 15:43 +0200, Karl-Philipp Richter wrote:
> Hi all,
> I wondered whether
>
> $ sudo btrfs rescue chunk-recover -y /dev/loop2p1
> btrfs: chunk-recover.c:124: process_extent_buffer: Assertion
> `!(exist->nmirrors >= 2)' failed.
> $ echo $?
> 134
Hi, Karl
Fo
On 6/11/14, 9:25 PM, Gui Hecheng wrote:
> When chunk-recover is run on a healthy btrfs (data profile raid0, with
> plenty of data), the program has a chance to abort on the number
> of mirrors of an extent.
>
> According to the kernel code, the max mirror number of an extent
> is 3 not 2:
> ctree
> I somehow have doubts that a complex filesystem is the right project for
> me to start learning C, so I'll have to pass :-) No huge corporation
> with that itch behind me either, and I guess it will be more than a few
> hours for a btrfs programmer so no way I could sponsor that on my own.
Wheth
On 06/25/2014 10:17 AM, Eric Sandeen wrote:
On 6/11/14, 9:25 PM, Gui Hecheng wrote:
When chunk-recover is run on a healthy btrfs (data profile raid0, with
plenty of data), the program has a chance to abort on the number
of mirrors of an extent.
According to the kernel code, the max mirror number of
On Tue, 2014-06-24 at 21:17 -0500, Eric Sandeen wrote:
> On 6/11/14, 9:25 PM, Gui Hecheng wrote:
> > When chunk-recover is run on a healthy btrfs (data profile raid0, with
> > plenty of data), the program has a chance to abort on the number
> > of mirrors of an extent.
> >
> > According to the kernel c
On 6/24/14, 9:22 PM, Gui Hecheng wrote:
> On Tue, 2014-06-24 at 21:17 -0500, Eric Sandeen wrote:
>> On 6/11/14, 9:25 PM, Gui Hecheng wrote:
>>> When chunk-recover is run on a healthy btrfs (data profile raid0, with
>>> plenty of data), the program has a chance to abort on the number
>>> of mirrors of an
On 6/25/14, 12:14 AM, Eric Sandeen wrote:
> On 6/24/14, 9:22 PM, Gui Hecheng wrote:
>> > On Tue, 2014-06-24 at 21:17 -0500, Eric Sandeen wrote:
>>> >> On 6/11/14, 9:25 PM, Gui Hecheng wrote:
>>> When chunk-recover is run on a healthy btrfs (data profile raid0, with
>>> plenty of data), the pro
> Does this version's btrfs-image allow you to make an image of the file system?
Nope, same errors and no output.
> https://btrfs.wiki.kernel.org/index.php/Restore
>
> Your superblocks are good according to btrfs rescue super-recover. And
> various tree roots are found by btrfs-find-root includi
On Wed, 2014-06-25 at 00:25 -0500, Eric Sandeen wrote:
> On 6/25/14, 12:14 AM, Eric Sandeen wrote:
> > On 6/24/14, 9:22 PM, Gui Hecheng wrote:
> >> > On Tue, 2014-06-24 at 21:17 -0500, Eric Sandeen wrote:
> >>> >> On 6/11/14, 9:25 PM, Gui Hecheng wrote:
> >>> When chunk-recover is run on a health