On Sat, Aug 11, 2018 at 9:36 PM Qu Wenruo wrote:
> > I'll add a new rescue subcommand, 'btrfs rescue disable-quota' for you
> > to disable quota offline.
>
> Patch set (from my work mailbox), titled "[PATCH] btrfs-progs: rescue:
> Add ability to disable quota offline".
> Can also be fetched from
>
> So add an offline rescue tool to disable quota.
>
> Reported-by: Dan Merillat
> Signed-off-by: Qu Wenruo
That fixed it, thanks.
Tested-By: Dan Merillat
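For anyone curious what the offline tool has to do: conceptually it opens the
filesystem read-write with btrfs-progs and clears the ON flag in the qgroup
status item, so the kernel skips qgroup accounting on the next mount. A rough
sketch written from memory against btrfs-progs internals follows - it is NOT
Qu's actual patch, and the accessor names (btrfs_qgroup_status_flags and
friends) are assumptions:

/* Rough sketch, NOT the actual patch: clear the ON flag in the
 * qgroup status item of the quota tree so qgroup accounting is
 * skipped on the next mount. Assumes btrfs-progs internals. */
static int disable_quota_offline(struct btrfs_root *quota_root)
{
	struct btrfs_trans_handle *trans;
	struct btrfs_qgroup_status_item *si;
	struct btrfs_path path;
	struct btrfs_key key = {
		.objectid = 0,
		.type = BTRFS_QGROUP_STATUS_KEY,
		.offset = 0,
	};
	u64 flags;
	int ret;

	btrfs_init_path(&path);
	trans = btrfs_start_transaction(quota_root, 1);
	if (IS_ERR(trans))
		return PTR_ERR(trans);
	/* cow=1: we intend to modify the leaf we land on */
	ret = btrfs_search_slot(trans, quota_root, &key, &path, 0, 1);
	if (ret)	/* > 0: no status item, quota was never enabled */
		goto out;
	si = btrfs_item_ptr(path.nodes[0], path.slots[0],
			    struct btrfs_qgroup_status_item);
	flags = btrfs_qgroup_status_flags(path.nodes[0], si);
	btrfs_set_qgroup_status_flags(path.nodes[0], si,
				      flags & ~BTRFS_QGROUP_STATUS_FLAG_ON);
	btrfs_mark_buffer_dirty(path.nodes[0]);
out:
	btrfs_release_path(&path);
	btrfs_commit_transaction(trans, quota_root);
	return ret;
}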
On Sat, Aug 11, 2018 at 8:30 PM Qu Wenruo wrote:
>
> It looks pretty much like qgroup, but with too much noise.
> The pinpoint trace event would be btrfs_find_all_roots().
I had this half-written when you replied.
Agreed: it looks like the bulk of the time is spent in qgroups. Spent some
time with sysrq-l and ft
19 hours later, it's still going extremely slowly, taking longer and
longer to make progress. The main symptom is that the mount process is
spinning at 100% CPU, interspersed with btrfs-transaction spinning at
100% CPU.
So far it's racked up 14h45m of CPU time on mount and an additional
3h40m on btrfs-trans
On Fri, Aug 10, 2018 at 6:51 AM, Qu Wenruo wrote:
>
>
> On 8/10/18 6:42 PM, Dan Merillat wrote:
>> On Fri, Aug 10, 2018 at 6:05 AM, Qu Wenruo wrote:
>
> But considering the number of block groups you have, mount itself may
> take some time (before trying to resume balance).
On Fri, Aug 10, 2018 at 6:05 AM, Qu Wenruo wrote:
>
> Although I'm not sure about the details, the fs looks pretty huge.
> Tons of subvolumes and their free space cache inodes.
11TB, 3 or so subvolumes and two snapshots, I think. Not particularly
large for a NAS.
> But only 3 tree reloc trees, unles
E: Resending without the 500k attachment.
On Fri, Aug 10, 2018 at 5:13 AM, Qu Wenruo wrote:
>
>
> On 8/10/18 4:47 PM, Dan Merillat wrote:
>> Unfortunately that doesn't appear to be it; a forced restart and an
>> attempted mount with skip_balance lead to the same
if it may make
progress, but if not I'd like to start on other options.
On Fri, Aug 10, 2018 at 3:59 AM, Qu Wenruo wrote:
>
>
> On 8/10/18 3:40 PM, Dan Merillat wrote:
>> Kernel 4.17.9, 11tb BTRFS device (md-backed, not btrfs raid)
>>
>> I was testing something out
On Fri, Aug 10, 2018 at 3:40 AM, Dan Merillat wrote:
> Kernel 4.17.9, 11tb BTRFS device (md-backed, not btrfs raid)
>
> I was testing something out and enabled quota groups and started getting
> 2-5 minute long pauses where a btrfs-transaction thread spun at 100%.
>
> Pos
Kernel 4.17.9, 11TB btrfs device (md-backed, not btrfs RAID).
I was testing something out and enabled quota groups, and started getting
2-5 minute long pauses where a btrfs-transaction thread spun at 100%.
Post-reboot, the mount process spins at 100% CPU, occasionally yielding
to a btrfs-transaction
On Fri, Sep 1, 2017 at 11:20 AM, Austin S. Hemmelgarn wrote:
> No, that's not what I'm talking about. You always get one bcache device per
> backing device, but multiple bcache devices can use the same physical cache
> device (that is, backing devices map 1:1 to bcache devices, but cache
> device
I tried out -next to test the mm fixes, and immediately upon mounting my
array (11TB, 98% full at the time) the btrfs-transaction thread for it
spun at 100% CPU.
It acted like read-only, write-discarding media - deleted files
reappeared after a reboot every time. I'm not sure about writes, since
Send & receive from the same machine, from a read-only mount to a
freshly formatted fs. Aside from the warning, everything appears to
be working correctly, but since this is the latest btrfs code it
needed reporting.
I'll probably have another opportunity to test this again, since I'm
blowing up
;s the best way, so if anyone else has
ideas let me know.
On Fri, Apr 24, 2015 at 11:24 AM, David Sterba wrote:
> On Thu, Apr 23, 2015 at 12:51:33PM -0400, Dan Merillat wrote:
>> +/* returns:
>> + * 0 if the file exists and should be skipped.
>> + * 1 if the file does NOT ex
I won't be needing btrfs restore for a few more years!
On Fri, Apr 24, 2015 at 12:38 AM, Duncan <1i5t5.dun...@cox.net> wrote:
> Dan Merillat posted on Thu, 23 Apr 2015 12:47:29 -0400 as excerpted:
>
>> Hopefully this is sufficiently paranoid, tested with PATH_MAX length
>>
Restore symlinks, optionally with owner/times.
Signed-off-by: Dan Merillat
---
Documentation/btrfs-restore.asciidoc | 3 +
cmds-restore.c | 140 ++-
2 files changed, 140 insertions(+), 3 deletions(-)
diff --git a/Documentation/btrfs
Symlink restore needs this, but the cut&paste became
too complicated. Simplify everything.
Signed-off-by: Dan Merillat
---
cmds-restore.c | 53 ++---
1 file changed, 34 insertions(+), 19 deletions(-)
diff --git a/cmds-restore.c b/cmds-resto
This was lost in the cleanup of 71a559
Signed-off-by: Dan Merillat
---
Documentation/btrfs-restore.asciidoc | 3 +++
1 file changed, 3 insertions(+)
diff --git a/Documentation/btrfs-restore.asciidoc
b/Documentation/btrfs-restore.asciidoc
index 20fc366..89e0c87 100644
--- a/Documentation/btrfs
Hopefully this is sufficiently paranoid; tested with PATH_MAX-length
symlinks, existing files, insufficient permissions, and dangling symlinks.
I think I got the coding style correct this time; I'll fix and resend if
not.
Includes a trivial fix from my metadata patch; the documentation got
lost in th
On Wed, Apr 22, 2015 at 12:53 PM, David Sterba wrote:
> Applied, thanks.
>
> In future patches, please stick to the coding style used in progs ([1]),
> I've fixed spacing around "=", comments and moved declarations before
> the statements.
>
> [1] https://www.kernel.org/doc/Documentation/CodingSty
As long as the inode is intact, the file metadata can be restored.
Directory data is restored at the end of search_dir. Errors are
checked and returned, unless ignore_errors is requested.
Signed-off-by: Dan Merillat
---
Documentation/btrfs-restore.txt | 3 ++
cmds-restore.c
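For readers following along, the core of the idea is small. Here's a minimal
standalone sketch (not the patch itself - the patch works through btrfs-progs'
own structures) of applying recovered owner/mode/times to a restored path,
assuming the values have already been read out of the inode item:

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Apply recovered metadata to a restored path. times[0] is atime,
 * times[1] is mtime. */
static int restore_metadata(const char *path, uid_t uid, gid_t gid,
			    mode_t mode, const struct timespec times[2])
{
	/* NOFOLLOW so a restored symlink gets its own owner/times */
	if (fchownat(AT_FDCWD, path, uid, gid, AT_SYMLINK_NOFOLLOW)) {
		perror("chown");
		return -1;
	}
	/* symlink modes are ignored on Linux, so skip chmod for them */
	if (!S_ISLNK(mode) && fchmodat(AT_FDCWD, path, mode & 07777, 0)) {
		perror("chmod");
		return -1;
	}
	if (utimensat(AT_FDCWD, path, times, AT_SYMLINK_NOFOLLOW)) {
		perror("utimensat");
		return -1;
	}
	return 0;
}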
Changes since v1:
* Documented in the manpage
* Added to usage() for btrfs restore
* Made it an optional flag (-m/--restore-metadata)
* Use endian-safe macros to access the on-disk data.
* Restore the proper mtime instead of atime twice.
* Restore owner and mode
* Restore metadata for directories
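On the endian-safe point, for anyone unfamiliar: btrfs stores all on-disk
integers little-endian, so raw struct reads only happen to work on
little-endian hosts. A tiny illustration of the pattern, using glibc's
le*toh() as a userspace stand-in for the kernel's le*_to_cpu(); the struct
mirrors the on-disk btrfs_timespec layout, but the names here are just for
illustration:

#include <endian.h>
#include <stdint.h>

/* Mirrors the on-disk btrfs_timespec layout: both fields little-endian */
struct disk_timespec {
	uint64_t sec;
	uint32_t nsec;
} __attribute__((packed));

static inline uint64_t disk_timespec_sec(const struct disk_timespec *ts)
{
	return le64toh(ts->sec);	/* no-op on LE hosts, byteswap on BE */
}

static inline uint32_t disk_timespec_nsec(const struct disk_timespec *ts)
{
	return le32toh(ts->nsec);
}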
On Fri, Apr 17, 2015 at 7:54 AM, Noah Massey wrote:
> On Thu, Apr 16, 2015 at 7:33 PM, Dan Merillat wrote:
>> The inode is already found, use the data and make restore friendlier.
>>
>> Signed-off-by: Dan Merillat
>> ---
>> cmds-restore.c | 12
That's not a bad idea. In my case it was all owned by the same user
(media storage) so the only thing of interest was the timestamps.
I can whip up a patch to do that as well.
On Thu, Apr 16, 2015 at 9:09 PM, Duncan <1i5t5.dun...@cox.net> wrote:
> Dan Merillat posted on Thu, 16
I think Thunderbird ate that patch, sorry.
I didn't make it conditional - there's really no reason not to restore
the information. I was actually surprised that it didn't restore it
before this patch.
If it looks good I'll resend without the word-wrapping.
The inode is already found, use the data and make restore friendlier.
Signed-off-by: Dan Merillat
---
cmds-restore.c | 12
1 file changed, 12 insertions(+)
diff --git a/cmds-restore.c b/cmds-restore.c
index d2fc951..95ac487 100644
--- a/cmds-restore.c
+++ b/cmds-restore.c
On Tue, Apr 7, 2015 at 11:40 PM, Dan Merillat wrote:
> Bcache failures are nasty, because they leave a mix of old and new
> data on the disk. In this case, there was very little dirty data, but
> of course the tree roots were dirty and out-of-sync.
>
> fileserver:/usr/src/btrfs
, Dan Merillat wrote:
> It's a known bug with bcache and enabling discard, it was discarding
> sections containing data it wanted. After a reboot bcache refused to
> accept the cache data, and of course it was dirty because I'm frankly
> too stupid to breathe sometimes.
>
fs or possibly (in my case) an
> issue with the previous SSD.
>
> Did you encounter this same error?
>
> With my 2 most recent crashes, I didn't try to recover very hard (or even
> try 'btrfs recover; at all) as I've been taking daily backups. I did try
> btrfsck,
bably recover nearly everything.
At worst, is there a way to scan the metadata blocks and rebuild from
found extent-trees?
On Tue, Apr 7, 2015 at 11:40 PM, Dan Merillat wrote:
> Bcache failures are nasty, because they leave a mix of old and new
> data on the disk. In this case, there was
Bcache failures are nasty, because they leave a mix of old and new
data on the disk. In this case, there was very little dirty data, but
of course the tree roots were dirty and out-of-sync.
fileserver:/usr/src/btrfs-progs# ./btrfs --version
Btrfs v3.18.2
kernel version 3.18
[ 572.573566] BTRFS
On Thu, Oct 30, 2014 at 3:50 AM, Koen Kooi wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Dan Merillat schreef op 30-10-14 04:17:
>> It's specifically BTRFS related, I was able to reproduce it on a bare
>> drive (no lvm, no md, no bcache). It
n't have that
> issue. I was already planning to because of the read-only snapshots issue.
>
> Thank you and good luck debugging!
>
> On 29-10-2014 21:50, Dan Merillat wrote:
>> I'm in the middle of debugging the exact same thing. 3.17.0 -
>> rtorrent dies with
                         fd, location);
        printf("%d: writing at %04zd mb\n", i, location);
        memset(map, 0x5a, 1 * MB);
        msync(map, 1 * MB, MS_ASYNC);
        munmap(map, MB);
    }
}
On Wed, Oct 29, 2014 at 5:50 PM, Dan Merillat wrote:
I'm in the middle of debugging the exact same thing. 3.17.0 -
rtorrent dies with SIGBUS.
I've done some debugging; the sequence is something like this (a rough
standalone reproducer follows the list):
open a new file
fallocate() to the final size
mmap() all (or a portion) of the file
write to the region
run SHA1 on that mmap'd region to validat
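Rough standalone approximation of that sequence - the file name and sizes
are arbitrary, and a plain byte check stands in for the SHA1 pass, which is
where the SIGBUS hit:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MB (1024 * 1024)

int main(void)
{
	size_t chunks = 64;	/* 64MB file; size is arbitrary */
	int fd = open("testfile", O_RDWR | O_CREAT, 0644);

	if (fd < 0 || posix_fallocate(fd, 0, chunks * MB)) {
		perror("open/fallocate");
		return 1;
	}
	for (size_t i = 0; i < chunks; i++) {
		off_t location = (off_t)i * MB;
		char *map = mmap(NULL, MB, PROT_READ | PROT_WRITE,
				 MAP_SHARED, fd, location);
		if (map == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		memset(map, 0x5a, MB);	/* write through the mapping */
		/* re-read every page: stand-in for the SHA1 validation */
		for (size_t off = 0; off < MB; off++) {
			if (map[off] != 0x5a) {
				fprintf(stderr, "mismatch in chunk %zu\n", i);
				return 1;
			}
		}
		msync(map, MB, MS_ASYNC);
		munmap(map, MB);
	}
	close(fd);
	return 0;
}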
On Wed, Sep 24, 2014 at 6:23 PM, Holger Hoffstätte wrote:
>> Basically it's been data allocation happy, since I haven't deleted
>> 53GB at any point. Unfortunately, none of the chunks are at 0% usage
>> so a balance -dusage=0 finds nothing to drop.
>
> Also try -musage=0..10, just for fun.
Trie
Any idea how to recover? I can't cut-paste but it's
Total devices 1 FS bytes used 176.22GiB
size 233.59GiB used 233.59GiB
Basically it's been data allocation happy, since I haven't deleted
53GB at any point. Unfortunately, none of the chunks are at 0% usage
so a balance -dusage=0 finds nothing t
I'm trying to track this down - this started happening without changing
the kernel in use, so it's probably a corrupted filesystem. The symptoms
are that all memory is suddenly used by no apparent source. The OOM killer
is invoked on every task, yet it still can't free up enough memory to
continue. When it goe
On Fri, Apr 5, 2013 at 7:43 PM, Dan Merillat wrote:
>
> first off: this was just junk data, and is all readable in degraded
> mode anyway.
>
> Label: 'ROOT' uuid: cc80d150-af98-4af4-bc68-c8df352bda4f
> Total devices 2 FS bytes used 138.00GB
>
first off: this was just junk data, and is all readable in degraded
mode anyway.
Label: 'ROOT' uuid: cc80d150-af98-4af4-bc68-c8df352bda4f
Total devices 2 FS bytes used 138.00GB
devid 1 size 232.79GB used 189.04GB path /dev/sdc2
devid 3 size 232.89GB used 14.06GB path
Is it possible to weight the allocations of data/system/metadata so
that data goes on large, slow drives while system/metadata goes on a
fast SSD? I don't have exact numbers, but I'd guess a vast majority
of seeks during operation are lookups of tiny bits of data, while data
reads&writes are done
Kernel 3.3.0, 64bit.
Reproduce:
mkfs.btrfs /dev/foo -s 16k -l 16k -n 16k
mount /dev/foo /mnt/foo
cd /mnt/foo
btrfs su create test
It hangs hard here, and any attempt to access that fs also hangs hard.
Workaround: Don't do it. :)
If it makes a difference, it was a logical volume on top of MD, not a
raw pa
On Tue, Nov 8, 2011 at 3:17 PM, Chris Mason wrote:
> On Tue, Nov 08, 2011 at 01:27:28PM -0500, Chris Mason wrote:
>> On Tue, Nov 08, 2011 at 12:55:40PM -0500, Dan Merillat wrote:
>> > On Sun, Nov 6, 2011 at 1:38 PM, Chris Mason wrote:
>> > > Hi everyone,
>> >
On Sun, Nov 6, 2011 at 1:38 PM, Chris Mason wrote:
> Hi everyone,
>
> This pull request is pretty beefy, it ended up merging a number of long
> running projects and cleanup queues. I've got btrfs patches in the new
> kernel.org btrfs repo. There are two different branches with the same
> changes
On Fri, Sep 2, 2011 at 4:42 AM, Christoph Hellwig wrote:
> On Fri, Sep 02, 2011 at 03:56:25PM +0800, Li Zefan wrote:
>> There's an off-by-one bug:
>>
>> # create a file with lots of 4K file extents
>> # btrfs fi defrag /mnt/file
>> # sync
>> # filefrag -v /mnt/file
>> Filesystem type is:
On Sat, Oct 8, 2011 at 11:35 AM, Josef Bacik wrote:
> I think I fixed this, try my git tree
>
> git://git.kernel.org/pub/scm/linux/kernel/git/josef/btrfs-work.git
I wanted to Ack this as well - 3.1-rc4 was completely unusable when
firefox was running (30+ second pauses to read directories, btrfs
On Tue, Aug 30, 2011 at 11:29 PM, Dave Chinner wrote:
> On Tue, Aug 30, 2011 at 06:17:02PM -0700, Sunil Mushran wrote:
>> Instead
>> we should let the fs weigh the cost of providing accurate information
>> with the possible gain in performance.
>>
>> Data:
>> A range in a file that could contain s
> Here it is.
>
> http://marc.info/?l=linux-btrfs&m=131176036219732&w=2
That was it, thanks. Confirmed fixed.
On Tue, Aug 16, 2011 at 8:51 AM, Chris Mason wrote:
> Excerpts from Dan Merillat's message of 2011-08-15 23:59:50 -0400:
> Dan Carpenter sent a patch for this, I'll get it queued up for rc3.
Can you send it? I'd like to test it to see if it fixes my system.
I noticed a series of hung_task notifications in dmesg, so I went
poking at it. The process is 'dropbox', and it's stuck trying to llseek
its library.zip file.
strace of dropbox:
...
stat("/home/x/.dropbox-dist/library.zip", {st_mode=S_IFREG|0755,
st_size=11575179, ...}) = 0
open("/home/x/.dropbox-
On Tue, Aug 9, 2011 at 1:50 PM, David Sterba wrote:
> On Thu, Aug 04, 2011 at 09:19:26AM +0800, Miao Xie wrote:
>> > the patch has been applied on top of current linus which contains patches
>> > from
>> > both pull requests (ed8f37370d83).
>>
>> I think it is because the caller didn't reserve en