When I do a
btrfs filesystem defragment -r /directory
does it really defragment all files in this directory tree, even if it
contains subvolumes?
The man page does not mention subvolumes on this topic.
I have an older script (which I wrote myself) that does a
"btrfs filesystem defragment -r" on al
On Thu, Aug 31, 2017 at 08:53:09AM +0900, Misono, Tomohiro wrote:
> On 2017/08/30 20:09, Eryu Guan wrote:
> > On Wed, Aug 30, 2017 at 04:38:16PM +0900, Misono, Tomohiro wrote:
> >> btrfs/029 uses _filter_testdirs() to filter the names of the $TEST_DIR and
> >> $SCRATCH_MNT directories.
> >>
> >> In this f
Some static functions are needlessly forward declared. Let's remove those
declarations since they add no value.
Signed-off-by: Nikolay Borisov
---
Here is a less invasive version of my previous patch removing the fwd
declarations in extent-tree. This time I've limited myself to only those
decl
Hi,
this 37T filesystem took some time to mount. It has 47
subvolumes/snapshots and is mounted with
noatime,compress=zlib,space_cache. Is it normal, due to its size?
# time mount /data/R6HW
real    1m32.383s
user    0m0.000s
sys     0m1.348s
# time umount /data/R6HW
real    0m2.562s
user
Hello again, list. I thought I would clear things up and describe what is
happening with my troubled RAID setup.
So, having received help from the list, I initially ran a full
defragmentation of all the data and recompressed everything with zlib.
That didn't help. Then I ran the ful
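For readers following along, the two steps described above roughly correspond
to a single defragment invocation with recompression (the mount point below is
only a placeholder, not taken from the report):

  # Recursively defragment and rewrite all data compressed with zlib.
  btrfs filesystem defragment -r -v -czlib /mnt/raid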
On 08/31/2017 12:43 PM, Marco Lorenzo Crociani wrote:
> Hi,
> this 37T filesystem took some time to mount. It has 47
> subvolumes/snapshots and is mounted with
> noatime,compress=zlib,space_cache. Is it normal, due to its size?
Yes, unfortunately it is. It depends on the size of the metadata exte
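A rough way to see how much data and metadata such a mount has to deal with
(using the mount point from the report above; interpreting the output is left
to the reader):

  btrfs filesystem df /data/R6HW
  btrfs filesystem usage /data/R6HW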
Print "Device slack: 0.00B"
instead of "Device slack: 16.00EiB".
Signed-off-by: Patrik Lundquist
---
cmds-fi-usage.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/cmds-fi-usage.c b/cmds-fi-usage.c
index 101a0c4..6c846c1 100644
--- a/cmds-fi-usage.c
I'm going to pile on this thread because I have the same issue. I've
seen this twice in just the past 2 days on a filesystem that was
created a few weeks ago. Unmounting and mounting again with no
special options gets the filesystem back.
[Aug31 02:59] BTRFS: Transaction aborted (error -17)
[
On 2017-08-31 02:49, Ulli Horlacher wrote:
On Thu 2017-08-24 (18:45), Peter Grandi wrote:
As usual with Btrfs, there are corner cases to avoid: 'defrag'
should be done before 'balance'
Good hint. So far I did it the other way: balance before defrag.
I will switch.
For reference, the reason to
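A minimal sketch of the recommended order, with a placeholder mount point and
an illustrative balance filter (neither is from the thread):

  MNT=/mnt/btrfs
  # Defragment first ...
  btrfs filesystem defragment -r -v "$MNT"
  # ... then balance; -dusage=50 only relocates data chunks that are at
  # most half full, which keeps the run shorter than a full balance.
  btrfs balance start -dusage=50 "$MNT"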
On 2017-08-31 07:00, Hans van Kranenburg wrote:
On 08/31/2017 12:43 PM, Marco Lorenzo Crociani wrote:
Hi,
this 37T filesystem took some time to mount. It has 47
subvolumes/snapshots and is mounted with
noatime,compress=zlib,space_cache. Is it normal, due to its size?
Yes, unfortunately it is.
On Thu, 31 Aug 2017 12:43:19 +0200
Marco Lorenzo Crociani wrote:
> Hi,
> this 37T filesystem took some time to mount. It has 47
> subvolumes/snapshots and is mounted with
> noatime,compress=zlib,space_cache. Is it normal, due to its size?
If you could implement SSD caching in front of your FS
On 2017-08-31 07:36, Roman Mamedov wrote:
On Thu, 31 Aug 2017 12:43:19 +0200
Marco Lorenzo Crociani wrote:
Hi,
this 37T filesystem took some time to mount. It has 47
subvolumes/snapshots and is mounted with
noatime,compress=zlib,space_cache. Is it normal, due to its size?
If you could imple
On Thu, 31 Aug 2017 07:45:55 -0400
"Austin S. Hemmelgarn" wrote:
> If you use dm-cache (what LVM uses), you need to be _VERY_ careful and
> can't use it safely at all with multi-device volumes because it leaves
> the underlying block device exposed.
It locks the underlying device so it can't b
On 2017-08-31 19:36, Roman Mamedov wrote:
On Thu, 31 Aug 2017 12:43:19 +0200
Marco Lorenzo Crociani wrote:
Hi,
this 37T filesystem took some time to mount. It has 47
subvolumes/snapshots and is mounted with
noatime,compress=zlib,space_cache. Is it normal, due to its size?
Just like Han s
On 08/31/2017 01:18 PM, Austin S. Hemmelgarn wrote:
> [...]
>> Any hint here?
> Having compression enabled causes no issues with defrag and balance.
> There appears to be a prevalent belief however that defrag is
> pointless if you're using compression, probably because some versions
> of `filefrag
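To illustrate why filefrag's numbers can mislead here: btrfs stores compressed
data in extents of at most 128KiB, and filefrag counts each of them as a
separate fragment, so a logically contiguous compressed file can still report
many extents. The file path below is only an example:

  filefrag -v /mnt/btrfs/bigfile.log
  # Defragment just this file and compare the report again.
  btrfs filesystem defragment -v /mnt/btrfs/bigfile.log
  filefrag -v /mnt/btrfs/bigfile.log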
From: Josef Bacik
We were having corruption issues that were tied back to problems with the extent
tree. In order to track them down I built this tool to try and find the
culprit, which was pretty successful. If you compile with this tool enabled, it
will live-verify every ref update that the fs make
Hello,
Sorry, I really thought I could accomplish this with BPF, but ref tracking is
just too complicated to work properly with BPF. I forward-ported my ref
verification patch to the latest kernel; you can find it in the btrfs-readdir
branch of my btrfs-next tree here
git://git.kernel.org/pub/
tree: https://git.kernel.org/pub/scm/linux/kernel/git/josef/btrfs-next.git
btrfs-readdir
head: fcde4ff2122bcd230de62daec6d466631666d284
commit: fcde4ff2122bcd230de62daec6d466631666d284 [4/4] Btrfs: add a extent ref
verify tool
config: xtensa-allmodconfig (attached as .config)
compiler: xtensa
tree: https://git.kernel.org/pub/scm/linux/kernel/git/josef/btrfs-next.git
btrfs-readdir
head: fcde4ff2122bcd230de62daec6d466631666d284
commit: fcde4ff2122bcd230de62daec6d466631666d284 [4/4] Btrfs: add a extent ref
verify tool
config: alpha-allmodconfig (attached as .config)
compiler: alpha-l
From: Josef Bacik
We were having corruption issues that were tied back to problems with the extent
tree. In order to track them down I built this tool to try and find the
culprit, which was pretty successful. If you compile with this tool enabled, it
will live-verify every ref update that the fs make
Michał Sokołowski posted on Thu, 31 Aug 2017 16:38:14 +0200 as excerpted:
> On 08/31/2017 01:18 PM, Austin S. Hemmelgarn wrote:
>> [...]
>>> Any hint here?
>> Having compression enabled causes no issues with defrag and balance.
>> There appears to be a prevalent belief however that defrag is point
Hi All,
I found a bug in mkfs.btrfs when the '-r' option is used: it seems that the
full disk is not visible.
$ uname -a
Linux venice.bhome 4.12.8 #268 SMP Thu Aug 17 09:03:26 CEST 2017 x86_64
GNU/Linux
$ btrfs --version
btrfs-progs v4.12
--- First try without '-r' (/dev/sda is about
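For anyone hitting the same thing, a hedged sketch of the symptom and one
possible workaround (device and paths are examples, not taken from the report):

  # With -r, mkfs sizes the filesystem to roughly the contents of the
  # root directory instead of the whole device.
  mkfs.btrfs -r /path/to/rootdir /dev/sda
  # Workaround: grow the filesystem to the full device after mounting.
  mount /dev/sda /mnt
  btrfs filesystem resize max /mnt
  btrfs filesystem usage /mnt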
On Thu, Aug 31, 2017 at 02:52:56PM +, Josef Bacik wrote:
> Hello,
>
> Sorry, I really thought I could accomplish this with BPF, but ref tracking is
> just too complicated to work properly with BPF. I forward-ported my ref
> verification patch to the latest kernel; you can find it in the btrf
We are using 4.11 in production at FB with backports of recent (a month ago?)
stuff. I'm relatively certain nothing bad will happen, and this branch has the
most recent fsync() corruption fix (which exists in your kernel, so it's not
new). That said, if you are uncomfortable I can rebase this
I'm having issues with a bad block(?) on my root ssd.
dmesg is consistently outputting "BTRFS critical (device sda2):
corrupt leaf, bad key order: block=293438636032, root=1, slot=11"
"btrfs scrub stat /" outputs "scrub status for b2c9ff7b-[snip]-48a02cc4f508
scrub started at Wed Aug 30 11:51:49
On Thu, Aug 31, 2017 at 01:53:58PM -0400, Eric Wolf wrote:
> I'm having issues with a bad block(?) on my root ssd.
>
> dmesg is consistently outputting "BTRFS critical (device sda2):
> corrupt leaf, bad key order: block=293438636032, root=1, slot=11"
>
> "btrfs scrub stat /" outputs "scrub status
leaf 293438636032 items 153 free space 2820 generation 5389981 owner 267
fs uuid b2c9ff7b-[snip]-48a02cc4f508
chunk uuid e60d16b9-ca53-45b3-a47a-e0a146046894
item 0 key (890550 INODE_REF 31762) itemoff 16260 itemsize 23
inode ref index 2727 namelen 13 name: dpkg.status.0
item 1 key (890550 EXT
On 2017-08-31 13:27, Goffredo Baroncelli wrote:
Hi All,
I found a bug in mkfs.btrfs when the '-r' option is used: it seems that the
full disk is not visible.
$ uname -a
Linux venice.bhome 4.12.8 #268 SMP Thu Aug 17 09:03:26 CEST 2017 x86_64
GNU/Linux
$ btrfs --version
btrfs-progs v4.1
Also, I know it was caused by bad RAM, and that RAM has since been removed.
---
Eric Wolf
(201) 316-6098
19w...@gmail.com
On Thu, Aug 31, 2017 at 2:33 PM, Hugo Mills wrote:
> On Thu, Aug 31, 2017 at 01:53:58PM -0400, Eric Wolf wrote:
>> I'm having issues with a bad block(?) on my root ssd.
>>
>>
(Please don't top-post; edited for conversation flow)
On Thu, Aug 31, 2017 at 02:44:39PM -0400, Eric Wolf wrote:
> On Thu, Aug 31, 2017 at 2:33 PM, Hugo Mills wrote:
> > On Thu, Aug 31, 2017 at 01:53:58PM -0400, Eric Wolf wrote:
> >> I'm having issues with a bad block(?) on my root ssd.
> >>
>
I've previously confirmed it's a bad RAM module, which I have already
submitted an RMA for. Any advice for manually fixing the bits?
Sorry for top-posting; I'm not sure how mailing lists work (again, sorry
if this message is top-posted, how do I ensure it's not?)
---
Eric Wolf
(201) 316-6098
19w...@gm
On Thu, Aug 31, 2017 at 03:21:07PM -0400, Eric Wolf wrote:
> I've previously confirmed it's a bad ram module which I have already
> submitted an RMA for. Any advice for manually fixing the bits?
What I'd do... use a hex editor and the contents of ctree.h as
documentation to find the byte in que
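Before reaching for a hex editor, the damaged leaf can also be printed in
decoded form to see which key is out of order; the block number is the one
from the dmesg line quoted above, and the device name is just an example:

  btrfs inspect-internal dump-tree -b 293438636032 /dev/sda2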
On 2017-08-31 20:49, Austin S. Hemmelgarn wrote:
> On 2017-08-31 13:27, Goffredo Baroncelli wrote:
>> Hi All,
>>
>>
> >> I found a bug in mkfs.btrfs when the '-r' option is used: it seems
> >> that the full disk is not visible.
>>
>> $ uname -a Linux venice.bhome 4.12.8 #268 SMP Thu Aug 17 09
On 2017-09-01 01:27, Goffredo Baroncelli wrote:
Hi All,
I found a bug in mkfs.btrfs when the '-r' option is used: it seems that the
full disk is not visible.
Besides the new bug you found, -r has several existing bugs.
For example, it will create a dev extent starting from physical o
On 2017/08/31 16:33, Eryu Guan wrote:
> On Thu, Aug 31, 2017 at 08:53:09AM +0900, Misono, Tomohiro wrote:
>> On 2017/08/30 20:09, Eryu Guan wrote:
>>> On Wed, Aug 30, 2017 at 04:38:16PM +0900, Misono, Tomohiro wrote:
btrfs/029 uses _filter_testdirs() to filter the names of the $TEST_DIR and
$S
Hey folks,
I thought I would finally take a swing at this; I've wanted to be a
kernel/fs dev for a few years now. My current $job is as an
Infrastructure Engineer. I'm currently teaching myself C and have a
background in shell scripting & Python. I love doing deep dives and
learning about Linux int
On Fri, Sep 01, 2017 at 09:44:59AM +0900, Misono, Tomohiro wrote:
> OK, I will do that if you won't, though I'm not sure whether other
> combinations of filters would pose a similar problem.
Thanks! Then I'll test :)
Eryu
Hey.
Just got the following call trace with:
$ uname -a
Linux heisenberg 4.12.0-1-amd64 #1 SMP Debian 4.12.6-1 (2017-08-12) x86_64
GNU/Linux
$ btrfs version
btrfs-progs v4.12
Sep 01 06:10:12 heisenberg kernel: [ cut here ]
Sep 01 06:10:12 heisenberg kernel: WARNING: CPU:
On 2017-09-01 11:36, Anthony Riley wrote:
Hey folks,
I thought I would finally take a swing at this; I've wanted to be a
kernel/fs dev for a few years now. My current $job is as an
Infrastructure Engineer. I'm currently teaching myself C and have a
background in shell scripting & Python. I lov
Several tests use both _filter_test_dir and _filter_scratch,
concatenated by a pipe, to filter $TEST_DIR and $SCRATCH_MNT. However, this
fails if the shorter string is a substring of the other (like
"/mnt" and "/mnt2").
This patch introduces a new common filter function to safely call both
_filter
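A minimal sketch of the idea (not the exact helper from the patch): substitute
the longer mount point first, so a shorter mount point that is a prefix of it
(e.g. /mnt vs. /mnt2) cannot match too early:

  _filter_test_dirs_sketch()
  {
      if [ ${#TEST_DIR} -ge ${#SCRATCH_MNT} ]; then
          sed -e "s,$TEST_DIR,TEST_DIR,g" -e "s,$SCRATCH_MNT,SCRATCH_MNT,g"
      else
          sed -e "s,$SCRATCH_MNT,SCRATCH_MNT,g" -e "s,$TEST_DIR,TEST_DIR,g"
      fi
  }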
Use the newly introduced common function to filter both $TEST_DIR and
$SCRATCH_MNT.
Signed-off-by: Tomohiro Misono
---
common/filter | 2 +-
tests/btrfs/029 | 11 +++
tests/generic/409 | 3 +--
tests/generic/410 | 3 +--
tests/generic/411 | 3 +--
5 files changed, 7 insertions(+),
It does not look correct to access &be->node on line 322 after freeing be
on line 321.
julia
-- Forwarded message --
Date: Fri, 1 Sep 2017 07:41:26 +0800
From: kbuild test robot
To: kbu...@01.org
Cc: Julia Lawall
Subject: [josef-btrfs:btrfs-readdir 4/5] fs/btrfs/ref-verify.c:322