make btrfs qgroups show human-readable sizes
using the --human-readable option, example:
qgroupid      rfer      excl  max_rfer  max_excl parent child
-------- --------- --------- --------- --------- ------ -----
0/5      299.58MiB 299.58MiB 400.00MiB     0.00B 1/
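For reference, a minimal sketch of the invocation that produces output like
the above; the mount point /mnt is an assumption, not part of the patch:

    # show qgroup usage with sizes in MiB/GiB instead of raw bytes
    btrfs qgroup show --human-readable /mnt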
Fan Chengniang posted on Tue, 13 Jan 2015 11:12:55 +0800 as excerpted:
> make btrfs qgroups show human-readable sizes using the --human-readable
> option, example: [snip]
btrfs-progs uses kernel patching rules, so updated patch versions should
include a (generally one-line) description of what changed
Hello,
> Hello,
>
> we are currently investigating the possibilities and performance limits of
> the Btrfs filesystem. Now it seems we are getting pretty poor
> performance for writes, and I would like to ask if our results
> make sense and if they are the result of some well-known performance
> bottleneck
btrfs qgroup limit has two options, -c and -e, which were missing from the
manpage.
Signed-off-by: Fan Chengniang
---
Documentation/btrfs-qgroup.txt | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/Documentation/btrfs-qgroup.txt b/Documentation/btrfs-qgroup.txt
index 8ce1c27..7e370
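For illustration, a hedged sketch of the two options being documented; the
qgroup id 0/5, the sizes, and the mount point /mnt are assumptions for the
example, not values from the patch:

    # -e: limit the space exclusively assigned to the qgroup
    btrfs qgroup limit -e 512M 0/5 /mnt
    # -c: limit the amount of data after compression
    btrfs qgroup limit -c 1G 0/5 /mnt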
Although btrfsck can rebuild the csum tree, it has the following
problems for end users or sysadmins who are not familiar with btrfs:
1) No brief info on which tree is corrupted.
In fact, after the extent and chunk tree check, we iterate all the
extents and should have a brief view of which tree is corrupted
Before this patch, btrfsck exits repair if some extent reference
can't be repaired.
This is somewhat overkill.
This patch reports the total number of errors and the number fixed/recorded,
and continues in repair mode.
Signed-off-by: Qu Wenruo
---
cmds-check.c | 56 +++
The function check_chunks_and_extents() iterates all the extents, so
we should have a brief view of which tree is corrupted according to the
backrefs of the extents we iterated.
This patch marks a root as corrupted if we find extents whose backrefs
can't be verified.
And after check_chunks_and_extents()
If we find that the other trees have no corrupted/missing extents, it should
be safe to rebuild the csum/extent tree.
This patch does this automatically using the report_root_corrupted()
result.
Signed-off-by: Qu Wenruo
---
cmds-check.c | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)
The original csum tree init code only rebuilds the csum tree, but
doesn't remove the tree block extents in the extent tree, leaving extent
tree repair to fix all the mismatched extents.
This is OK when calling --init-csum manually, but it's confusing if the
csum tree rebuild is executed automatically, and
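For reference, a sketch of the manual invocation being discussed; the device
path is an assumption, and the flag is spelled --init-csum-tree in btrfsck:

    # rebuild the checksum tree of an unmounted filesystem
    btrfsck --init-csum-tree /dev/sdX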
Hello,
Situation: btrfs on LVM on RAID5, formatted with default options,
mounted with noatime,compress=lzo, kernel 3.18.1. While recovering the RAID
after a drive failure, another drive got a couple of SATA link errors,
which corrupted the FS:
http://pp.siedziba.pl/tmp/btrfs-corruption/kern.log.t
On Mon, Jan 12, 2015 at 01:36:27PM +0000, Holger Hoffstätte wrote:
> On Mon, 12 Jan 2015 12:27:12 +0000, Hugo Mills wrote:
>
> > On Mon, Jan 12, 2015 at 11:21:58AM +0000, Hugo Mills wrote:
> >> I've just added a new disk to my main storage filesystem. Running
> >> the obligatory balance to spread the data out
There isn't any real use of the following members of struct btrfs_root,
so delete them.
struct kobject root_kobj;
struct completion kobj_unregister;
Signed-off-by: Anand Jain
---
v2: addressed Filipe's comment and updated the commit message
fs/btrfs/ctree.h | 2 --
fs/btrfs/disk-io.c | 2 --
2 files changed, 4 deletions(-)
Thanks Filipe for pointing this out.
My hands moved too fast; the two-concurrent-calls case didn't come to mind.
Sorry, my mistake. I see the other two members (root_kobj and
kobj_unregister) are still unused.
Anand
On 13/01/2015 00:17, Filipe David Manana wrote:
On Mon, Jan 12, 2015 at 4:08 PM, Anand Jain wrote:
On 2015-01-12 10:35, P. Remek wrote:
Another thing to consider is that the kernel's default I/O scheduler and the
default parameters for that I/O scheduler are almost always suboptimal for SSDs,
and this tends to show far more with BTRFS than anything else. Personally
I've found that using
On 2015-01-12 10:11, Patrik Lundquist wrote:
On 12 January 2015 at 15:54, Austin S Hemmelgarn wrote:
Another thing to consider is that the kernel's default I/O scheduler and the
default parameters for that I/O scheduler are almost always suboptimal for
SSDs, and this tends to show far more
On Mon, Jan 12, 2015 at 4:08 PM, Anand Jain wrote:
> There isn't any real use of the following members of struct btrfs_root,
> so delete them.
>
> struct kobject root_kobj;
> struct completion kobj_unregister;
> struct mutex objectid_mutex;
>
> Signed-off-by: Anand Jain
> ---
> fs/btrfs/ctree.h |
There isn't any real use of the following members of struct btrfs_root,
so delete them.
struct kobject root_kobj;
struct completion kobj_unregister;
struct mutex objectid_mutex;
Signed-off-by: Anand Jain
---
fs/btrfs/ctree.h     | 4 ----
fs/btrfs/disk-io.c   | 3 ---
fs/btrfs/inode-map.c | 2 --
3 files changed, 9 deletions(-)
Hi,
I've been looking at recommended cryptsetup options for Btrfs and I
have one question:
Marc uses "cryptsetup luksFormat --align-payload=1024" directly on a
disk partition and not on e.g. a striped mdraid. Is there a Btrfs
reason for that alignment?
http://marc.merlins.org/perso/btrfs/post_20
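For context on the unit (cryptsetup's --align-payload counts 512-byte
sectors), the alignment that command requests works out as:

    1024 sectors * 512 bytes/sector = 524288 bytes = 512 KiB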
>Another thing to consider is that the kernel's default I/O scheduler and the
>default parameters for that I/O scheduler are almost always suboptimal for
>SSDs, and this tends to show far more with BTRFS than anything else.
>Personally I've found that using the CFQ I/O scheduler with the following
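The specific CFQ tunables are cut off in the excerpt above, so they aren't
reconstructed here; as a generic sketch of the mechanism being discussed,
with the device name sdX as a placeholder:

    # list the available schedulers (the active one is in brackets)
    cat /sys/block/sdX/queue/scheduler
    # switch the device to the CFQ scheduler
    echo cfq > /sys/block/sdX/queue/scheduler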
On 12 January 2015 at 15:54, Austin S Hemmelgarn wrote:
>
> Another thing to consider is that the kernel's default I/O scheduler and the
> default parameters for that I/O scheduler are almost always suboptimal for
> SSDs, and this tends to show far more with BTRFS than anything else.
> Personally
On 2015-01-12 08:51, P. Remek wrote:
> Hello,
>
> we are currently investigating the possibilities and performance limits of
> the Btrfs filesystem. Now it seems we are getting pretty poor
> performance for writes, and I would like to ask if our results
> make sense and if they are the result of some well-known
Hello,
we are currently investigating the possibilities and performance limits of
the Btrfs filesystem. Now it seems we are getting pretty poor
performance for writes, and I would like to ask if our results
make sense and if they are the result of some well-known performance
bottleneck.
Our setup:
Se
On Mon, 12 Jan 2015 12:27:12 +0000, Hugo Mills wrote:
> On Mon, Jan 12, 2015 at 11:21:58AM +0000, Hugo Mills wrote:
>> I've just added a new disk to my main storage filesystem. Running
>> the obligatory balance to spread the data out, it's managed about 14%
>> of the job, and then has gone into some kind of tight loop.
On Mon, Jan 12, 2015 at 11:21:58AM +0000, Hugo Mills wrote:
> I've just added a new disk to my main storage filesystem. Running
> the obligatory balance to spread the data out, it's managed about 14%
> of the job, and then has gone into some kind of tight loop. No chunks
> have been found or balanced
From: Kent Overstreet
Btrfs has been doing bio splitting from btrfs_map_bio(), by checking
device limits as well as calling ->merge_bvec_fn() etc. That is not
necessary any more, because generic_make_request() is now able to
handle arbitrarily sized bios. So clean up unnecessary code paths.
Cc:
I've just added a new disk to my main storage filesystem. Running
the obligatory balance to spread the data out, it's managed about 14%
of the job, and then has gone into some kind of tight loop. No chunks
have been found or balanced in the last 2 hours, and one kworker
thread is pegged at 100%.