On 25/03/14 07:15, Slava Barinov wrote:
Hello,
I've been using a single-drive btrfs for some time, and when free space
became too low I added an additional drive and rebalanced the FS with
RAID0 data and RAID1 System and Metadata storage.
Now I have the following configuration:
# btrfs
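For reference, an add-and-convert like the one described above is usually done with something along these lines; the device name and mount point here are placeholders:

  # btrfs device add /dev/sdc /mnt
  # btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt

Depending on the btrfs-progs version, the System chunks may also need an explicit -sconvert=raid1 together with -f.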
On 25/03/14 03:29, Marc MERLIN wrote:
On Tue, Mar 25, 2014 at 01:11:43AM +, Martin wrote:
There's a big thread a short while ago about using parity across
n-devices where the parity is spread such that you can have 1, 2, and up
to 6 redundant devices. Well beyond just raid5 and raid6:
Hendrik Friedel posted on Mon, 24 Mar 2014 21:52:09 +0100 as excerpted:
But regardless of my experience with my own usage pattern, I suspect
that with reasonable monitoring, you'll eventually become familiar with
how fast the chunks are allocated and possibly with what sort of
actions beyond
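One simple way to do that monitoring is btrfs filesystem df, which reports allocated (total) versus used space per chunk type; the mount point and profiles below are placeholders:

  # btrfs filesystem df /mnt
  Data, RAID0: total=..., used=...
  Metadata, RAID1: total=..., used=...

Watching how quickly the total figures grow relative to used is what tells you about chunk allocation.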
This patch uses the new BTRFS_SUBVOL_CREATE_SUBVOLID flag to create snapshots
by subvolume ID.
usage: btrfs subvolume snapshot [-r] [-q qgroupid] -s subvolid dest/name
Since we don't have a name for the source snapshot, the complete path to
the destination must be specified.
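A hypothetical invocation with the patch applied might look like this; the subvolume ID 257 and the destination path are made up for illustration, and the -s option exists only with this proposed patch:

  # btrfs subvolume snapshot -r -s 257 /mnt/snapshots/subvol-257-backup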
A previous version
On Tue, Mar 25, 2014 at 09:37:13AM -0400, Jeff Mahoney wrote:
This patch uses the new BTRFS_SUBVOL_CREATE_SUBVOLID flag to create snapshots
by subvolume ID.
usage: btrfs subvolume snapshot [-r] [-q qgroupid] -s subvolid
dest/name
Since we don't have a name for the source snapshot, the
On 3/25/14, 9:48 AM, Hugo Mills wrote:
On Tue, Mar 25, 2014 at 09:37:13AM -0400, Jeff Mahoney wrote:
This patch uses the new BTRFS_SUBVOL_CREATE_SUBVOLID flag to
create snapshots by subvolume ID.
usage: btrfs subvolume snapshot [-r] [-q
On 25 March 2014 at 12:13, Martin wrote:
On 25/03/14 01:49, Marc MERLIN wrote:
It took 18H to rm it in 3 tries:
And isn't *the 512kByte raid chunk* going to give you horrendous write
amplification?! For example, rm updates a few bytes in one 4kByte
metadata block and the system then has to
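Roughly speaking, on md raid5 a 4kByte update inside a 512kByte-chunk stripe cannot be written in isolation: the old data block and the old parity block have to be read back, the parity recomputed, and both written out again, so one small logical write turns into several device I/Os; a wider partial-stripe write may instead read the rest of the stripe to reconstruct parity.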
The scrub_progress_cycle thread runs asynchronously but takes a mutex
while reading shared data. This patch disables cancelability for the
brief period while the locks are held, to make sure they are unlocked
before the thread is canceled.
scrub_write_progress gets called from scrub_progress_cycle in
Martin posted on Tue, 25 Mar 2014 00:57:05 + as excerpted:
https://btrfs.wiki.kernel.org/index.php/Mount_options
autodefrag (since [kernel] 3.0)
Will detect random writes into existing files and kick off background
defragging. It is well suited to bdb or sqlite databases, but not
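Turning it on is just a mount option, e.g. on a remount (the mount point is a placeholder):

  # mount -o remount,autodefrag /mnt/data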
On Tue, Mar 25, 2014 at 12:13:50PM +, Martin wrote:
On 25/03/14 01:49, Marc MERLIN wrote:
I had a tree with some hundreds of thousands of files (less than 1 million)
on top of md raid5.
It took 18H to rm it in 3 tries:
I ran another test after typing the original Email:
Brendan Hide posted on Tue, 25 Mar 2014 08:42:17 +0200 as excerpted:
raid0 will always distribute data to each disk relatively equally.
There are exceptions, of course. The way to have it better utilise the
disk space is to use either single (which won't get the same
performance as raid0)
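The profile change itself is a balance with a convert filter, roughly like this (mount point assumed):

  # btrfs balance start -dconvert=single /mnt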
E V posted on Tue, 25 Mar 2014 14:56:08 -0400 as excerpted:
Plenty of metadata space and such:
Metadata, RAID1: total=134.00GiB, used=87.27GiB
Metadata, single: total=8.00MiB, used=0.00
there just seem to be some blocks that the profile convert fails on but that
a regular balance processes fine.
What about unallocated space, that is, the difference between total space
and used space, per device, as reported by btrfs filesystem show?
btrfs filesystem show gives:
Total devices 3 FS bytes used 60.22TiB
devid1 size 30.01TiB used 17.61TiB path /dev/sdb
devid2 size
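Taking the devid 1 line above as a worked example, the unallocated space on that device is its size minus its used (allocated) space: 30.01TiB - 17.61TiB, i.e. roughly 12.4TiB not yet allocated to any chunk.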
Hi,
Well, given the relative immaturity of btrfs as a filesystem at this
point in its lifetime, I think it's acceptable/tolerable. However, for a
filesystem feted[1] to ultimately replace the ext* series as an assumed
Linux default, I'd definitely argue that the current situation should be
On Tue, Mar 25, 2014 at 09:03:26PM +0100, Hendrik Friedel wrote:
Hi,
Well, given the relative immaturity of btrfs as a filesystem at this
point in its lifetime, I think it's acceptable/tolerable. However, for a
filesystem feted[1] to ultimately replace the ext* series as an assumed
Linux
E V posted on Tue, 25 Mar 2014 15:47:45 -0400 as excerpted:
System, single: total=4.00MiB, used=0.00
BTW, that and the corresponding Metadata, single: entries, both
used=0.00, are artifacts of the original mkfs.btrfs, and can be safely
removed via balance. I've started doing that here right
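The usual way to drop those is a filtered balance that only touches completely empty chunks, something like (mount point assumed):

  # btrfs balance start -musage=0 /mnt

An analogous -susage=0 pass, which may require -f since it explicitly touches System chunks, takes care of the empty System, single entry.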
Hi all,
In the process to relicense btrfs-progs to LGPLv2.1[1], we have 40 of
114 approvals so far. I have sent mass-emails to everyone with more than
1 commit, and am now sending individual emails down the list to people I
haven't heard from.
(I'm waiting to tackle the single-commit people
Hugo Mills posted on Tue, 25 Mar 2014 20:10:20 + as excerpted:
Did you mean fated: intended, destined?
No, I meant feted, although I understand in Europe the first e would
likely have a circumflex (fêted), but we US-ASCII folks don't have such a
thing easily available, so unless I copy/paste
On Tue, Mar 25, 2014 at 09:28:20PM +, Duncan wrote:
Hugo Mills posted on Tue, 25 Mar 2014 20:10:20 + as excerpted:
Did you mean fated: intended, destined?
No, I meant feted, although I understand in Europe the first e would
likely have a circumflex (fêted), but we US-ASCII folks