On Apr 20, 2014, at 11:48 PM, Marc MERLIN m...@merlins.org wrote:
On Sun, Apr 20, 2014 at 11:39:22PM -0600, Chris Murphy wrote:
On Apr 20, 2014, at 1:46 PM, Marc MERLIN m...@merlins.org wrote:
Can you help me design this right?
Long story short, I'm wondering if I can use btrfs send to
I experimented with RAID5, but now I want to get rid of it:
$ sudo btrfs balance start -dconvert=raid1,soft -v /
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x300): converting, target=16, soft is on
ERROR: error during balancing '/' - No space left on device
There may be
Chris Murphy posted on Sun, 20 Apr 2014 14:26:37 -0600 as excerpted:
On Apr 20, 2014, at 2:18 PM, Chris Murphy li...@colorremedies.com wrote:
What is unknown?
/dev/sd[bcd] are 2GB, 3GB, and 4GB respectively.
[root@localhost ~]# mkfs.btrfs -d raid0 -m raid1 /dev/sd[bcd]
[...]
In utils.c, zero_end is used as a parameter, so it should not be forced to 1.
In mkfs.c, zero_end is set to 1 or 0 (-b) at the beginning, so it should not
be forced to 1 unconditionally.
Signed-off-by: Li Yang liyang.f...@cn.fujitsu.com
---
mkfs.c |1 -
utils.c |1 -
2 files changed, 0 insertions(+), 2 deletions(-)
For system chunk array,
We copy a disk_key and a chunk item each time,
so there should be enough space to hold both of them,
not only the chunk item.
Signed-off-by: Gui Hecheng guihc.f...@cn.fujitsu.com
---
volumes.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git
For RAID0,5,6,10,
For system chunk, there shouldn't be too many stripes to
make a btrfs_chunk that exceeds BTRFS_SYSTEM_CHUNK_ARRAY_SIZE
For data/meta chunk, there shouldn't be too many stripes to
make a btrfs_chunk that exceeds a leaf.
Signed-off-by: Gui Hecheng guihc.f...@cn.fujitsu.com
---
For system chunk array,
We copy a disk_key and a chunk item each time,
so there should be enough space to hold both of them,
not only the chunk item.
Signed-off-by: Gui Hecheng guihc.f...@cn.fujitsu.com
---
fs/btrfs/volumes.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git
On 04/20/2014 04:18 PM, Chris Murphy wrote:
kernel 3.15.0-0.rc1.git0.1.fc21.x86_64
btrfs-progs v3.14
One 80GB virtual disk, formatted btrfs by installer and Fedora Rawhide
installed to it. Post-install I see:
[root@localhost ~]# btrfs fi show
Label: 'fedora' uuid:
We have a big problem, but it involves a lot of moving parts, so I'm
going to
explain all of the parts, and then the problem, and then what I am doing
to fix
the problem. I want you guys to check my work to make sure I'm not missing
something so when I come back from paternity leave in a few
Has anyone encountered this problem? And does anyone have a solution to it?
Today, I have changed the compress max size from 512KB to 16KB:
in the cow_file_range_async function:
cur_end = min(end, start + 512 * 1024 - 1);
was changed to:
cur_end = min(end, start + 16 * 1024 - 1);
This bug can be reproduced almost
Kernel 3.15.0-rc2, btrfs-progs 3.14.1
While doing some minor package updates my btrfs root partition [*]
decided to corrupt itself. There was no system crash, although I had
plenty of these (due to a USB-related regression) in recent weeks that
resulted in no trouble.
First only one of a
Alright, turns out the partition does actually mount on 3.15-rc2 (error
messages remain, of course).
But systemd will fail to continue booting as /bin/mount returns exit
status 32 and / thus ends as ro, yet can be manually remounted as rw.
Another error message I've spotted with 3.15 is
Adam Brenner posted on Sun, 20 Apr 2014 21:56:10 -0700 as excerpted:
So ... BTRFS, at this point in time, does not actually stripe the data
across N devices/blocks for an aggregated performance increase
(both read and write)?
What Chris says is correct, but just in case it's unclear as
On Mon, Apr 21, 2014 at 12:08:30AM -0600, Chris Murphy wrote:
I see hard links as completely different from subvolumes/snapshots/reflinks.
Three hard links for a file all point to one file; they aren't four unique
files. But with the latter, three reflinks or snapshots are independent
Josef Bacik posted on Mon, 21 Apr 2014 07:55:46 -0700 as excerpted:
[Near the bottom, point #4 immediately before conclusion.]
You still have to post-process merge to make sure, but you are far more
likely to merge everything in real-time since you are only changing the
sequence number every
Duncan posted on Mon, 21 Apr 2014 05:44:54 + as excerpted:
Marc MERLIN posted on Sun, 20 Apr 2014 12:59:01 -0700 as excerpted:
I was looking at using qgroups for my backup server, which will be
filled with millions of files in subvolumes with snapshots.
I read a warning that quota
Chris Mason posted on Mon, 21 Apr 2014 08:41:34 -0400 as excerpted:
3.15 has this commit; it's the cause of the unknown.
[Since I already replied to thread.]
That would explain why I haven't seen it yet. I'm still running kernel
3.14 as I'm trying to catch up on some other stuff before I
On Mon, Apr 21, 2014 at 10:45:43PM +, Duncan wrote:
New information. See Josef Bacik's new thread:
Very good info, thank you.
It looks, however, like the use case I'm looking at (mostly write-once backups
with snapshots) should not be affected.
I'll give it a shot, and if performance
Andreas Reis posted on Mon, 21 Apr 2014 21:13:16 +0200 as excerpted:
Alright, turns out the partition does actually mount on 3.15-rc2 (error
messages remain, of course).
But systemd will fail to continue booting as /bin/mount returns exit
status 32 and / thus ends as ro, yet can be manually
Arjen Nienhuis posted on Mon, 21 Apr 2014 08:32:56 +0200 as excerpted:
I experimented with RAID5, but now I want to get rid of it:
$ sudo btrfs balance start -dconvert=raid1,soft -v /
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x300): converting, target=16, soft is on