Here are some thoughts:
> Assume a CD sized (680MB) /boot
Some distros carry patches for GRUB that allow booting from Btrfs, so
no separate /boot file system is required. (Fedora does not; Ubuntu --
and therefore probably all Debian derivatives -- does.)
> perhaps a 200MB (?) sized EFI partition
Way bigger
> Mitchell wrote:
> With RAID10, there's still only 1 other copy, but the entire "original"
> disk is mirrored to another one, right?
No, full disks are never mirrored in any configuration.
Here's how I understand Btrfs' non-parity redundancy profiles:
single: only a single instance of each chunk
DUP: two copies of each chunk, but both copies may sit on the same device
RAID1: two copies of each chunk, always on two different devices
RAID10: two copies of each chunk on different devices, with striping on top
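As a rough capacity model (my own simplification, not authoritative: Btrfs allocates space chunk by chunk, and `btrfs fi usage` is the real source of truth, especially with unequal devices), the duplicating profiles halve usable space no matter how the chunk copies are scattered across devices:

```shell
# Rough usable capacity for N equal devices of SIZE GiB under a profile.
# Simplified model only -- real Btrfs allocation is per-chunk.
usable_gib() {
  profile=$1 ndev=$2 size=$3
  total=$((ndev * size))
  case $profile in
    single|raid0)     echo "$total" ;;          # one copy of each chunk
    dup|raid1|raid10) echo $((total / 2)) ;;    # two copies of each chunk
    *) echo "unknown profile: $profile" >&2; return 1 ;;
  esac
}
usable_gib raid10 4 1000   # 4x 1000 GiB in RAID10 -> 2000 GiB usable
```

The point is that the halving comes from "two copies of each chunk", not from any disk being a dedicated mirror of another.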
I'm not an expert by any means, but I did a migration like this a few weeks ago.
The most consistent recommendation on this mailing list is to use the
newest kernels and btrfs-progs feasible. I did my migration using
Fedora 24 live media, which at the time was kernel ~4.3. I see your
btrfs-progs
On Thu, Oct 2, 2014 at 1:53 AM, Hugo Mills h...@carfax.org.uk wrote:
On Thu, Oct 02, 2014 at 12:05:39AM -0500, Justin Brown wrote:
I'm experimenting with btrfs-send. Previously (2014-09-26), I did my
first btrfs-send on a subvol, and that worked fine. Today, I tried to
send a new snapshot. Unfortunately, I realized part way through that I
forgot to specify the parent to only send a delta, and killed the send
with ^C.
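For reference, the delta-only form looks something like this (a sketch: the snapshot paths are hypothetical, both snapshots must be read-only, and the parent named with -p must already exist on the receiving side; the pipeline is built as a string here rather than executed):

```shell
# Hypothetical snapshot paths -- substitute your own.
parent=/pool/.snapshots/2014-09-26
child=/pool/.snapshots/2014-10-02
dest=/backup
# -p streams only the difference between parent and child.
cmd="btrfs send -p $parent $child | btrfs receive $dest"
echo "$cmd"   # run the pipeline as root
```

After an interrupted send it is also safest to delete any partially received snapshot on the target before retrying.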
Hi,
I have a Btrfs RAID 10 (data and metadata) file system that I believe
suffered a disk failure. In my attempt to replace the disk, I think
that I've made the problem worse and need some help recovering it.
I happened to notice a lot of errors in the journal:
end_request: I/O error, dev
Chris,
Thanks for the tip. I was able to mount the drive as degraded and
recover. Then I deleted the faulty drive, leaving me with the
following array:
following array:
Label: 'media'  uuid: 7b7afc82-f77c-44c0-b315-669ebd82f0c5
	Total devices 6 FS bytes used 2.40TiB
	devid    1 size 931.51GiB used 919.88GiB
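Spelled out as commands, the recovery sequence described above looks roughly like this (a sketch with hypothetical device and mount paths; the commands are built as strings here rather than executed):

```shell
# Hypothetical device (/dev/sdb) and mount point -- substitute your own.
mnt=/mnt/media
step1="mount -o degraded /dev/sdb $mnt"   # mount without the failed disk
step2="btrfs device delete missing $mnt"  # drop the dead device from the array
step3="btrfs balance start $mnt"          # optionally rebalance afterwards
printf '%s\n' "$step1" "$step2" "$step3"
```

The device delete already re-replicates affected chunks onto the remaining disks; the trailing balance is optional housekeeping.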
At 11:19 AM, Justin Brown otakujunct...@gmail.com wrote:
terra:/var/lib/nobody/fs/ubfterra # btrfs fi df .
Data, single: total=17.58TiB, used=17.57TiB
System, DUP: total=8.00MiB, used=1.93MiB
System, single: total=4.00MiB, used=0.00
Metadata, DUP: total=392.00GiB, used=33.50GiB
Metadata
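That Metadata line (total=392.00GiB allocated, only 33.50GiB used) is the classic over-allocation pattern, and a usage-filtered balance can hand most of those nearly empty metadata chunks back to the allocator. A sketch, using the mount point from the prompt above (built as a string for illustration, not executed):

```shell
mnt=/var/lib/nobody/fs/ubfterra
# Rewrite only metadata chunks that are <=10% full, freeing their allocation.
cmd="btrfs balance start -musage=10 $mnt"
echo "$cmd"   # run as root; raise the usage threshold if little is reclaimed
```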
Absolutely. I'd like to know the answer to this, as 13 TB will take
a considerable amount of time to back up anywhere, assuming I find a
place. I'm considering rebuilding a smaller RAID with newer drives
(it was originally built using sixteen 250 GB Western Digital drives;
it's about eleven years
Hello,
I'm finishing up my data migration to Btrfs, and I've run into an
error that I'm trying to explore in more detail. I'm using Fedora 20
with Btrfs v0.20-rc1.
My array is a 5 disk (4x 1TB and 1x 2TB) RAID 6 (-d raid6 -m raid6). I
completed my rsync to this array, and I figured that it would
Chris,
Thanks for the reply.
Total includes metadata.
It still doesn't seem to add up:
~$ btrfs fi df t
Data, single: total=8.00MiB, used=0.00
Data, RAID6: total=2.17TiB, used=2.17TiB
System, single: total=4.00MiB, used=0.00
System, RAID6: total=9.56MiB, used=192.00KiB
Metadata, single:
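One likely source of the "doesn't add up" feeling is logical-versus-raw accounting: `btrfs fi df` reports logical data, while with RAID6 each full-width stripe on n devices carries n-2 data strips plus 2 parity strips, so D TiB of data consumes roughly D*n/(n-2) TiB of raw disk. A quick sketch (simplified: assumes full-width stripes and equal devices, which this mixed 1TB/2TB array doesn't quite satisfy):

```shell
# Raw disk consumed by D TiB of RAID6 data striped across n devices
# (n-2 data strips + 2 parity strips per stripe; simplified model).
raw_tib() { awk -v d="$1" -v n="$2" 'BEGIN { printf "%.2f", d * n / (n - 2) }'; }
raw_tib 2.17 5; echo   # prints 3.62 -- raw TiB behind 2.17 TiB of data
```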