Here are some thoughts:
> Assume a CD sized (680MB) /boot
Some distros carry patches for grub that allow booting from Btrfs, so
no separate /boot file system is required. (Fedora does not; Ubuntu --
and therefore probably all Debians -- does.)
> perhaps a 200MB (?) sized EFI partition
Way bigger
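For a sense of scale, a minimal sketch of the layout being discussed (sizes and device names are illustrative, not anyone's actual setup):

  /dev/sda1  512MiB-1GiB   EFI System Partition (FAT32), mounted at /boot/efi
  /dev/sda2  rest of disk  Btrfs, subvolumes for / and /home; /boot can live on the
                           Btrfs root if the distro's grub can read Btrfs, otherwise
                           a small separate /boot is still needed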
> Mitchell wrote:
> With RAID10, there's still only 1 other copy, but the entire "original"
> disk is mirrored to another one, right?
No, full disks are never mirrored in any configuration.
Here's how I understand Btrfs' non-parity redundancy profiles:
single: only a single instance of a file exists
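To make the profiles concrete, a minimal sketch (device names are hypothetical; the profile is chosen at mkfs time or converted later with balance):

  # raid1: two copies of every chunk, each on a different device
  $ mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

  # raid10: still only two copies of each chunk, but each copy is striped
  $ mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde

  # convert the data profile of an existing, mounted filesystem
  $ btrfs balance start -dconvert=raid10 /mnt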
I'm not an expert by any means, but I did a migration like this a few weeks ago.
The most consistent recommendation on this mailing list is to use the
newest kernels and btrfs-progs feasible. I did my migration using
Fedora 24 live media, which at the time was kernel ~4.3. I see your
btrfs-progs i
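(A quick, purely illustrative way to confirm what a live image is actually running:

  $ uname -r          # kernel version
  $ btrfs --version   # btrfs-progs version
)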
/var/media/backups/venus/home/"
Any idea what's happening? I can't find a single example online of
sending a delta over ssh.
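For context, my understanding of the usual shape of an incremental send over ssh is roughly the following; the snapshot paths and host name here are illustrative, not my exact command, and both snapshots must be read-only, with the parent already present on the receiving side:

  $ btrfs send -p /home/.snapshots/2014-09-26 /home/.snapshots/2014-10-02 \
        | ssh backuphost 'btrfs receive /var/media/backups/venus/home/'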
Thanks,
Justin
On Thu, Oct 2, 2014 at 1:53 AM, Hugo Mills wrote:
> On Thu, Oct 02, 2014 at 12:05:39AM -0500, Justin Brown wrote:
>> I'm experi
I'm experimenting with btrfs-send. Previously (2014-09-26), I did my
first btrfs-send on a subvol, and that worked fine. Today, I tried to
send a new snapshot. Unfortunately, I realized part way through that I
forgot to specify the parent to only send a delta, and killed the send
with ^C.
On the d
O that was in progress
when the original failure occurred. Fortunately, it was all data that
could be recovered from other systems, and there wasn't any need to
troubleshoot the errors.
Thanks,
Justin
On Wed, May 28, 2014 at 3:40 PM, Chris Murphy wrote:
>
> On May 28, 2014, at 1
Chris,
Thanks for the tip. I was able to mount the drive as degraded and
recover. Then I deleted the faulty drive, leaving me with the
following array:
Label: 'media'  uuid: 7b7afc82-f77c-44c0-b315-669ebd82f0c5
        Total devices 6 FS bytes used 2.40TiB
        devid    1 size 931.51GiB used 919.88GiB path
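For anyone who finds this thread later, the general shape of that recovery is roughly the following (mount point and device names are illustrative, not my exact commands):

  $ mount -o degraded /dev/sdb /mnt/media    # mount with the failed device missing
  $ btrfs device delete missing /mnt/media   # drop the failed device from the array
  $ btrfs fi show /mnt/media                 # confirm the remaining devices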
Hi,
I have a Btrfs RAID 10 (data and metadata) file system that I believe
suffered a disk failure. In my attempt to replace the disk, I think
that I've made the problem worse and need some help recovering it.
I happened to notice a lot of errors in the journal:
end_request: I/O error, dev dm-11,
Absolutely. I'd like to know the answer to this, as 13 tera will take
a considerable amount of time to back up anywhere, assuming I find a
place. I'm considering rebuilding a smaller raid with newer drives
(it was originally built using sixteen 250 GB Western Digital drives; it's
about eleven years o
ear of things breaking, but
both have been reading from it without issue, other than the noticeable
performance impact the balance seems to be having. Thanks for the
help.
-Justin
On Fri, Feb 28, 2014 at 12:26 AM, Chris Murphy wrote:
>
> On Feb 27, 2014, at 11:13 PM, Chris Murphy wro
I've an 18 tera hardware raid 5 (Areca ARC-1170 w/ 8 3-gig drives) in
need of help. Disk usage (du) shows 13 tera allocated, yet strangely
enough df shows approx. 780 gigs free. It seems, somehow, btrfs
has eaten roughly 4 tera internally. I've run a scrub and a balance with
usage=5, with no success
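For reference, the commands I mean are roughly of this shape (mount point illustrative):

  $ btrfs fi show /mnt/raid                    # raw bytes used per device
  $ btrfs fi df /mnt/raid                      # allocation vs. usage per profile
  $ btrfs scrub start /mnt/raid
  $ btrfs balance start -dusage=5 /mnt/raid    # reclaim mostly-empty data chunks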
Chris,
Thanks for the reply.
> Total includes metadata.
It still doesn't seem to add up:
~$ btrfs fi df t
Data, single: total=8.00MiB, used=0.00
Data, RAID6: total=2.17TiB, used=2.17TiB
System, single: total=4.00MiB, used=0.00
System, RAID6: total=9.56MiB, used=192.00KiB
Metadata, single: total
Hello,
I'm finishing up my data migration to Btrfs, and I've run into an
error that I'm trying to explore in more detail. I'm using Fedora 20
with Btrfs v0.20-rc1.
My array is a 5-disk (4x 1TB and 1x 2TB) RAID 6 (-d raid6 -m raid6). I
completed my rsync to this array, and I figured that it would