Duncan 1i5t5.duncan at cox.net writes:
- How can I salvage this situation and convert to raid1?
Unfortunately I have few spare drives left. Not enough to contain
4.7TiB of data... :(
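For reference, btrfs can usually do this conversion in place with a balance, no spare drives needed, as long as there is enough free space for the second copy. A sketch, assuming the filesystem is mounted at /mnt (the mount point is an example):

```shell
# In-place profile conversion: rewrite existing chunks as raid1.
# Needs enough unallocated space on a second device for the mirrors.
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt

# Afterwards, verify the new profiles:
btrfs filesystem df /mnt
```

The balance can be interrupted and resumed, but with 4.7TiB of data it will take a while.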
[OK, this goes a bit philosophical, but it's something to think about...]
...
Anyway, at least
Hi all,
I'm getting some strange errors and I need some help diagnosing where the
problem is.
You can see from below that the error is csum failed ino 5641.
This is a new SSD that is running in raid1. When I first noticed the error (on
both drives) I copied all the data off the drives,
On Fri, May 02, 2014 at 10:20:03AM +, Duncan wrote:
The raid5/6 page (which I didn't otherwise see conveniently linked, I dug
it out of the recent changes list since I knew it was
It's linked off
https://btrfs.wiki.kernel.org/index.php/FAQ#Can_I_use_RAID.5B56.5D_on_my_Btrfs_filesystem.3F
On 05/02/2014 03:21 PM, Chris Murphy wrote:
On May 2, 2014, at 2:23 AM, Duncan 1i5t5.dun...@cox.net wrote:
Something tells me btrfs replace (not device replace, simply
replace) should be moved to btrfs device replace…
The syntax for btrfs device is different though; replace is like
On May 3, 2014, at 10:31 AM, Austin S Hemmelgarn ahferro...@gmail.com wrote:
On 05/02/2014 03:21 PM, Chris Murphy wrote:
On May 2, 2014, at 2:23 AM, Duncan 1i5t5.dun...@cox.net wrote:
Something tells me btrfs replace (not device replace, simply
replace) should be moved to btrfs device
On May 3, 2014, at 1:09 PM, Chris Murphy li...@colorremedies.com wrote:
On May 3, 2014, at 10:31 AM, Austin S Hemmelgarn ahferro...@gmail.com wrote:
On 05/02/2014 03:21 PM, Chris Murphy wrote:
Btrfs raid1 with 3+ devices is unique as far as I can tell. It is
something like raid1 (2
# btrfs scrub status /mnt/backup/
scrub status for 97972ab2-02f7-42dd-a23b-d92efbf9d9b5
	scrub started at Thu May 1 14:29:57 2014 and finished after 97253 seconds
	total bytes scrubbed: 1.11TB with 13684 errors
	error details: read=13684
	corrected errors: 2113,
Are there any plans for a feature like the ZFS copies= option?
I'd like to be able to set copies= separately for data and metadata. In most
cases RAID-1 provides adequate data protection but I'd like to have RAID-1 and
copies=2 for metadata so that if one disk dies and another has some bad
Russell Coker posted on Sun, 04 May 2014 12:16:54 +1000 as excerpted:
Are there any plans for a feature like the ZFS copies= option?
I'd like to be able to set copies= separately for data and metadata. In
most cases RAID-1 provides adequate data protection but I'd like to have
RAID-1 and
Another question I just came up with.
If I have historical snapshots like so:
backup
backup.sav1
backup.sav2
backup.sav3
If I want to copy them up to another server, can btrfs send/receive
let me copy all of them to another btrfs pool while keeping the
duplicated block relationship between all of
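Incremental send is the usual way to preserve that sharing: each send after the first names a parent snapshot with -p, so only the delta is transferred and the receiver reconstructs the shared extents. A sketch with placeholder paths and host (send requires the snapshots to be read-only):

```shell
# Full send of the oldest snapshot, then incremental sends of the rest,
# each relative to the previously transferred one.
btrfs send /pool/backup.sav3 | ssh destserver 'btrfs receive /pool2'
btrfs send -p /pool/backup.sav3 /pool/backup.sav2 \
    | ssh destserver 'btrfs receive /pool2'
btrfs send -p /pool/backup.sav2 /pool/backup.sav1 \
    | ssh destserver 'btrfs receive /pool2'
```

If the existing snapshots are writable, you would first take read-only snapshots of them (btrfs subvolume snapshot -r) and send those instead.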
(more questions I'm asking myself while writing my talk slides)
I know Suse uses btrfs to roll back filesystem changes.
So I understand how you can take a snapshot before making a change, but
not how you revert to that snapshot without rebooting or using rsync,
How do you do a pivot-root like
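One common rollback pattern, as a sketch (paths and the subvolume ID are examples, and this is not necessarily what SUSE's tooling does): snapshot before the change, then point the default subvolume at the snapshot if you need to back out.

```shell
# Before the risky change (assumes / is itself a btrfs subvolume):
btrfs subvolume snapshot / /.snapshots/pre-update

# ...the change goes wrong...

# Find the snapshot's subvolume ID, then make it the default:
btrfs subvolume list /            # note the ID, e.g. 258
btrfs subvolume set-default 258 /
```

set-default only takes effect at the next mount of the filesystem, so switching the live root filesystem still needs a remount or reboot; a pivot-root-style live switch is the part btrfs itself doesn't provide.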
So, I was thinking. In the past, I've done this:
mkfs.btrfs -d raid0 -m raid1 -L btrfs_raid0 /dev/mapper/raid0d*
My rationale at the time was that if I lose a drive, I'll still have
full metadata for the entire filesystem and only be missing some files.
If I have raid1 with 2 drives, I should end up with
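To confirm how the profiles actually landed after creation, something like the following works (device names are examples):

```shell
# Create with striped data, mirrored metadata, then verify:
mkfs.btrfs -d raid0 -m raid1 -L btrfs_raid0 \
    /dev/mapper/raid0d1 /dev/mapper/raid0d2
mount /dev/mapper/raid0d1 /mnt/btrfs_raid0

# Should report Data as RAID0 and Metadata (and System) as RAID1:
btrfs filesystem df /mnt/btrfs_raid0
```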
Is there any functional difference between
mount -o subvol=usr /dev/sda1 /usr
and
mount /dev/sda1 /mnt/btrfs_pool
mount -o bind /mnt/btrfs_pool/usr /usr
?
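One observable difference you can check for yourself, as a sketch (assuming /dev/sda1 holds a subvolume named usr): with subvol= only that subvolume is mounted and the top level need not stay mounted at all, whereas the bind-mount variant keeps the whole pool, including every other subvolume, mounted under /mnt/btrfs_pool.

```shell
# Direct subvolume mount; the mount options record the subvolume:
mount -o subvol=usr /dev/sda1 /usr
findmnt -no OPTIONS /usr    # options should include subvol=/usr
```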
Thanks,
Marc
--
A mouse is a device used to point at the xterm you want to type in - A.S.R.
Microsoft is to operating systems