Hi Marc,
RAID0 is not redundant in any way. See inline below.
On 2014/05/04 01:27 AM, Marc MERLIN wrote:
> So, I was thinking. In the past, I've done this:
> mkfs.btrfs -d raid0 -m raid1 -L btrfs_raid0 /dev/mapper/raid0d*
> My rationale at the time was that if I lose a drive, I'll still have
> full metadata for the entire filesystem and only missing files.
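A quick way to check what a filesystem like that is actually doing is btrfs filesystem df, which prints one line per allocation type; a minimal sketch, with a hypothetical mountpoint:

# Expect lines like "Data, RAID0: ..." and "Metadata, RAID1: ..."
btrfs filesystem df /mnt/btrfs_raid0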
(more questions I'm asking myself while writing my talk slides)
I know SUSE uses btrfs to roll back filesystem changes.
So I understand how you can take a snapshot before making a change, but
not how you revert to that snapshot without rebooting or using rsync.
How do you do a pivot-root-like mo…
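For context, the snapshot-before-change half is straightforward; it is the revert that needs a subvolume swap plus a remount or reboot, which is what the question is driving at. A sketch with hypothetical paths:

# Before the change: keep a read-only snapshot of the root subvolume
btrfs subvolume snapshot -r /mnt/pool/root /mnt/pool/root.pre
# To revert: move the modified subvolume aside, then put a writable
# snapshot of the saved state back in its place
mv /mnt/pool/root /mnt/pool/root.bad
btrfs subvolume snapshot /mnt/pool/root.pre /mnt/pool/root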
So, I was thinking. In the past, I've done this:
mkfs.btrfs -d raid0 -m raid1 -L btrfs_raid0 /dev/mapper/raid0d*
My rationale at the time was that if I lose a drive, I'll still have
full metadata for the entire filesystem and only missing files.
If I have raid1 with 2 drives, I should end up with…
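For what it's worth, the surviving raid1 metadata is what lets such a filesystem come up at all with a device missing; a sketch, device and mountpoint hypothetical:

# Mount read-only in degraded mode: raid0 data that lived on the dead
# device is gone, but the raid1 metadata is still complete
mount -o degraded,ro /dev/mapper/raid0d1 /mnt/btrfs_raid0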
Is there any functional difference between
mount -o subvol=usr /dev/sda1 /usr
and
mount /dev/sda1 /mnt/btrfs_pool
mount -o bind /mnt/btrfs_pool/usr /usr
?
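One observable difference: the subvol= form needs no second mountpoint, while the bind variant leaves the whole pool (every subvolume and snapshot) exposed under /mnt/btrfs_pool as well. A sketch for comparing the two in the mount table:

# Variant 1: mount the subvolume directly
mount -o subvol=usr /dev/sda1 /usr
findmnt /usr        # SOURCE is typically shown as /dev/sda1[/usr]
umount /usr
# Variant 2: mount the pool, then bind the subvolume's directory
mount /dev/sda1 /mnt/btrfs_pool
mount -o bind /mnt/btrfs_pool/usr /usr
findmnt /usr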
Thanks,
Marc
--
"A mouse is a device used to point at the xterm you want to type in" - A.S.R.
Microsoft is to operating systems…
Another question I just came up with.
If I have historical snapshots like so:
backup
backup.sav1
backup.sav2
backup.sav3
If I want to copy them up to another server, can btrfs send/receive
let me copy all of them to another btrfs pool while keeping the
duplicated block relationship between all of them?
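btrfs send -p is the piece that preserves that sharing on the receiving side: send the oldest snapshot whole, then each later one relative to its predecessor. A sketch assuming backup.sav3 is the oldest, the snapshots are read-only (send requires that), and a hypothetical target host otherhost with a pool at /pool2:

btrfs send /pool/backup.sav3 | ssh otherhost btrfs receive /pool2
btrfs send -p /pool/backup.sav3 /pool/backup.sav2 \
        | ssh otherhost btrfs receive /pool2
btrfs send -p /pool/backup.sav2 /pool/backup.sav1 \
        | ssh otherhost btrfs receive /pool2
# "backup" itself must be snapshotted read-only before it can be sent
btrfs subvolume snapshot -r /pool/backup /pool/backup.now
btrfs send -p /pool/backup.sav1 /pool/backup.now \
        | ssh otherhost btrfs receive /pool2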
Russell Coker posted on Sun, 04 May 2014 12:16:54 +1000 as excerpted:
> Are there any plans for a feature like the ZFS copies= option?
>
> I'd like to be able to set copies= separately for data and metadata. In
> most cases RAID-1 provides adequate data protection but I'd like to have
> RAID-1 and copies=2 for metadata so that if one disk dies and another
> has some bad sectors…
Are there any plans for a feature like the ZFS copies= option?
I'd like to be able to set copies= separately for data and metadata. In most
cases RAID-1 provides adequate data protection but I'd like to have RAID-1 and
copies=2 for metadata so that if one disk dies and another has some bad
sectors…
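For comparison, the ZFS knob being asked about, next to the closest thing btrfs offers as of this thread (whole-filesystem allocation profiles); there is no per-subvolume copies= and no way to stack an extra per-device copy on top of raid1:

# ZFS: two copies of every block, on top of the pool's own redundancy
zfs set copies=2 tank/fs
# btrfs: redundancy is a per-filesystem profile chosen at mkfs time
# (or converted later with balance), not a per-dataset setting
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc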
# btrfs scrub status /mnt/backup/
scrub status for 97972ab2-02f7-42dd-a23b-d92efbf9d9b5
scrub started at Thu May 1 14:29:57 2014 and finished after 97253 seconds
total bytes scrubbed: 1.11TB with 13684 errors
error details: read=13684
corrected errors: 2113, uncorr…
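For reference, that summary comes from a two-step workflow, and 97253 seconds is roughly 27 hours of scrubbing:

btrfs scrub start /mnt/backup/     # runs in the background
btrfs scrub status /mnt/backup/    # poll progress / final summary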
On May 3, 2014, at 1:09 PM, Chris Murphy wrote:
>
> On May 3, 2014, at 10:31 AM, Austin S Hemmelgarn wrote:
>
>> On 05/02/2014 03:21 PM, Chris Murphy wrote:
>>>
>>> Btrfs raid1 with 3+ devices is unique as far as I can tell. It is
>>> something like raid1 (2 copies) + linear/concat. But that…
On 05/03/2014 03:09 PM, Chris Murphy wrote:
>
> On May 3, 2014, at 10:31 AM, Austin S Hemmelgarn wrote:
>
>> On 05/02/2014 03:21 PM, Chris Murphy wrote:
>>>
>>> On May 2, 2014, at 2:23 AM, Duncan <1i5t5.dun...@cox.net> wrote:
>>>> Something tells me btrfs replace (not device replace, simply
>>>> replace) should be moved to btrfs device replace…
Hi Josef,
this problem could not happen when find_free_extent() was receiving a
transaction handle (which was changed in "Btrfs: avoid starting a
transaction in the write path"), correct? Because it would have used
the passed transaction handle to do the chunk allocation, and thus
would not need to…
Hi Josef,
how about aborting the transaction also in place that you print:
"umm, got %d back from search, was looking for %llu"
You abort in case ret<0, otherwise the code just proceeds with
extent_slot = path->slots[0];
which can't be right in that case.
Thanks,
Alex.
On Mon, Mar 17, 2014 at 3:5…
On May 3, 2014, at 10:31 AM, Austin S Hemmelgarn wrote:
> On 05/02/2014 03:21 PM, Chris Murphy wrote:
>>
>> On May 2, 2014, at 2:23 AM, Duncan <1i5t5.dun...@cox.net> wrote:
>>>
>>> Something tells me btrfs replace (not device replace, simply
>>> replace) should be moved to btrfs device replace…
On 05/02/2014 03:21 PM, Chris Murphy wrote:
>
> On May 2, 2014, at 2:23 AM, Duncan <1i5t5.dun...@cox.net> wrote:
>>
>> Something tells me btrfs replace (not device replace, simply
>> replace) should be moved to btrfs device replace…
>
> The syntax for "btrfs device" is different though; replace…
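The command under discussion, with hypothetical devices and mountpoint; today it is a top-level subcommand rather than part of "btrfs device":

btrfs replace start /dev/sdb /dev/sdc /mnt
btrfs replace status /mnt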
On Fri, May 02, 2014 at 10:20:03AM +, Duncan wrote:
> The raid5/6 page (which I didn't otherwise see conveniently linked, I dug
It's linked off
https://btrfs.wiki.kernel.org/index.php/FAQ#Can_I_use_RAID.5B56.5D_on_my_Btrfs_filesystem.3F
> it out of the recent changes list since I knew it was…
Hi Jaap,
This patch http://www.spinics.net/lists/linux-btrfs/msg33025.html made
it into 3.15 RC2 so if you're willing to build your own RC kernel you
may have better luck with scrub in 3.15. The patch only scrubs the
data blocks in RAID5/6 so hopefully your parity blocks are intact. I'm
not sure i…
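A minimal sketch of building that RC, assuming a typical self-built-kernel setup; adjust the config handling for your distro:

git clone --depth 1 --branch v3.15-rc2 \
        git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
cd linux
cp /boot/config-"$(uname -r)" .config    # start from the running config
make olddefconfig                        # accept defaults for new options
make -j"$(nproc)"
sudo make modules_install install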
Hi all,
I'm getting some strange errors and I need some help diagnosing where the
problem is.
You can see from below that the error is "csum failed ino 5641".
This is a new SSD that is running in raid1. When I first noticed the error (on
both drives) I copied all the data off the drives, reformatted…
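A first pass at narrowing down whether the drive or something above it is at fault, with hypothetical device and mountpoint names:

btrfs scrub start /mnt       # re-verify every checksum on both devices
btrfs scrub status /mnt
btrfs device stats /mnt      # per-device read/write/corruption counters
smartctl -a /dev/sda         # check the SSD's own error log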
Duncan <1i5t5.duncan cox.net> writes:
> > - How can I salvage this situation and convert to raid1?
> >
> > Unfortunately I have little spare drives left. Not enough to contain
> > 4.7TiB of data.. :(
>
> [OK, this goes a bit philosophical, but it's something to think about...]
>
> ...
>
> Any…
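For reference, the conversion itself is done in place with a balance; the catch is that raid1 needs enough free space for a second copy of everything, which is exactly what is scarce here (mountpoint hypothetical):

btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
btrfs balance status /mnt    # conversion runs in the background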