Hi, Marc
Raid0 is not redundant in any way. See inline below.
On 2014/05/04 01:27 AM, Marc MERLIN wrote:
So, I was thinking. In the past, I've done this:
mkfs.btrfs -d raid0 -m raid1 -L btrfs_raid0 /dev/mapper/raid0d*
My rationale at the time was that if I lose a drive, I'll still have full
On 2014/05/04 02:47 AM, Marc MERLIN wrote:
Is there any functional difference between
mount -o subvol=usr /dev/sda1 /usr
and
mount /dev/sda1 /mnt/btrfs_pool
mount -o bind /mnt/btrfs_pool/usr /usr
?
Thanks,
Marc
There are two issues with this.
1) There will be a *very* small performance
On 2014/05/04 05:12 AM, Marc MERLIN wrote:
Another question I just came up with.
If I have historical snapshots like so:
backup
backup.sav1
backup.sav2
backup.sav3
If I want to copy them up to another server, can btrfs send/receive
let me copy all of them to another btrfs pool while keeping the
On Sun, May 04, 2014 at 08:57:19AM +0200, Brendan Hide wrote:
Hi, Marc
Raid0 is not redundant in any way. See inline below.
Thanks for clearing things up.
But now I have 2 questions
1) btrfs has two copies of all metadata on even a single drive, correct?
Only when *specifically* using
On 2014/05/04 05:27 AM, Duncan wrote:
Russell Coker posted on Sun, 04 May 2014 12:16:54 +1000 as excerpted:
Are there any plans for a feature like the ZFS copies= option?
I'd like to be able to set copies= separately for data and metadata. In
most cases RAID-1 provides adequate data
On Sun, May 04, 2014 at 09:16:02AM +0200, Brendan Hide wrote:
Sending one-at-a-time, the shared-data relationship will be kept by
using the -p (parent) parameter. Send will only send the differences
and receive will create a new snapshot, adjusting for those
differences, even when the receive
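As a sketch of what that send-one-at-a-time workflow looks like (pool paths, snapshot names, and the destination host are assumptions, not taken from the thread):

```shell
# Sketch: push a chain of read-only snapshots to another btrfs pool
# while preserving their shared-data relationship. All paths and the
# host "destserver" are hypothetical.

# Send the oldest snapshot in full first...
btrfs send /mnt/btrfs_pool/backup.sav3 | \
    ssh destserver 'btrfs receive /mnt/btrfs_pool2'

# ...then send each newer snapshot as a delta against its predecessor
# with -p; receive recreates it on the far side, sharing unchanged data.
btrfs send -p /mnt/btrfs_pool/backup.sav3 /mnt/btrfs_pool/backup.sav2 | \
    ssh destserver 'btrfs receive /mnt/btrfs_pool2'
btrfs send -p /mnt/btrfs_pool/backup.sav2 /mnt/btrfs_pool/backup.sav1 | \
    ssh destserver 'btrfs receive /mnt/btrfs_pool2'
btrfs send -p /mnt/btrfs_pool/backup.sav1 /mnt/btrfs_pool/backup | \
    ssh destserver 'btrfs receive /mnt/btrfs_pool2'
```

Note that send requires the snapshots to be read-only on the source side.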
On 2014/05/04 09:24 AM, Marc MERLIN wrote:
On Sun, May 04, 2014 at 08:57:19AM +0200, Brendan Hide wrote:
Hi, Marc
Raid0 is not redundant in any way. See inline below.
Thanks for clearing things up.
But now I have 2 questions
1) btrfs has two copies of all metadata on even a single drive,
I've just updated
https://btrfs.wiki.kernel.org/index.php/FAQ#Does_Btrfs_work_on_top_of_dm-crypt.3F
to point to
http://marc.merlins.org/perso/btrfs/post_2014-04-27_Btrfs-Multi-Device-Dmcrypt.html
where I give this script:
http://marc.merlins.org/linux/scripts/start-btrfs-dmcrypt
which shows one
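The general shape of bringing up a multi-device btrfs on top of dm-crypt (Marc's script handles the details; device and mapper names below are hypothetical) is:

```shell
# Sketch only: open each LUKS member, then mount the multi-device
# btrfs filesystem. Device names and mapper names are hypothetical.
for dev in /dev/sda2 /dev/sdb2; do
    cryptsetup luksOpen "$dev" "crypt_${dev##*/}"
done

# btrfs discovers the other members by scanning the dm-crypt mappings;
# mounting any one of them then brings up the whole filesystem.
btrfs device scan
mount /dev/mapper/crypt_sda2 /mnt/btrfs_pool
```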
On 2014/05/04 09:28 AM, Marc MERLIN wrote:
On Sun, May 04, 2014 at 09:16:02AM +0200, Brendan Hide wrote:
Sending one-at-a-time, the shared-data relationship will be kept by
using the -p (parent) parameter. Send will only send the differences
and receive will create a new snapshot, adjusting for
This has been asked a few times, so I ended up writing a blog entry on
it
http://marc.merlins.org/perso/btrfs/post_2014-04-26_Btrfs-Tips_-Cancel-A-Btrfs-Scrub-That-Is-Already-Stopped.html
and in the end pasted all of it in the main wiki
On Sun, May 04, 2014 at 11:12:38AM -0700, Duncan wrote:
On Sun, 04 May 2014 09:27:10 +0200
Brendan Hide bren...@swiftspirit.co.za wrote:
On 2014/05/04 05:27 AM, Duncan wrote:
Russell Coker posted on Sun, 04 May 2014 12:16:54 +1000 as
excerpted:
Are there any plans for a feature
On Sun, 04 May 2014 09:27:10 +0200
Brendan Hide bren...@swiftspirit.co.za wrote:
On 2014/05/04 05:27 AM, Duncan wrote:
Russell Coker posted on Sun, 04 May 2014 12:16:54 +1000 as
excerpted:
Are there any plans for a feature like the ZFS copies= option?
I'd like to be able to set
Marc MERLIN posted on Sat, 03 May 2014 16:27:02 -0700 as excerpted:
So, I was thinking. In the past, I've done this:
mkfs.btrfs -d raid0 -m raid1 -L btrfs_raid0 /dev/mapper/raid0d*
My rationale at the time was that if I lose a drive, I'll still have
full metadata for the entire filesystem
Actually, never mind Suse, does someone know whether you can revert to
an older snapshot in place?
The only way I can think of is to mount the snapshot on top of the other
filesystem. This gets around the problem of unmounting a filesystem with
open filehandles, but it also means that you have to
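A minimal sketch of that mount-on-top idea (the device, pool mountpoint, and snapshot name are assumptions):

```shell
# Sketch: shadow the running root with an older snapshot by mounting
# the snapshot subvolume over it. Names are hypothetical.
mount /dev/sda1 /mnt/btrfs_pool         # top-level pool view
mount -o subvol=root.snap /dev/sda1 /   # snapshot mounted on top

# Processes holding filehandles on the old root keep using the old
# subvolume, so a reboot is still needed before the revert fully takes.
```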
On Sun, May 04, 2014 at 04:26:45PM -0700, Marc MERLIN wrote:
Actually, never mind Suse, does someone know whether you can revert to
an older snapshot in place?
Not while the system's running useful services, no.
The only way I can think of is to mount the snapshot on top of the other
On 05/04/2014 12:24 AM, Marc MERLIN wrote:
Gotcha, thanks for confirming, so -m raid1 -d raid0 really only protects
against metadata corruption or a single block loss; otherwise, if you
lose a drive in a two-drive raid0, you'll have lost more than just half
your files.
The scenario you
On Sun, May 04, 2014 at 09:54:38AM +0200, Brendan Hide wrote:
Yes, -p (parent) and -c (clone source) are the only ways I'm aware
of to push subvolumes across while ensuring the data-sharing
relationship remains intact. This will end up being much the same as
doing incremental backups:
From the
More slides, more questions, sorry :)
(thanks for the other answers, I'm still going through them)
If I have:
gandalfthegreat:~# btrfs fi show
Label: 'btrfs_pool1' uuid: 873d526c-e911-4234-af1b-239889cd143d
Total devices 1 FS bytes used 214.44GB
devid 1 size 231.02GB used
On Sun, May 04, 2014 at 09:44:41AM +0200, Brendan Hide wrote:
Ah, I see the man page now. This is because SSDs can remap blocks
internally, so duplicate blocks could end up in the same erase block,
which negates the benefits of doing metadata duplication.
You can force dup but, per the man
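For reference, a sketch of the two ways to get dup metadata on a single drive despite the SSD default (device and mountpoint are hypothetical):

```shell
# Sketch: force duplicated metadata on a single-device filesystem.
# Device and mountpoint names are hypothetical.

# At creation time:
mkfs.btrfs -m dup -d single /dev/sdb1

# Or convert the metadata profile of an existing filesystem in place:
btrfs balance start -mconvert=dup /mnt/btrfs_pool
```

As the man page notes, on an SSD the drive's internal remapping may quietly defeat the point of the duplication.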
On May 4, 2014, at 5:26 PM, Marc MERLIN m...@merlins.org wrote:
Actually, never mind Suse, does someone know whether you can revert to
an older snapshot in place?
They are using snapper. Updates are not atomic; that is, they are applied to the
currently mounted fs, not the snapshot, and after
On 2014/05/05 02:54 AM, Marc MERLIN wrote:
More slides, more questions, sorry :)
(thanks for the other answers, I'm still going through them)
If I have:
gandalfthegreat:~# btrfs fi show
Label: 'btrfs_pool1' uuid: 873d526c-e911-4234-af1b-239889cd143d
Total devices 1 FS bytes used
On 2014/05/05 02:56 AM, Marc MERLIN wrote:
On Sun, May 04, 2014 at 09:07:55AM +0200, Brendan Hide wrote:
On 2014/05/04 02:47 AM, Marc MERLIN wrote:
Is there any functional difference between
mount -o subvol=usr /dev/sda1 /usr
and
mount /dev/sda1 /mnt/btrfs_pool
mount -o bind
On Mon, 05 May 2014 06:13:30 +0200
Brendan Hide bren...@swiftspirit.co.za wrote:
1) There will be a *very* small performance penalty (negligible, really)
Oh, really, it's slower to mount the device directly? Not that I really
care, but that's unexpected.
Um ... the penalty is if you're
On Mon, May 05, 2014 at 06:13:30AM +0200, Brendan Hide wrote:
Oh, really, it's slower to mount the device directly? Not that I really
care, but that's unexpected.
Um ... the penalty is if you're mounting indirectly. ;)
I'd be willing to believe that more than :)
(but indeed, if slowdown