On Sun, May 04, 2014 at 09:23:12PM -0600, Chris Murphy wrote:
>
> On May 4, 2014, at 5:26 PM, Marc MERLIN wrote:
>
> > Actually, never mind Suse, does someone know whether you can revert to
> > an older snapshot in place?
>
> They are using snapper. Updates are not atomic, that is they
> are applied to the currently mounted fs, not the snapshot, and after update the sys
On 05/05/14 06:36, Roman Mamedov wrote:
On Mon, 05 May 2014 06:13:30 +0200
Brendan Hide wrote:
1) There will be a *very* small performance penalty (negligible, really)
Oh, really, it's slower to mount the device directly? Not that I really
care, but that's unexpected.
Um ... the penalty is if you're mounting indirectly. ;)
On Mon, May 05, 2014 at 06:11:28AM +0200, Brendan Hide wrote:
> The "per-device" used amount refers to the amount of space that has
> been allocated to chunks. That first one probably needs a balance.
> Btrfs doesn't behave very well when available disk space is so low
> due to the fact that it cann
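The fix hinted at above (that the first filesystem "probably needs a balance") can be sketched as below; the mountpoint and the usage threshold are illustrative, not from the thread:

```sh
# Reclaim allocated-but-underused chunks so the per-device "used"
# (chunk-allocated) figure drops back toward actual data size.
btrfs filesystem show /mnt/btrfs_pool            # per-device "used" = chunk allocation
btrfs balance start -dusage=50 /mnt/btrfs_pool   # rewrite data chunks that are <=50% full
btrfs filesystem df /mnt/btrfs_pool              # confirm allocation after the balance
```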
On Mon, May 05, 2014 at 06:13:30AM +0200, Brendan Hide wrote:
> >Oh, really, it's slower to mount the device directly? Not that I really
> >care, but that's unexpected.
>
> Um ... the penalty is if you're mounting indirectly. ;)
I'd be willing to believe that more than :)
(but indeed, if slowdown
On Mon, May 05, 2014 at 01:36:39AM +0100, Hugo Mills wrote:
>I'm guessing it involves reflink copies of files from the snapshot
> back to the "original", and then restarting affected services. That's
> about the only other thing that I can think of, but it's got load of
> race conditions in it
On Sun, May 04, 2014 at 05:46:00PM -0700, Daniel Lee wrote:
> This often seems to confuse people and I think there is a common
> misconception that the btrfs raid/single/dup features work at the file
> level when in reality they work at a level closer to lvm/md.
>
> If someone told you that they l
On Mon, 05 May 2014 06:13:30 +0200
Brendan Hide wrote:
> >> 1) There will be a *very* small performance penalty (negligible, really)
> > Oh, really, it's slower to mount the device directly? Not that I really
> > care, but that's unexpected.
>
> Um ... the penalty is if you're mounting indirectly. ;)
On 2014/05/05 02:56 AM, Marc MERLIN wrote:
On Sun, May 04, 2014 at 09:07:55AM +0200, Brendan Hide wrote:
On 2014/05/04 02:47 AM, Marc MERLIN wrote:
Is there any functional difference between
mount -o subvol=usr /dev/sda1 /usr
and
mount /dev/sda1 /mnt/btrfs_pool
mount -o bind /mnt/btrfs_pool/usr /usr
On 2014/05/05 02:54 AM, Marc MERLIN wrote:
More slides, more questions, sorry :)
(thanks for the other answers, I'm still going through them)
If I have:
gandalfthegreat:~# btrfs fi show
Label: 'btrfs_pool1' uuid: 873d526c-e911-4234-af1b-239889cd143d
Total devices 1 FS bytes used 214.44GB
On May 4, 2014, at 5:26 PM, Marc MERLIN wrote:
> Actually, never mind Suse, does someone know whether you can revert to
> an older snapshot in place?
They are using snapper. Updates are not atomic, that is they are applied to the
currently mounted fs, not the snapshot, and after update the sys
On Sun, May 04, 2014 at 09:44:41AM +0200, Brendan Hide wrote:
> >Ah, I see the man page now "This is because SSDs can remap blocks
> >internally so duplicate blocks could end up in the same erase block
> >which negates the benefits of doing metadata duplication."
>
> You can force dup but, per the
More slides, more questions, sorry :)
(thanks for the other answers, I'm still going through them)
If I have:
gandalfthegreat:~# btrfs fi show
Label: 'btrfs_pool1' uuid: 873d526c-e911-4234-af1b-239889cd143d
Total devices 1 FS bytes used 214.44GB
devid 1 size 231.02GB used 231.0
On Sun, May 04, 2014 at 09:07:55AM +0200, Brendan Hide wrote:
> On 2014/05/04 02:47 AM, Marc MERLIN wrote:
> >Is there any functional difference between
> >
> >mount -o subvol=usr /dev/sda1 /usr
> >and
> >mount /dev/sda1 /mnt/btrfs_pool
> >mount -o bind /mnt/btrfs_pool/usr /usr
> >
> >?
> >
> >Thanks,
> >Marc
On Sun, May 04, 2014 at 09:54:38AM +0200, Brendan Hide wrote:
> Yes, -p (parent) and -c (clone source) are the only ways I'm aware
> of to push subvolumes across while ensuring data-sharing
> relationship remains intact. This will end up being much the same as
> doing incremental backups:
> From th
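The incremental-send workflow described above can be sketched like this, assuming the snapshot names from the earlier message (backup, backup.sav1..3) and a hypothetical destination pool mounted at /backup2; snapshots must be read-only for send to accept them:

```sh
# Send snapshots one at a time, naming the previously-sent snapshot as
# parent (-p) so only the differences cross the wire and the shared-data
# relationship is preserved on the receiving pool.
btrfs send /pool/backup.sav3 | btrfs receive /backup2/            # full send of the oldest
btrfs send -p /pool/backup.sav3 /pool/backup.sav2 | btrfs receive /backup2/
btrfs send -p /pool/backup.sav2 /pool/backup.sav1 | btrfs receive /backup2/
btrfs send -p /pool/backup.sav1 /pool/backup      | btrfs receive /backup2/
```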
On 05/04/2014 12:24 AM, Marc MERLIN wrote:
>
> Gotcha, thanks for confirming, so -m raid1 -d raid0 really only protects
> against metadata corruption or a single block loss, but otherwise if you
> lost a drive in a 2 drive raid0, you'll have lost more than just half
> your files.
>
>> The scenari
On Sun, May 04, 2014 at 04:26:45PM -0700, Marc MERLIN wrote:
> Actually, never mind Suse, does someone know whether you can revert to
> an older snapshot in place?
Not while the system's running useful services, no.
> The only way I can think of is to mount the snapshot on top of the other
> filesystem.
Actually, never mind Suse, does someone know whether you can revert to
an older snapshot in place?
The only way I can think of is to mount the snapshot on top of the other
filesystem. This gets around the problem of unmounting a filesystem with
open filehandles, but it also means that you have to kee
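Besides mounting the snapshot on top, one common revert approach (not spelled out in the thread, so treat it as a hedged sketch; the subvolume ID is illustrative) is to make the snapshot the default subvolume and reboot:

```sh
# Point the filesystem's default subvolume at the snapshot, then reboot
# so the next mount comes up on the old state.
btrfs subvolume list /mnt/btrfs_pool           # find the snapshot's subvolid
btrfs subvolume set-default 257 /mnt/btrfs_pool
reboot                                         # next mount uses the snapshot
```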
Marc MERLIN posted on Sat, 03 May 2014 16:27:02 -0700 as excerpted:
> So, I was thinking. In the past, I've done this:
> mkfs.btrfs -d raid0 -m raid1 -L btrfs_raid0 /dev/mapper/raid0d*
>
> My rationale at the time was that if I lose a drive, I'll still have
> full metadata for the entire filesyst
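The layout from the quoted mkfs command can be verified after mounting; the device names below expand the original glob and are placeholders:

```sh
# Data striped (raid0), metadata mirrored (raid1), as in the post.
mkfs.btrfs -d raid0 -m raid1 -L btrfs_raid0 /dev/mapper/raid0d1 /dev/mapper/raid0d2
mount /dev/mapper/raid0d1 /mnt/btrfs_raid0
btrfs filesystem df /mnt/btrfs_raid0   # shows Data: RAID0, Metadata: RAID1
```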
On Sun, 04 May 2014 09:27:10 +0200
Brendan Hide wrote:
> On 2014/05/04 05:27 AM, Duncan wrote:
> > Russell Coker posted on Sun, 04 May 2014 12:16:54 +1000 as
> > excerpted:
> >
> >> Are there any plans for a feature like the ZFS copies= option?
> >>
> >> I'd like to be able to set copies= separately for data and metadata.
On Sun, May 04, 2014 at 11:12:38AM -0700, Duncan wrote:
> On Sun, 04 May 2014 09:27:10 +0200
> Brendan Hide wrote:
>
> > On 2014/05/04 05:27 AM, Duncan wrote:
> > > Russell Coker posted on Sun, 04 May 2014 12:16:54 +1000 as
> > > excerpted:
> > >
> > >> Are there any plans for a feature like the ZFS copies= option?
On 2014/05/04 09:28 AM, Marc MERLIN wrote:
On Sun, May 04, 2014 at 09:16:02AM +0200, Brendan Hide wrote:
Sending one-at-a-time, the shared-data relationship will be kept by
using the -p (parent) parameter. Send will only send the differences
and receive will create a new snapshot, adjusting for those
differences, even when the recei
This has been asked a few times, so I ended up writing a blog entry on
it
http://marc.merlins.org/perso/btrfs/post_2014-04-26_Btrfs-Tips_-Cancel-A-Btrfs-Scrub-That-Is-Already-Stopped.html
and in the end pasted all of it in the main wiki
https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#btrfs_scru
I've just updated
https://btrfs.wiki.kernel.org/index.php/FAQ#Does_Btrfs_work_on_top_of_dm-crypt.3F
to point to
http://marc.merlins.org/perso/btrfs/post_2014-04-27_Btrfs-Multi-Device-Dmcrypt.html
where I give this script:
http://marc.merlins.org/linux/scripts/start-btrfs-dmcrypt
which shows one way
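The linked start-btrfs-dmcrypt script shows one way to do this; a minimal sketch of the same idea follows (device names, mapping names, and the key file are all assumptions, not taken from the script):

```sh
# Unlock each dm-crypt member of the multi-device btrfs, then mount
# once all members are present.
for dev in /dev/sdb1 /dev/sdc1; do
    name=crypt_$(basename "$dev")
    cryptsetup luksOpen "$dev" "$name" --key-file /etc/keys/btrfs.key
done
btrfs device scan                              # let btrfs find all members
mount /dev/mapper/crypt_sdb1 /mnt/btrfs_pool   # any member device works
```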
On 2014/05/04 09:24 AM, Marc MERLIN wrote:
On Sun, May 04, 2014 at 08:57:19AM +0200, Brendan Hide wrote:
Hi, Marc
Raid0 is not redundant in any way. See inline below.
Thanks for clearing things up.
But now I have 2 questions
1) btrfs has two copies of all metadata on even a single drive,
On Sun, May 04, 2014 at 09:16:02AM +0200, Brendan Hide wrote:
> Sending one-at-a-time, the shared-data relationship will be kept by
> using the -p (parent) parameter. Send will only send the differences
> and receive will create a new snapshot, adjusting for those
> differences, even when the recei
On 2014/05/04 05:27 AM, Duncan wrote:
Russell Coker posted on Sun, 04 May 2014 12:16:54 +1000 as excerpted:
Are there any plans for a feature like the ZFS copies= option?
I'd like to be able to set copies= separately for data and metadata. In
most cases RAID-1 provides adequate data protectio
On Sun, May 04, 2014 at 08:57:19AM +0200, Brendan Hide wrote:
> Hi, Marc
>
> Raid0 is not redundant in any way. See inline below.
Thanks for clearing things up.
> >But now I have 2 questions
> >1) btrfs has two copies of all metadata on even a single drive, correct?
>
> Only when *specifically
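As noted above, metadata duplication on a single device only happens when requested (mkfs.btrfs defaults to DUP on rotating disks but may fall back to single on SSDs, per the man page quoted earlier in the thread). Forcing it looks like this; the device and mountpoint are placeholders:

```sh
# Force two metadata copies on a single device.
mkfs.btrfs -m dup /dev/sdX1
mount /dev/sdX1 /mnt
btrfs filesystem df /mnt    # a "Metadata, DUP" line confirms the profile
```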
On 2014/05/04 05:12 AM, Marc MERLIN wrote:
Another question I just came up with.
If I have historical snapshots like so:
backup
backup.sav1
backup.sav2
backup.sav3
If I want to copy them up to another server, can btrfs send/receive
let me copy all of the to another btrfs pool while keeping the
On 2014/05/04 02:47 AM, Marc MERLIN wrote:
Is there any functional difference between
mount -o subvol=usr /dev/sda1 /usr
and
mount /dev/sda1 /mnt/btrfs_pool
mount -o bind /mnt/btrfs_pool/usr /usr
?
Thanks,
Marc
There are two "issues" with this.
1) There will be a *very* small performance pena
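Side by side, the two forms from the question look like this; functionally both expose the same subvolume at /usr, the bind-mount route just goes through the extra top-level mount:

```sh
# Direct subvolume mount:
mount -o subvol=usr /dev/sda1 /usr

# Indirect: mount the top-level pool, then bind the subvolume in.
mount /dev/sda1 /mnt/btrfs_pool
mount -o bind /mnt/btrfs_pool/usr /usr
```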