Hello,
OK, thanks for the explanation.
I would find it more intuitive if, by default, all configurations were
used (i.e. running without the -c option would take a snapshot of every
configuration).
I'll get used to it though :-)
Greetings,
Hendrik
On Sat, Mar 15, 2014 at 03:05:22PM +0100, Hendrik Friedel wrote:
> When I install snapper I configure it like this
> snapper -c rt create-config /
> snapper -c home create-config /home
> snapper -c root create-config /root
> snapper -c Video create-config /mnt/BTRFS/Video/
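(Since snapper has no "all configurations" default - the behaviour
discussed above - one way to cover every configuration is to loop over
the names explicitly. A minimal sketch, using the config names from this
setup:

$ for c in rt home root Video; do snapper -c "$c" create; done

Newer snapper versions can also print the configured names with
"snapper list-configs", so the loop doesn't have to hard-code them.)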
Just a recommendation about the config names. At least on openSUSE
"root" is used for /. I would suggest to use "home_root" for /root like
the pam-snapper module does.
It's possible to change the parent/child relationship between directories
in such a way that if a child directory has a higher inode number than
its parent, it doesn't necessarily mean the child's rename/move operation
can be performed immediately. The parent might have its own rename/move
operation
Regression test for a btrfs incremental send issue where the kernel entered
an infinite loop building a path string. The minimal sequence of steps to
trigger the issue is:
$ umount /mnt
$ mkfs.btrfs -f /dev/sdd
$ mount /dev/sdd /mnt
$ mkdir /mnt/A
$ mkdir /mnt/B
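(The remaining steps are truncated above. A sketch of how this kind of
reproducer typically continues - the snapshot names snap1/snap2 are my
own, not from the test - inverting the ancestor relationship between A
and B between two read-only snapshots and doing an incremental send:

$ mv /mnt/B /mnt/A/B
$ btrfs subvolume snapshot -r /mnt /mnt/snap1
$ mv /mnt/A/B /mnt/B
$ mv /mnt/A /mnt/B/A
$ btrfs subvolume snapshot -r /mnt /mnt/snap2
$ btrfs send -p /mnt/snap1 /mnt/snap2 > /dev/null

With the bug present, the last command would spin forever building the
path for the inverted directories.)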
> "Chris" == Chris Samuel writes:
Chris> SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Chris> Of course that's what the drive is reporting it supports, I'm not
Chris> sure whether that's the result of what has been negotiated
Chris> between the controller and drive or purely what t
> "Chris" == Chris Samuel writes:
Chris> It looks like drives that do support it can be detected with the
Chris> kernel helper function ata_fpdma_dsm_supported() defined in
Chris> include/linux/libata.h.
Chris> I wonder if it would be possible to use that knowledge to extend
Chris> the smart
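(As a userspace sketch of the same check: smartctl can dump arbitrary GP
log pages, and my reading of the kernel headers is that the NCQ Send/Recv
log sits at address 0x13, with the queued DSM TRIM capability bit at byte
4, bit 0. smartctl won't decode it, so the raw dump has to be inspected
by eye:

$ smartctl -l gplog,0x13 /dev/sdX

The log address and bit position here come from the kernel's
ATA_LOG_NCQ_SEND_RECV_* definitions, not from smartctl documentation.)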
Hello,
> Just a recommendation about the config names. At least on
> openSUSE "root" is used for /. I would suggest to use "home_root"
> for /root like the pam-snapper module does.
thanks for the advice.
In fact, on a previous try I had, by chance, used exactly this
nomenclature. Then I restart
On Mar 16, 2014, at 12:06 AM, Marc MERLIN wrote:
>
> Mmmh, so now I'm confused.
>
> See this:
>
> === START OF INFORMATION SECTION ===
> Device Model: INTEL SSDSC2BW180A3L
> Serial Number:    CVCV215200XU180EGN
> LU WWN Device Id: 5 001517 bb28c5317
> Firmware Version: LE1i
> User Capacity
On Sun, Mar 16, 2014 at 05:58:10PM +0100, Hendrik Friedel wrote:
> > Just a recommendation about the config names. At least on
> > openSUSE "root" is used for /. I would suggest to use "home_root"
> > for /root like the pam-snapper module does.
>
> thanks for the advice.
>
> In fact, on a previous try I had, by chance, used exactly this
> nomenclature.
On Sun, Mar 16, 2014 at 12:22:05PM -0400, Martin K. Petersen wrote:
> queued trim, not even a prototype. I went out and bought a 840 EVO this
> morning because the general lazyweb opinion seemed to indicate that this
> drive supports queued trim. Well, it doesn't. At least not in the 120GB
> version.
I just created this array:
polgara:/mnt/btrfs_backupcopy# btrfs fi show
Label: backupcopy  uuid: 7d8e1197-69e4-40d8-8d86-278d275af896
        Total devices 10 FS bytes used 220.32GiB
        devid    1 size 465.76GiB used 25.42GiB path /dev/dm-0
        devid    2 size 465.76GiB used 25.40GiB path
On Mar 16, 2014, at 4:20 PM, Marc MERLIN wrote:
> If I yank sde1 and reboot, the array will not come back up from what I
> understand,
> or is that incorrect?
> Do rebuilds work at all with a missing drive to a spare drive?
The part that isn't working well enough is the faulty status handling. The drive
On Mar 16, 2014, at 4:55 PM, Chris Murphy wrote:
> Then use btrfs replace start.
Looks like in 3.14rc6 replace isn't yet supported. I get "dev_replace cannot
yet handle RAID5/RAID6".
When I do:
btrfs device add
The command hangs, no kernel messages.
Chris Murphy
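(For reference, the syntax that produces that error - the new device name
here is a placeholder, assuming the array is mounted at
/mnt/btrfs_backupcopy:

$ btrfs replace start /dev/mapper/crypt_sde1 /dev/sdf /mnt/btrfs_backupcopy
$ btrfs replace status /mnt/btrfs_backupcopy

On raid5/6 in 3.14rc6 the first command fails with the "dev_replace
cannot yet handle RAID5/RAID6" message quoted above.)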
On Sun, Mar 16, 2014 at 05:12:10PM -0600, Chris Murphy wrote:
>
> On Mar 16, 2014, at 4:55 PM, Chris Murphy wrote:
>
> > Then use btrfs replace start.
>
> Looks like in 3.14rc6 replace isn't yet supported. I get "dev_replace cannot
> yet handle RAID5/RAID6".
>
> When I do:
> btrfs device add
On Mar 16, 2014, at 5:12 PM, Chris Murphy wrote:
>
> On Mar 16, 2014, at 4:55 PM, Chris Murphy wrote:
>
>> Then use btrfs replace start.
>
> Looks like in 3.14rc6 replace isn't yet supported. I get "dev_replace cannot
> yet handle RAID5/RAID6".
>
> When I do:
> btrfs device add
>
> The
On Mar 16, 2014, at 5:17 PM, Marc MERLIN wrote:
> - but no matter how I remove the faulty drive, there is no rebuild on a
> new drive procedure that works yet
>
> Correct?
I'm not sure. From what I've read we should be able to add a device to raid5/6,
but I don't know if it's expected we can
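(The sequence being discussed, as a sketch with placeholder device names:
add a fresh device to the mounted, degraded filesystem, then remove the
missing one so btrfs rebuilds onto the new device:

$ btrfs device add /dev/sdf /mnt/btrfs_backupcopy
$ btrfs device delete missing /mnt/btrfs_backupcopy

Whether the second step actually restripes correctly on raid5/6 is
exactly the open question in this thread.)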
On Sun, Mar 16, 2014 at 4:17 PM, Marc MERLIN wrote:
> On Sun, Mar 16, 2014 at 05:12:10PM -0600, Chris Murphy wrote:
>>
>> On Mar 16, 2014, at 4:55 PM, Chris Murphy wrote:
>>
>> > Then use btrfs replace start.
>>
>> Looks like in 3.14rc6 replace isn't yet supported. I get "dev_replace cannot
>> yet handle RAID5/RAID6".
I'm not sure if this is a bug or expected at this point.
Create and populate 3x 8TB virtual devices
Boot kernel 3.13.6-200.fc20.x86_64
btrfs-progs v3.12
mkfs.btrfs -d raid5 -m raid1 /dev/sd[bcd]
mount /mnt
cp -a /mnt/
umount /mnt
poweroff
In VM, remove the sdd virtual disk and replace it with a blank image.
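(A sketch of how the test presumably continues after the disk swap - the
steps are cut off above, and the device names are assumptions:

$ mount -o degraded /dev/sdb /mnt
$ btrfs fi show /mnt
$ btrfs device add /dev/sdd /mnt
$ btrfs device delete missing /mnt

btrfs fi show should report one missing device before the add.)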
On Sun, Mar 16, 2014 at 05:23:25PM -0600, Chris Murphy wrote:
>
> On Mar 16, 2014, at 5:17 PM, Marc MERLIN wrote:
>
> > - but no matter how I remove the faulty drive, there is no rebuild on a
> > new drive procedure that works yet
> >
> > Correct?
>
> I'm not sure. From what I've read we should be able to add a device to
> raid5/6,
On Mar 16, 2014, at 6:51 PM, Marc MERLIN wrote:
>
>
> polgara:/mnt/btrfs_backupcopy# btrfs device delete /dev/mapper/crypt_sde1
> `pwd`
> ERROR: error removing the device '/dev/mapper/crypt_sde1' - Invalid argument
You didn't specify a mount point; that's the reason for the error. But also, sinc
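(For the record, the expected form puts the mount point last; the
backquoted `pwd` only works if the shell's current directory is the
mount point itself:

$ btrfs device delete /dev/mapper/crypt_sde1 /mnt/btrfs_backupcopy)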
On Sun, Mar 16, 2014 at 07:06:23PM -0600, Chris Murphy wrote:
>
> On Mar 16, 2014, at 6:51 PM, Marc MERLIN wrote:
> >
> >
> > polgara:/mnt/btrfs_backupcopy# btrfs device delete /dev/mapper/crypt_sde1
> > `pwd`
> > ERROR: error removing the device '/dev/mapper/crypt_sde1' - Invalid argument
>
For an incremental send, fix the process of determining whether the directory
inode we're currently processing needs to have its move/rename operation
delayed.
We were ignoring the fact that if the inode's new immediate ancestor has a
higher
inode number than ours but wasn't renamed/moved, we mi
Regression test for a btrfs incremental send issue where the kernel entered
an infinite loop building a path string. This happened when either of the
following two cases occurred:
1) A directory was made a child of another directory which has a lower inode
number and has a pending move/rename operation
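(A sketch of case 1 with hypothetical names, where d1 gets the lowest
inode number and d3 the highest:

$ mkdir -p /mnt/d1/d2
$ mkdir /mnt/d3
$ btrfs subvolume snapshot -r /mnt /mnt/base
$ mv /mnt/d3 /mnt/d1/d3
$ mv /mnt/d1/d2 /mnt/d2
$ mv /mnt/d1 /mnt/d2/d1
$ btrfs subvolume snapshot -r /mnt /mnt/incr
$ btrfs send -p /mnt/base /mnt/incr > /dev/null

When the send processes d1, its new parent d2 has a higher inode number
and hasn't been processed yet, so d1's rename is pending; d3 was made a
child of d1, matching case 1. This is my own illustration of the shape
of the dependency, not the reproducer from the test itself.)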
On Mar 16, 2014, at 7:17 PM, Marc MERLIN wrote:
> On Sun, Mar 16, 2014 at 07:06:23PM -0600, Chris Murphy wrote:
>>
>> On Mar 16, 2014, at 6:51 PM, Marc MERLIN wrote:
>>>
>>>
>>> polgara:/mnt/btrfs_backupcopy# btrfs device delete /dev/mapper/crypt_sde1
>>> `pwd`
>>> ERROR: error removing the device '/dev/mapper/crypt_sde1' - Invalid argument
On Sun, Mar 16, 2014 at 08:56:35PM -0600, Chris Murphy wrote:
> >>> polgara:/mnt/btrfs_backupcopy# btrfs device delete /dev/mapper/crypt_sde1
> >>> `pwd`
> >>> ERROR: error removing the device '/dev/mapper/crypt_sde1' - Invalid
> >>> argument
> >>
> >> You didn't specify a mount point; that's the reason for the error.
On Mar 16, 2014, at 9:44 PM, Marc MERLIN wrote:
> On Sun, Mar 16, 2014 at 08:56:35PM -0600, Chris Murphy wrote:
>
>>> If I add a device, isn't it going to grow my raid to make it bigger instead
>>> of trying to replace the bad device?
>>
>> Yes if it's successful. No if it fails which is the p
On Thu, Mar 06, 2014 at 09:33:24PM +0000, Duncan wrote:
> However, best snapshot management practice does progressive snapshot
> thinning, so you never have more than a few hundred snapshots to manage
> at once. Think of it this way. If you realize you deleted something you
> needed yesterday,
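(Snapper's timeline cleanup implements this kind of progressive thinning;
a sketch of the relevant settings in /etc/snapper/configs/<name>, with
illustrative values rather than anything recommended in this thread:

TIMELINE_CREATE="yes"
TIMELINE_CLEANUP="yes"
TIMELINE_LIMIT_HOURLY="24"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_MONTHLY="12"
TIMELINE_LIMIT_YEARLY="2"

With these, "snapper -c <name> cleanup timeline" keeps 24 hourly, 7
daily, 12 monthly and 2 yearly snapshots and prunes the rest.)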
Hi,
on one of my servers running btrfs, I noticed a very high load of
26/26/26. After investigating further, this happened in my logs about
5 minutes before the monitoring alerted me because of the load:
------------[ cut here ]------------
WARNING: CPU: 2 PID: 3046 at fs/btrfs/ctree.c:13