On Tue, Aug 11, 2015 at 2:32 PM, Timothy Normand Miller
theo...@gmail.com wrote:
If I lose the array, I won't cry. The backup appears to be complete.
But it would be convenient to avoid having to restore from scratch,
and I'm hoping this might help you guys too in some way. I really
like
On Tue, Aug 11, 2015 at 3:00 PM, Timothy Normand Miller
theo...@gmail.com wrote:
On Tue, Aug 11, 2015 at 4:48 PM, Chris Murphy li...@colorremedies.com wrote:
The compress is ignored, and it looks like nodatasum and nodatacow
apply to everything. The nodatasum means no raid1 self-healing is
On Tue, Aug 11, 2015 at 5:24 PM, Chris Murphy li...@colorremedies.com wrote:
There is still data redundancy. Will a scrub at least notice that the
copies differ?
No, that's what I mean by "nodatasum means no raid1 self-healing is
possible." You have data redundancy, but without checksums
On Tue, Aug 11, 2015 at 4:48 PM, Chris Murphy li...@colorremedies.com wrote:
The compress is ignored, and it looks like nodatasum and nodatacow
apply to everything. The nodatasum means no raid1 self-healing is
possible for any data on the entire volume. Metadata checksumming is
still
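The point being quoted can be illustrated concretely. A hedged sketch (device and mount point are placeholders, not from the thread): mounting with nodatacow implies nodatasum, so data blocks carry no checksums and scrub can only verify metadata, even on raid1.

```shell
# Mounting with nodatacow implies nodatasum for the whole
# filesystem; raid1 still stores two copies of the data, but
# scrub has no checksum to tell which copy is the correct one.
mount -o nodatacow /dev/sdc /mnt

# Scrub still runs, but it can only verify and repair metadata,
# which keeps its own checksums regardless of nodatasum.
btrfs scrub start -B /mnt
btrfs scrub status /mnt
```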
On Tue, Aug 11, 2015 at 09:42:10PM +0200, Holger Hoffstätte wrote:
I saw this morning that it went into integration-4.3:
https://git.kernel.org/cgit/linux/kernel/git/mason/linux-btrfs.git/commit/?h=integration-4.3&id=293a8489f300536dc6d996c35a6ebb89aa03bab2
So probably just an oversight.
Ok
I have recently installed an Arch Linux x86_64 system on a 50GB
btrfs partition and every time I try btrfs balance start it gives me
an enospc error even though I have less than 20% of the available
space full.
I have tried the recommended method (from
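The reference above is cut off, but the commonly recommended workaround for balance ENOSPC is to balance in stages with usage filters, reclaiming nearly-empty chunks first. A hedged sketch; the exact usage thresholds are a judgment call, not from the thread:

```shell
# Reclaim nearly-empty data chunks first, then raise the
# threshold gradually instead of balancing everything at once.
btrfs balance start -dusage=5 /
btrfs balance start -dusage=25 /
btrfs balance start -dusage=50 /

# Compare chunk space allocated ("total") against space actually
# used before and after:
btrfs filesystem df /
```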
On 2015-08-11 07:08, Juan Orti Alcaine wrote:
Hello,
I have added a new disk to my filesystem and I'm doing a balance right
now, but I'm a bit worried that the disk usage does not get updated as
it should. I remember from earlier versions that you could see the
disk usage being balanced across
Hello,
I have added a new disk to my filesystem and I'm doing a balance right
now, but I'm a bit worried that the disk usage does not get updated as
it should. I remember from earlier versions that you could see the
disk usage being balanced across all disks.
These are the commands I've run:
#
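The commands themselves are cut off above; for reference, a typical add-and-rebalance sequence looks like the following (device name and mount point are placeholders, not necessarily what was run here):

```shell
btrfs device add /dev/sdb /mnt     # add the new disk
btrfs balance start /mnt           # spread existing chunks across all disks
btrfs filesystem show /mnt         # per-device usage should converge over time
btrfs filesystem usage /mnt        # watch Data/Metadata allocation move
```

Note that balance relocates whole chunks, so per-device usage updates in coarse steps rather than continuously.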
2015-08-11 15:20 GMT+02:00 Austin S Hemmelgarn ahferro...@gmail.com:
How much slack space was allocated by BTRFS before running the balance (i.e.,
how big a difference was there between the allocated and used space), and
did the balance run to completion? If you had a lot of mostly empty chunks
On 08/11/2015 01:07 AM, Marc MERLIN wrote:
On Sun, Aug 02, 2015 at 08:51:30PM -0700, Marc MERLIN wrote:
On Fri, Jul 24, 2015 at 09:24:46AM -0700, Marc MERLIN wrote:
Screenshot:
On Tue, Aug 11, 2015 at 12:21 AM, Chris Murphy li...@colorremedies.com wrote:
On Mon, Aug 10, 2015 at 7:23 PM, Timothy Normand Miller
theo...@gmail.com wrote:
On Mon, Aug 10, 2015 at 6:52 PM, Chris Murphy li...@colorremedies.com
wrote:
- complete dmesg for the failed mount
It really
On Tue, Aug 11, 2015 at 1:56 PM, Timothy Normand Miller
theo...@gmail.com wrote:
On Tue, Aug 11, 2015 at 12:21 AM, Chris Murphy li...@colorremedies.com
wrote:
The entire dmesg is still useful because it should show libata errors
if these aren't fully failed drives. So you should file a bug
Hi,
In an earlier thread Duncan mentioned that btrfs does not scale well in
the number of subvolumes (including snapshots). He recommended
keeping the total number under 1000. I just wanted to understand this
limitation further. Is this something that has been resolved or will
be resolved in the
On Fri, Aug 07, 2015 at 10:11:46AM +0200, Holger Hoffstätte wrote:
Mark's patch titled
[PATCH 3/5] btrfs: fix clone / extent-same deadlocks [1]
from his btrfs: dedupe fixes, features series is missing from the
integration-4.2 tree and 4.2-rc5, where it still applies cleanly (as of 5
On 08/11/15 20:58, Mark Fasheh wrote:
On Fri, Aug 07, 2015 at 10:11:46AM +0200, Holger Hoffstätte wrote:
Mark's patch titled
[PATCH 3/5] btrfs: fix clone / extent-same deadlocks [1]
from his btrfs: dedupe fixes, features series is missing from the
integration-4.2 tree and 4.2-rc5, where
On Tue, Aug 11, 2015 at 3:57 PM, Chris Murphy li...@colorremedies.com wrote:
On Tue, Aug 11, 2015 at 12:04 PM, Timothy Normand Miller
theo...@gmail.com wrote:
https://bugzilla.kernel.org/show_bug.cgi?id=102691
[7.729124] BTRFS: device fsid ecdff84d-b4a2-4286-a1c1-cd7e5396901c
devid 2
On Tue, Aug 11, 2015 at 12:04 PM, Timothy Normand Miller
theo...@gmail.com wrote:
https://bugzilla.kernel.org/show_bug.cgi?id=102691
[7.729124] BTRFS: device fsid ecdff84d-b4a2-4286-a1c1-cd7e5396901c
devid 2 transid 226237 /dev/sdd
[7.746115] BTRFS: device fsid
On Tue, Aug 11, 2015 at 11:56 AM, Timothy Normand Miller
theo...@gmail.com wrote:
On Tue, Aug 11, 2015 at 12:21 AM, Chris Murphy li...@colorremedies.com
wrote:
I don't see nodatacow in your fstab, so I don't know why that's
happening. That means no checksumming for data.
Sorry. I was
If someone can answer Tristan's question, can they also say whether
large volumes of frequently created and destroyed snapshots/subvolumes
will cause issues? Or, if they're deleted quickly after being made,
is it just the number that exists at any given time that matters?
(Building source in chroot
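Whatever the practical limit turns out to be, the current subvolume count is easy to monitor. A hedged sketch (assumes the filesystem is mounted at /):

```shell
# Count subvolumes (snapshots included) on a mounted filesystem.
btrfs subvolume list / | wc -l

# Subvolume deletion is cleaned up asynchronously, so rapid
# create/destroy cycles can leave pending work behind even after
# the subvolume disappears from the list. Newer btrfs-progs can
# wait for that cleanup to finish:
btrfs subvolume sync /
```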
On Tue, Aug 11, 2015 at 2:26 PM, Timothy Normand Miller
theo...@gmail.com wrote:
On Tue, Aug 11, 2015 at 3:47 PM, Chris Murphy li...@colorremedies.com wrote:
Huh. I thought nodatacow applies to an entire volume only, not per
subvolume unless you use chattr +C (in which case it can be per
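For reference, per-file or per-directory nodatacow via chattr works as follows (hedged sketch; paths are placeholders). The +C flag only takes effect on empty files, so it is usually set on a directory so new files inherit it:

```shell
# nodatacow as a mount option is filesystem-wide; chattr +C
# applies No_COW per file or per directory instead.
mkdir /mnt/vm-images
chattr +C /mnt/vm-images          # new files in here inherit No_COW

# Set the flag before any data is written -- it has no effect
# on files that already contain data.
touch /mnt/vm-images/disk.img
lsattr /mnt/vm-images/disk.img    # the 'C' attribute should appear
```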
Timothy Normand Miller posted on Tue, 11 Aug 2015 17:32:12 -0400 as
excerpted:
On Tue, Aug 11, 2015 at 5:24 PM, Chris Murphy li...@colorremedies.com
wrote:
There is still data redundancy. Will a scrub at least notice that the
copies differ?
No, that's what I mean by nodatasum means no
Catalin posted on Tue, 11 Aug 2015 12:18:28 +0300 as excerpted:
I have recently installed an Arch Linux x86_64 system on a 50GB btrfs
partition and every time I try btrfs balance start it gives me an enospc
error even though I have less than 20% of the available space full.
I have tried
Tristan Zajonc posted on Tue, 11 Aug 2015 11:33:45 -0700 as excerpted:
In an earlier thread Duncan mentioned that btrfs does not scale well in
the number of subvolumes (including snapshots). He recommended keeping
the total number under 1000. I just wanted to understand this
limitation
Russell Coker posted on Wed, 12 Aug 2015 13:04:27 +1000 as excerpted:
Linux Software RAID scrub will copy the data from one disk to the other
to make them identical; the theory is that it's best to at least be
consistent if you can't be sure you are right.
Will a BTRFS scrub do this on a
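The contrast being asked about can be sketched side by side (hedged; md0 and /mnt are placeholders). md's repair makes mirrors identical without knowing which copy is right, while a btrfs scrub with checksums enabled verifies each copy and rewrites only the one that fails:

```shell
# md: "repair" synchronizes mirrors, arbitrarily picking one copy
# as the source, because md has no per-block checksums.
echo repair > /sys/block/md0/md/sync_action

# btrfs: scrub validates every copy against its checksum and
# repairs a bad copy from a good one -- but only where checksums
# exist, i.e. not for data written under nodatasum/nodatacow.
btrfs scrub start -B /mnt
btrfs scrub status /mnt
```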