On 14.09.2014 06:44, Hugo Mills wrote:
> I've done this before, by accident (pulled the wrong drive, reinserted
> it). You can fix it by running a scrub on the device (btrfs scrub
> start /dev/ice, I think).

Checksums are done for each 4k block, so the increase in probability
of a false negative is
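The per-block false-negative figure can be sketched numerically. Btrfs's default data checksum at the time was crc32c, a 32-bit CRC computed per 4 KiB block. A back-of-the-envelope model (an illustration only: it treats the CRC as a uniform 32-bit hash, which is pessimistic for short burst errors, since a CRC detects all of those deterministically):

```python
# Probability that randomly corrupted blocks all slip past a 32-bit
# checksum. Model assumption: crc32c behaves like a uniform 32-bit
# hash for large random corruptions.
P_MISS_PER_BLOCK = 2.0 ** -32  # about 1 in 4.3 billion per 4 KiB block

def p_any_undetected(corrupted_blocks: int) -> float:
    """Probability that at least one of `corrupted_blocks` randomly
    corrupted 4 KiB blocks passes its checksum undetected."""
    return 1.0 - (1.0 - P_MISS_PER_BLOCK) ** corrupted_blocks

# Even a million corrupted 4 KiB blocks (~4 GiB of damage) gives only
# roughly a 0.02% chance that any one of them goes unnoticed.
print(p_any_undetected(10**6))
```

Under this model the probability of an undetected error grows roughly linearly with the number of corrupted blocks, which is the "increase in probability" being discussed above.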
On 12.09.2014 12:47, Hugo Mills wrote:
> I've done this before, by accident (pulled the wrong drive, reinserted
> it). You can fix it by running a scrub on the device (btrfs scrub
> start /dev/ice, I think).

I'd like to remind everyone that btrfs has weak checksums. It may be
good for correcting an
On Sun, Sep 14, 2014 at 05:15:08AM +0200, Piotr Pawłow wrote:
> On 12.09.2014 12:47, Hugo Mills wrote:
> > I've done this before, by accident (pulled the wrong drive, reinserted
> > it). You can fix it by running a scrub on the device (btrfs scrub
> > start /dev/ice, I think).
>
> I'd like to remind everyone
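The scrub Hugo describes looks roughly like this ("/dev/ice" above is a placeholder; the device path here is an assumption, substitute the re-added member of the RAID1):

```shell
# Run as root. Scrub reads every block on the device, verifies it
# against its checksum, and rewrites stale or corrupt copies from the
# good mirror.
btrfs scrub start /dev/sdb2

# Check progress and the count of corrected errors afterwards.
btrfs scrub status /dev/sdb2
```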
Hi,
I am testing btrfs in a simple RAID1 environment: default mount options,
with data and metadata mirrored between sda2 and sdb2. I have a few
questions and a potential bug report. I don't normally have console
access to the server, so when the server boots with 1 of 2 disks, the
mount will
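For reference, when one of the two mirrors is absent, a plain mount of a btrfs raid1 refuses; it has to be told explicitly to proceed (a sketch; the device and mount point are assumptions):

```shell
# Run as root with one mirror missing: the degraded option lets btrfs
# mount read-write using only the surviving device.
mount -o degraded /dev/sda2 /mnt
```

On kernels of that era it was advisable to scrub or rebalance after reattaching the missing device, since writes made while degraded could leave chunks without a second copy.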
On Fri, Sep 12, 2014 at 01:57:37AM -0700, shane-ker...@csy.ca wrote:
> Hi,
> I am testing BTRFS in a simple RAID1 environment. Default mount
> options and data and metadata are mirrored between sda2 and sdb2. I
> have a few questions and a potential bug report. I don't normally
> have console access
shane-kernel posted on Fri, 12 Sep 2014 01:57:37 -0700 as excerpted:

[Last question first as it's easy to answer...]

> Finally for those using this sort of setup in production, is running
> btrfs on top of mdraid the way to go at this point?

While the latest kernel and btrfs-tools have removed
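The mdraid alternative asked about above would be set up roughly like this (a sketch; device and array names are assumptions):

```shell
# Run as root. Build a conventional md RAID1 from the two partitions,
# then create a single-device btrfs on top of it; md handles the
# mirroring and degraded boot instead of btrfs.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkfs.btrfs /dev/md0
mount /dev/md0 /mnt
```

The trade-off: btrfs checksums still detect corruption in this layout, but btrfs cannot self-heal from the other mirror, because md presents it with only a single logical device.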