On 03/01/2015 14:11, Duncan wrote:
Bob Marley posted on Sat, 03 Jan 2015 12:34:41 +0100 as excerpted:
On 29/12/2014 19:56, sys.syphus wrote:
specifically (P)arity. very specifically n+2. when will raid5 & raid6
be at least as safe to run as raid1 currently is? I don't like the idea
of being 2 bad drives away from total catastrophe.
On 29/12/2014 19:56, sys.syphus wrote:
specifically (P)arity. very specifically n+2. when will raid5 & raid6
be at least as safe to run as raid1 currently is? I don't like the
idea of being 2 bad drives away from total catastrophe.
(and yes I backup, it just wouldn't be fun to go down that route
On 22/10/2014 14:40, Piotr Pawłow wrote:
On 22.10.2014 03:43, Chris Murphy wrote:
On Oct 21, 2014, at 4:14 PM, Piotr Pawłow wrote:
Looks normal to me. Last time I started a balance after adding a 6th
device to my FS, it took 4 days to move 25 GB of data.
It's long term untenable. At some point i
On 10/10/2014 16:37, Chris Murphy wrote:
The fail-safe behavior is to treat the known-good tree root as the default tree
root, and bypass the bad tree root if it cannot be repaired, so that the volume
can be mounted with default mount options (i.e. the ones in fstab). Otherwise
it's a filesyst
On 10/10/2014 12:59, Roman Mamedov wrote:
On Fri, 10 Oct 2014 12:53:38 +0200
Bob Marley wrote:
On 10/10/2014 03:58, Chris Murphy wrote:
* mount -o recovery
"Enable autorecovery attempts if a bad tree root is found at mount
time."
I'm confused why it's not the default yet.
On 10/10/2014 03:58, Chris Murphy wrote:
* mount -o recovery
"Enable autorecovery attempts if a bad tree root is found at mount
time."
I'm confused why it's not the default yet. Maybe it's continuing to evolve at a
pace that suggests something could sneak in that makes things worse?
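For reference, the option is simply passed at mount time; a minimal sketch, assuming the filesystem lives on /dev/sdb1 and is mounted at /mnt (both placeholders; much newer kernels renamed the option to usebackuproot):

    # one-off recovery attempt from the command line
    mount -o recovery /dev/sdb1 /mnt

    # or persistently via /etc/fstab
    /dev/sdb1  /mnt  btrfs  defaults,recovery  0  0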
On 04/10/2014 12:36, Bob Marley wrote:
On 04/10/2014 12:26, Bob Marley wrote:
Hello,
apparently I have found an issue with btrfs
Sorry I forgot to mention the kernel version: 3.14.19
not tested with higher versions
I just noticed that the page I have linked, which also reports the problem
On 04/10/2014 12:26, Bob Marley wrote:
Hello,
apparently I have found an issue with btrfs
Sorry I forgot to mention the kernel version: 3.14.19
not tested with higher versions
Hello,
apparently I have found an issue with btrfs: performance drops with
nodatasum and the multi-device "raid0" or "single" profiles.
I was testing with a series of 8 LIO ramdisks, with btrfs on those in
multi-device single mode, and writing zeroes on the filesystem with 16
dd in parallel.
Performance
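The test setup described above can be reproduced with something along these lines; only a rough sketch, with made-up device names (/dev/ram_lio0../dev/ram_lio7 standing in for the eight LIO ramdisks) and an arbitrary write size:

    # multi-device filesystem with the "single" data profile
    mkfs.btrfs -d single /dev/ram_lio0 /dev/ram_lio1 /dev/ram_lio2 /dev/ram_lio3 \
               /dev/ram_lio4 /dev/ram_lio5 /dev/ram_lio6 /dev/ram_lio7
    mount -o nodatasum /dev/ram_lio0 /mnt

    # 16 dd streams writing zeroes in parallel
    for i in $(seq 0 15); do
        dd if=/dev/zero of=/mnt/zero.$i bs=1M count=1024 &
    done
    wait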
On 20/07/2014 21:36, Roman Mamedov wrote:
On Sun, 20 Jul 2014 21:15:31 +0200
Bob Marley wrote:
Hi TM, are you doing other significant filesystem activity during this
rebuild, especially random accesses?
This can reduce performance a lot on HDDs.
E.g. if you were doing strenuous multithreaded
On 20/07/2014 10:45, TM wrote:
Hi,
I have a raid10 with 4x 3TB disks on a microserver
(http://n40l.wikia.com/wiki/Base_Hardware_N54L), 8 GB RAM.
Recently one disk started to fail (SMART errors), so I replaced it:
mounted as degraded, added the new disk, removed the old one.
Started yesterday.
I am monitoring /va
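The procedure described above, spelled out as commands; only a sketch, with placeholder device names (/dev/sdb for a surviving member, /dev/sde for the replacement):

    # mount the array without the failed disk
    mount -o degraded /dev/sdb /mnt
    # add the replacement, then drop the absent device;
    # "delete missing" rebalances the data onto the new disk
    btrfs device add /dev/sde /mnt
    btrfs device delete missing /mnt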
Hi, I hadn't noticed this post,
I think I know the reason this time: you have used USB, you bad guy!
I think USB does not support flush/barrier, which is mandatory for
BTRFS to work correctly in case of power loss.
For most filesystems actually, but the damage suffered by COW
filesystems suc
On 20/01/2014 15:57, Ian Hinder wrote:
i.e. that there is parity information stored with every piece of data,
and ZFS will "correct" errors automatically from the parity information.
So this is not just parity data to check correctness but there are many
more additional bits to actually corre
On 22/10/2013 10:37, Stefan Behrens wrote:
I don't believe that this issue can ever happen. I don't believe that
somewhere on the path to the flash memory, to the magnetic disc or to
the drive's cache memory, someone interrupts a 4KB write in the middle
of operation to read from this 4KB area. Th
On 19/10/2013 16:03, Stefan Behrens wrote:
On 10/19/2013 12:32, Shilong Wang wrote:
> Yeah, it did not hurt, but it may output a checksum mismatch. For
example:
> Writing a 4k superblock is not totally finished, but we are trying to
scrub it.
Have you ever seen this issue?
...
If this is re
On 23/05/2013 15:22, Bernd Schubert wrote:
Yeah, I know and I'm using iostat already. md raid6 does not do RMW,
but it does not fill the device queue; AFAIK it flushes the underlying
devices quickly as it does not have barrier support - that is another
topic, but it was the reason why I started to
On 12/09/12 12:38, Hugo Mills wrote:
On Sun, Dec 09, 2012 at 12:20:46PM +0100, Swâmi Petaramesh wrote:
On 09/12/2012 11:41, Roman Mamedov wrote:
A CoW filesystem incurs fragmentation by its nature, not specifically because of snapshots.
Even without snapshots, rewriting portions of existing files will wr
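A quick way to observe the effect described above, assuming the filefrag tool from e2fsprogs is available (the file name is just an example):

    # rewrite a single 4K block in the middle of an existing file
    dd if=/dev/urandom of=testfile bs=4k count=1 seek=1000 conv=notrunc
    # list its extents; on a CoW filesystem the rewritten block
    # ends up in a new extent elsewhere, so the extent count grows
    filefrag -v testfile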
On 11/10/12 22:23, Hugo Mills wrote:
The closest thing is btrfsck. That's about as picky as we've got to
date.
What exactly is your use-case for this requirement?
We need a decently available system. We can roll back the filesystem to
the last known-good state if the "test" detects an inconsistency
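For what it's worth, a sketch of what that rollback could look like with snapshots, assuming the data lives in a subvolume at /mnt/data (all names are illustrative):

    # after a successful check, keep a read-only "known good" snapshot
    btrfs subvolume snapshot -r /mnt/data /mnt/data_good
    # if a later check fails, discard the damaged subvolume and restore
    btrfs subvolume delete /mnt/data
    btrfs subvolume snapshot /mnt/data_good /mnt/data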
Hello all
I would like to know if there exists a tool to check the btrfs
filesystem very thoroughly.
It's OK if it needs the FS unmounted to operate; mounted is also OK.
It does not need repair capability.
It needs very good checking capability: it has to return a Good / Bad
status, with the "Bad"
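As noted in the reply quoted above, btrfsck is the closest thing; together with scrub it looks roughly like this (device and mount point are placeholders, and note that scrub only verifies checksums, not full logical consistency):

    # offline check; read-only by default, so it only reports problems
    btrfsck /dev/sdb1
    # online scrub of a mounted filesystem; -B waits and prints a summary
    btrfs scrub start -B /mnt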
Hello all btrfs developers
I would really appreciate a system call (or ioctl or the like) to allow
deduplication of a block of a file against a block of another file.
(It's OK if the blocks need to be aligned to filesystem blocks.)
So that if I know that bytes 32768...65536 of FileA are identical to
by
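An interface with these semantics did eventually appear in btrfs (the BTRFS_IOC_FILE_EXTENT_SAME ioctl, later generalized as FIDEDUPERANGE); a hedged sketch of driving it from the shell with xfs_io on a recent kernel, using a made-up destination offset since the message is truncated here:

    # try to dedupe 32768 bytes at offset 32768 of FileA against FileB at offset 0;
    # the kernel only links the blocks if the two ranges are byte-for-byte identical
    xfs_io -c "dedupe FileA 32768 0 32768" FileB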