Hi,
I'm evaluating btrfs for a future deployment, and managed to (repeatedly)
get btrfs into a state where the filesystem can't mount, can't be fsck'd,
and can't recover.
The test setup is pretty small, 6 devices of various sizes:

  butter-1.5GA  vg_dolt  -wi-a  1.50g
  butter-1.5GB  vg_dolt
Making this with all 6 devices from the beginning, btrfsck doesn't
segfault. But it also doesn't repair the filesystem enough to make it
mountable. (Neither does -o recover; however, -o degraded works, and
files are then accessible.)
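For reference, a degraded mount of the kind described above looks roughly like this (the device path and mount point are hypothetical, taken from the LV names in this test setup; substitute your own):

```sh
# Hypothetical paths -- adjust to your setup.
mkdir -p /mnt/butter

# Mount the array with a missing/failed member; read-only is the
# cautious choice while recovering files:
mount -t btrfs -o degraded,ro /dev/vg_dolt/butter-1.5GA /mnt/butter
```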
Not sure I entirely follow: mounting with -o degraded (not
On Sat, 29 Jun 2013, Martin m_bt...@ml1.co.uk wrote:
Mmmm... I'm not sure trying to balance historical read/write counts is
the way to go... What happens for the use case of an SSD paired up with
a HDD? (For example an SSD and a similarly sized Raptor or enterprise
SCSI?...) Or even just JBODs
On Fri, Jun 28, 2013 at 12:43:14PM -0700, Zach Brown wrote:
On Fri, Jun 28, 2013 at 12:37:45PM +0800, Liu Bo wrote:
Several users reported this crash as a NULL pointer dereference or general
protection fault. The story is that we added an rbtree to speed up ulist
iteration, and we use krealloc() to address ulist
On Fri, Jun 28, 2013 at 01:08:21PM -0400, Josef Bacik wrote:
On Fri, Jun 28, 2013 at 10:25:39AM +0800, Liu Bo wrote:
Several users reported this crash as a NULL pointer dereference or general
protection fault. The story is that we added an rbtree to speed up ulist
iteration, and we use krealloc() to address ulist
On Fri, Jun 28, 2013 at 01:12:58PM -0400, Josef Bacik wrote:
I missed fixing the backref stuff when I introduced the skinny metadata.
If you try and do things like snapshot-aware defrag with skinny metadata
you are going to see tons of warnings related to the backref count being
less than
This is the btrfsck output for a real-world rsync backup onto a btrfs
raid1 mirror across 4 drives (yes, I know at the moment for btrfs raid1
there's only ever two copies of the data...)
checking extents
checking fs roots
root 5 inode 18446744073709551604 errors 2000
root 5 inode
On 29/06/13 10:41, Russell Coker wrote:
On Sat, 29 Jun 2013, Martin wrote:
Mmmm... I'm not sure trying to balance historical read/write counts is
the way to go... What happens for the use case of an SSD paired up with
a HDD? (For example an SSD and a similarly sized Raptor or enterprise
Martin posted on Sat, 29 Jun 2013 14:48:40 +0100 as excerpted:
This is the btrfsck output for a real-world rsync backup onto a btrfs
raid1 mirror across 4 drives (yes, I know at the moment for btrfs raid1
there's only ever two copies of the data...)
Being just a btrfs user I don't have a
We need to hold the tree mod log lock in __tree_mod_log_rewind since we
walk forward in the tree mod entries; otherwise we'll end up with random
entries and trip the BUG_ON() at the front of __tree_mod_log_rewind. This
fixes the panics people were seeing when running

  find /whatever -type f -exec