Got this with 3.14.4 - is it expected?
There was a balance running on this server just a few hours ago (added a
device to the fs and converted to RAID-1), which finished successfully.
Full trace at http://www.virtall.com/files/temp/btrfs.txt.
May 31 04:10:13 backup01 kernel: [1487611.493240] BTRFS i
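For reference, the add-then-convert sequence described above would look roughly like this (the device node and mount point are placeholders, not taken from the report):

  # Add the new device to the mounted filesystem
  btrfs device add /dev/sdb /mnt
  # Rebalance, converting data and metadata to RAID-1
  btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt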
Shaun Reich posted on Sat, 31 May 2014 23:51:26 -0400 as excerpted:
> at some point, my /home randomly(?) went read-only, as I noticed writes
> were not working. I checked dmesg, which had some backtraces that I
> ignored. So I simply rebooted, only to find out it wouldn't come back.
>
> so now my /h
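A minimal first-response sketch for a btrfs filesystem that flipped read-only and then refused to mount (the device node and mount point are assumptions, not from the post):

  # Look for the btrfs errors that forced the filesystem read-only
  dmesg | grep -i btrfs
  # Attempt a read-only mount with the recovery option available in
  # kernels of this era
  mount -o ro,recovery /dev/sdXn /home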
Regression test for the btrfs ioctl clone operation when the source range
contains hole(s) and the FS has the NO_HOLES feature enabled (file holes
don't need file extent items in the btree to represent them).
This issue is fixed by the following Linux kernel btrfs patch:
Btrfs: fix clone to d
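A rough sketch of the setup such a test exercises, with placeholder device and paths; note that cp --reflink uses the whole-file clone ioctl rather than a range clone, so this is only an approximation of what the regression test does:

  # Create a filesystem with the NO_HOLES feature and mount it
  mkfs.btrfs -O no-holes /dev/sdX
  mount /dev/sdX /mnt
  # Write a file, then punch a hole into the middle of it
  xfs_io -f -c "pwrite 0 128k" -c "fpunch 32k 64k" /mnt/src
  # Clone it; the cloned range includes the hole
  cp --reflink=always /mnt/src /mnt/dst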
> How do you know it's a bad superblock? While I'm not a dev, just a list
> regular, show-super looks reasonable from here, and find-root does find
> what appears to be a good root. From here the problem seems to be a bad
> ctree (of several), not a bad superblock.
Yes, I think you're right. I thi
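The two checks mentioned above can be reproduced with the standalone tools shipped in btrfs-progs of this era (the device node is a placeholder):

  # Dump the superblock fields, as show-super does
  btrfs-show-super /dev/sdX
  # Scan for plausible tree root locations
  btrfs-find-root /dev/sdX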
I have a question that has arisen from reading one of Duncan's posts:
On 06/01/2014 01:56 AM, Duncan wrote:
> Here's the deal. Due to scaling issues, the original snapshot-aware
> defrag code was recently disabled, so defrag now doesn't worry about
> snapshots, only defragging whatever is curre
On Wed, May 28, 2014 at 04:56:56PM +0200, Torbjørn wrote:
> On 05/28/2014 03:41 PM, Chris Mason wrote:
> >On 05/28/2014 01:53 AM, Torbjørn wrote:
> >
> >>It's actually a raid10 array of 11 dm-crypt devices.
> >>I'm able to read data from the array (accessing files), and also read
> >>directly from
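When sanity-checking a multi-device array like this one, a couple of standard commands give a quick overview (the mount point is a placeholder):

  # List the devices btrfs believes belong to the filesystem
  btrfs filesystem show
  # Per-device error counters (read/write/flush IO, CRC, generation)
  btrfs device stats /mnt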
Peter Chant posted on Sun, 01 Jun 2014 21:39:18 +0100 as excerpted:
> I have a question that has arisen from reading one of Duncan's posts:
>
> On 06/01/2014 01:56 AM, Duncan wrote:
>
>> Here's the deal. Due to scaling issues, the original snapshot-aware
>> defrag code was recently disabled, so
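In practical terms, with snapshot-aware defrag disabled, a recursive defrag like the following will unshare extents that snapshots still reference, costing extra space (the path is a placeholder):

  # Recursively defragment; extents shared with snapshots get
  # duplicated rather than kept shared
  btrfs filesystem defragment -r /path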
On Sat, May 31, 2014 at 6:51 PM, Brendan Hide wrote:
> On 2014/05/31 12:00 AM, Martin wrote:
>>
>> OK... I'll jump in...
>>
>> On 30/05/14 21:43, Josef Bacik wrote:
>>>
>>> [snip]
>>>
>>> Option 1: Only relink inodes that haven't changed since the snapshot was
>>> taken.
>>>
>>> Pros:
>>> -Faster
Hi,
I'm getting "blocked for more than 120 seconds" messages too.
In my case, it is a simple RAID1 volume that is rebuilding for the
first time (a new volume), opened as a LUKS crypt volume and being
filled with zeros from /dev/zero before actual deployment.
I am running a stock Wheezy kernel:
~ # uname
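The "blocked for more than 120 seconds" warning comes from the kernel's hung-task watchdog, whose timeout is tunable; as the kernel message itself suggests, it can be silenced, though that does not fix the underlying stall:

  # Current hung-task timeout in seconds
  cat /proc/sys/kernel/hung_task_timeout_secs
  # Disable the warning; the slow I/O itself is unaffected
  echo 0 > /proc/sys/kernel/hung_task_timeout_secs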