Brad Templeton posted on Sat, 26 May 2018 19:21:57 -0700 as excerpted:
> Certainly. My apologies for not including them before.
Aieee! Replying above the quote takes the reply out of context, and makes my
attempt to reply in context... difficult and troublesome.
Please use standard list context-quoting,
On 23.05.2018 09:32, Nikolay Borisov wrote:
>
>
> On 22.05.2018 23:05, ein wrote:
>> Hello devs,
>>
>> I tested BTRFS in production for about a month:
>>
>> 21:08:17 up 34 days, 2:21, 3 users, load average: 0.06, 0.02, 0.00
>>
>> Without power blackout, hardware failure, SSD's SMART is flawless
Certainly. My apologies for not including them before. As
described, the disks are reasonably balanced -- not as full as the
last time. As such, it might be enough that balance would (slowly)
free up enough chunks to get things going. And if I have to, I will
partially convert to single
On 2018-05-27 10:06, Brad Templeton wrote:
> Thanks. These are all things which take substantial fractions of a
> day to try, unfortunately.
Normally I would suggest just using a VM and several small disks (~10G),
along with fallocate (the fastest way to use space) to get a basic view
of the
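Qu's suggested reproduction setup can be sketched without a VM by using loop devices as the small disks; the file names, sizes, and mount point below are illustrative assumptions, not from the thread (requires root):

```shell
# Create three sparse ~10G backing files and attach them as loop devices
# (these stand in for the small VM disks Qu suggests).
truncate -s 10G disk1.img disk2.img disk3.img
DEV1=$(sudo losetup --find --show disk1.img)
DEV2=$(sudo losetup --find --show disk2.img)
DEV3=$(sudo losetup --find --show disk3.img)

# Make a 3-device raid1 filesystem and mount it.
sudo mkfs.btrfs -f -d raid1 -m raid1 "$DEV1" "$DEV2" "$DEV3"
sudo mkdir -p /mnt/test
sudo mount "$DEV1" /mnt/test

# fallocate is the fastest way to consume space without writing data.
sudo fallocate -l 4G /mnt/test/filler

# Inspect how chunks were distributed across the devices.
sudo btrfs filesystem usage /mnt/test
```

This keeps the experiment cheap: the backing files stay sparse until chunks are actually allocated, and the whole setup can be torn down with `umount` plus `losetup -d`.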
Thanks. These are all things which take substantial fractions of a
day to try, unfortunately. Last time I ended up fixing it in a
fairly kluged way, which was to convert from raid-1 to single long
enough to get enough single blocks that, when I converted back to
raid-1, they got distributed to
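The kluge Brad describes corresponds to a pair of balance-convert passes; a hedged sketch, with the mount point assumed (the thread does not give one):

```shell
# Step 1: convert data chunks from raid1 to single. This drops the
# second copy, freeing chunk space on the two full devices.
sudo btrfs balance start -dconvert=single /mnt/pool

# Step 2: convert back to raid1. New chunk pairs are allocated on the
# two devices with the most unallocated space, so copies now spread
# onto the new, larger drive.
sudo btrfs balance start -dconvert=raid1 /mnt/pool

# Verify the per-device allocation afterwards.
sudo btrfs filesystem usage /mnt/pool
```

A *partial* conversion, as mentioned above, can be approximated with balance filters, e.g. `-dconvert=single,limit=10` to convert only a limited number of chunks, or the `soft` filter on the way back to skip chunks already in the target profile. Note the window of reduced redundancy while data sits in the single profile.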
On 2018-05-27 09:49, Brad Templeton wrote:
> That is what did not work last time.
>
> I say I think there can be a "fix" because I hope the goal of BTRFS
> raid is to be superior to traditional RAID. That if one replaces a
> drive, and asks to balance, it figures out what needs to be done to
That is what did not work last time.
I say I think there can be a "fix" because I hope the goal of BTRFS
raid is to be superior to traditional RAID: that if one replaces a
drive and asks to balance, it figures out what needs to be done to
make that work. I understand that the current balance
On 2018-05-27 09:27, Brad Templeton wrote:
> A few years ago, I encountered an issue (halfway between a bug and a
> problem) with attempting to grow a BTRFS 3 disk Raid 1 which was
> fairly full. The problem was that after replacing (by add/delete) a
> small drive with a larger one, there
A few years ago, I encountered an issue (halfway between a bug and a
problem) with attempting to grow a BTRFS 3 disk Raid 1 which was
fairly full. The problem was that after replacing (by add/delete) a
small drive with a larger one, there were now 2 full drives and one
new half-full one, and
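For comparison with the add/delete sequence described above, btrfs also offers an in-place replace that copies chunks directly onto the new drive; a sketch, with device names and the devid assumed for illustration:

```shell
# Replace the small drive (/dev/sdc, assumed) with the larger one
# (/dev/sdd, assumed) in a single pass, copying chunks in place.
sudo btrfs replace start /dev/sdc /dev/sdd /mnt/pool
sudo btrfs replace status /mnt/pool

# After replace, the filesystem still only uses the old device's size
# on the new drive until it is resized; devid 3 is assumed here.
sudo btrfs filesystem resize 3:max /mnt/pool
```

Even after this, the existing raid1 chunk pairs remain where they were written, which is why the two old drives can end up full while the new one is half empty; a balance is still needed to redistribute old chunks.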
On 2018-05-22 20:14, David Sterba wrote:
> On Tue, May 22, 2018 at 04:43:47PM +0800, Qu Wenruo wrote:
>> Introduce a small helper, btrfs_mark_bg_unused(), to acquire needed
>> locks and add a block group to the unused_bgs list.
>
> The helper is nice but hides that there's a reference taken on
On 2018-05-26 22:06, Steve Leung wrote:
> On 05/20/2018 07:07 PM, Qu Wenruo wrote:
>>
>>
>> On 2018-05-21 04:43, Steve Leung wrote:
>>> On 05/19/2018 07:02 PM, Qu Wenruo wrote:
On 2018-05-20 07:40, Steve Leung wrote:
> On 05/17/2018 11:49 PM, Qu Wenruo wrote:
>> On
tree: https://git.kernel.org/pub/scm/linux/kernel/git/josef/btrfs-next.git blk-iolatency
head: b62bb0da2afe1437b9d2d687ea1f509466fd3843
commit: 2caae39bf0a83094c506f98be6e339355544d103 [7/13] memcontrol: schedule throttling if we are congested
config: x86_64-randconfig-s4-05270049 (attached
tree: https://git.kernel.org/pub/scm/linux/kernel/git/josef/btrfs-next.git blk-iolatency
head: b62bb0da2afe1437b9d2d687ea1f509466fd3843
commit: 2caae39bf0a83094c506f98be6e339355544d103 [7/13] memcontrol: schedule throttling if we are congested
config: x86_64-randconfig-s3-05270017 (attached
On 05/20/2018 07:07 PM, Qu Wenruo wrote:
On 2018-05-21 04:43, Steve Leung wrote:
On 05/19/2018 07:02 PM, Qu Wenruo wrote:
On 2018-05-20 07:40, Steve Leung wrote:
On 05/17/2018 11:49 PM, Qu Wenruo wrote:
On 2018-05-18 13:23, Steve Leung wrote:
Hi list,
I've got 3-device raid1 btrfs
On 23.05.2018 18:58, Josef Bacik wrote:
> From: Josef Bacik
>
> Since we are waiting on all ordered extents at the start of the fsync()
> path we don't need to wait on any logged ordered extents, and we don't
> need to look up the checksums on the ordered extents as they will
>