On 03/20/2016 12:24 PM, Martin Steigerwald wrote:
>> btrfs kworker thread uses up 100% of a Sandybridge core for minutes on
>> > random write into big file
>> > https://bugzilla.kernel.org/show_bug.cgi?id=90401
> I think I saw this up to kernel 4.3. I think I didn't see this with 4.4
> anymore a
Hey Qu, all
On 07/15/2016 05:56 AM, Qu Wenruo wrote:
>
> The good news is, we have a patch to slightly speed up the mount, by
> avoiding reading out unrelated tree blocks.
>
> In our test environment, it takes 15% less time to mount a fs filled
> with 16K files (2T used space).
>
> https://patchwor
Hey liubo,
thanks for the quick response.
On 02/18/2016 05:59 PM, Liu Bo wrote:
>> Apparently also with 4.4 there is some sort of blocking happening ...
>> > just at 38580:
> OK, what does 'sysrq-w' say?
The problem has not appeared again for some time. Do I need to catch it
right when it happen
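In case it shows up again, a rough sketch of capturing the blocked-task
traces while the hang is in progress (this assumes root; the output file
name is only an example):

  # make sure the sysrq interface is enabled
  echo 1 > /proc/sys/kernel/sysrq
  # dump stack traces of all blocked (uninterruptible) tasks into the kernel log
  echo w > /proc/sysrq-trigger
  # save the traces from the kernel ring buffer
  dmesg > sysrq-w-dump.txt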
On 02/14/2016 11:42 PM, Roman Mamedov wrote:
> FWIW I had a persistently repeating deadlock on 4.1 and 4.3, but
> after upgrade to 4.4 it no longer happens.
Apparently also with 4.4 there is some sort of blocking happening ...
just at 38580:
cut
[Wed Feb 17 16:43:48 2016] INFO: task
Hey btrfs-folks,
I did a bit of digging using "perf":
1)
* "perf stat -B -p 3933 sleep 60"
* "perf stat -e 'btrfs:*' -a sleep 60"
-> http://fpaste.org/320718/10016145/
2)
* "perf record -e block:block_rq_issue -ag" for about 30 seconds:
-> http://fpaste.org/320719/51101751/raw/
3)
* pe
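For completeness, a minimal sketch of profiling the busy kworker's CPU time
itself (the <pid> placeholder and the 30-second window are assumptions, not
values taken from the runs above):

  # sample call graphs of the spinning kworker thread for 30 seconds
  perf record -g -p <pid> -- sleep 30
  # print the hottest call chains
  perf report --stdio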
On 02/01/2016 09:52 PM, Chris Murphy wrote:
>> Would some sort of stracing or profiling of the process help to narrow
>> > down where the time is currently spent and why the balancing is only
>> > running single-threaded?
> This can't be straced. Someone a lot more knowledgeable than I am
> might
Hey Chris,
sorry for the late reply.
On 01/27/2016 10:53 PM, Chris Murphy wrote:
> I can't exactly reproduce this. I'm using +C qcow2 on Btrfs on one SSD
> to back the drives in the VM.
>
> 2x btrfs raid1 with files totalling 5G consistently takes ~1 minute
> [1] to balance (no filters)
>
>
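As an aside, a rough sketch of what such a +C (NOCOW) qcow2 setup can look
like; the directory, image name and size here are made up purely for
illustration:

  mkdir -p /var/lib/libvirt/images/nocow
  # files created in this directory afterwards inherit the NOCOW attribute
  chattr +C /var/lib/libvirt/images/nocow
  qemu-img create -f qcow2 /var/lib/libvirt/images/nocow/vm-disk.qcow2 20G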
Hey Chris,
On 01/28/2016 12:47 AM, Chris Murphy wrote:
> Might be a bug, but more likely might be a lack of optimization. If it
> eventually mounts without errors that's a pretty good plus. Lots of
> file systems can't handle power failures well at all.
So what and how should I go about profiling
Sorry for the late reply to this list regarding this topic
...
On 09/04/2015 01:04 PM, Duncan wrote:
> And of course, only with 4.1 (nominally 3.19 but there were initial
> problems) was raid6 mode fully code-complete and functional -- before
> that, runtime worked, it calculated and wrote the p
Hello Duncan,
thanks a million for taking the time and effort to explain all that.
I understand that all the devices must have been chunk-allocated for
btrfs to tell me all available "space" was used (read "allocated to data
chunks").
The filesystem is quite old already with kernels starting at 3.1
Hey Hugo,
thanks for the quick response.
On 09/02/2015 01:30 PM, Hugo Mills wrote:
> You had some data on the first 8 drives with 6 data+2 parity, then
> added four more. From that point on, you were adding block groups
> with 10 data+2 parity. At some point, the first 8 drives became
> full, an
Hello btrfs-enthusiasts,
I have a rather big btrfs RAID6 with currently 12 devices. It used to be
only 8 drives of 4 TB each, but at some point I successfully added 4 more
drives of 1 TB each. What I am trying to find out, and that's my main
reason for posting this, is how to balance the data on the
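As far as I understand, a full, unfiltered balance would be the blunt
instrument here, since it rewrites every block group and restripes it across
whatever devices are present; as a sketch, with /mnt standing in for the
real mount point:

  # rewrite and restripe all block groups across the current 12 devices
  btrfs balance start /mnt
  # check progress from another shell
  btrfs balance status /mnt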