Same, but with the new Telemetry module, so you can better see what's going on:
$ perl6 -MTelemetry -e 'for (1, 10, 100, 1_000, 10_000, 100_000) -> $limit {
snap; my $x; my $y = "x" x 100; $x ~= $y for 1..$limit }'
Telemetry Report of Process #79213 (2017-11-10T16:24:00Z)
Number of Snapshots: 7
No supervisor thread has been running
Initial Size: 83236 Kbytes
Total Time: 37.40 seconds
Total CPU Usage: 37.37 seconds
wallclock  util%  max-rss
     1084  39.13      164
      124  25.20      192
      355  23.35      280
     6912  21.83     1940
   397311  12.48    83628
 36996468  12.49  3993120
--------- ------ --------
 37402254  12.49  4079324
Legend:
wallclock Number of microseconds elapsed
util% Percentage of CPU utilization (0..100%)
max-rss Maximum resident set size (in Kbytes)
So, going from 1_000 -> 10_000 -> 100_000 appears to have a quadratic effect on
memory and CPU usage: each tenfold increase in iterations grows the wallclock
time by roughly 57x and then 93x (6912 -> 397311 -> 36996468 microseconds),
approaching the 100x factor you would expect from quadratic behaviour, with 1
of the 8 CPUs fully in use the whole time.
Feels like a clear case of O(n²) to me.
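For comparison, a minimal (untested) sketch of the usual linear-time
alternative: push the 100-character chunks onto an array and join them once at
the end, so each character is copied only once rather than on every append:

$ perl6 -e 'for (1, 10, 100, 1_000, 10_000, 100_000) -> $limit {
    my @parts; my $y = "x" x 100;
    @parts.push($y) for 1..$limit;   # O(n): no repeated copying of the prefix
    my $x = @parts.join;             # every chunk is copied exactly once
    say "$limit: ", now - ENTER now }'

The expectation is roughly linear growth in wallclock per step, but that
should be confirmed by measuring rather than assumed.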
Liz
> On 10 Nov 2017, at 16:02, Daniel Green via RT <[email protected]>
> wrote:
>
> On Sun, 30 Jul 2017 09:49:39 -0700, ronaldxs wrote:
>> On Thu, 20 Jul 2017 20:32:12 -0700, [email protected] wrote:
>>> I did an experiment a while ago and found that string concatenation
>>> in a loop was O(n²) in Rakudo. I asked about it on IRC and jnthn
>>> explained to me that this was expected because strings are immutable
>>> (and therefore it wasn't worth RTing):
>>> https://irclog.perlgeek.de/perl6/2015-12-15#i_11720228
>>>
>>
>> If the string is represented by an array and is immutable, then each
>> append allocates a new array, copies in the existing prefix string, and
>> copies the new appended string at the end; the total work is the sum of
>> an arithmetic sequence and so O(n²). Other languages with immutable
>> strings, including Java, JavaScript and Python, have each found ways
>> around the problem. Perl 6 Str objects perform somewhat faster than
>> Java String objects (on my system – Java 8?), but Java has the mutable
>> StringBuffer and StringBuilder classes. Python v2.7 cheats by making
>> strings mutable for the case where a string has a reference count of 1
>> (http://blog.mclemon.io/python-efficient-string-concatenation-in-python-2016-edition
>> and see the comment in source stringobject.c for _PyString_Resize). For
>> JavaScript, Firefox/SpiderMonkey ropes seem to solve the problem
>> (https://stackoverflow.com/questions/7299010/why-is-string-concatenation-faster-than-array-join),
>> which is also solved in other ways by V8 (Node.js) etc. As noted on
>> IRC, the Perl 6 native “str” also has an optimization for this case
>> (https://irclog.perlgeek.de/perl6-dev/2017-07-27#i_14930258 – sorry
>> about any confusion on ropes and V8).
>>
>> The memory situation has improved since the ticket was opened, but at
>> 150_000 iterations, or 15 million characters, there is still a MoarVM
>> memory panic with a ulimit of 4 GB.
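For what it's worth, here is an (untested) variant of the timing loops used
elsewhere in this thread that uses the native str type mentioned in the IRC
link above; whether the concatenation optimization actually keeps this linear
on a given Rakudo/MoarVM build is something to verify by measuring, not a
given:

$ perl6 -e 'for (1, 10, 100, 1_000, 10_000, 100_000) -> $limit {
    my str $x = ""; my str $y = "x" x 100;   # native str, per the IRC discussion
    $x ~= $y for 1..$limit;
    say "$limit: ", now - ENTER now }'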
>
>
> The numbers seem a little better now (though still non-linear):
> $ /usr/bin/time perl6 -e 'for (1, 10, 100, 1_000, 10_000, 100_000) -> $limit
> {my $x; my $y = "x" x 100; $x ~= $y for (1..$limit); say "$limit: ", now -
> ENTER now}'
> 1: 0.0045150
> 10: 0.0004916
> 100: 0.00096704
> 1000: 0.0077635
> 10000: 0.4149077
> 100000: 40.1284791
> 37.36user 3.66system 0:41.02elapsed 100%CPU (0avgtext+0avgdata
> 3776812maxresident)k
>
> $ perl6 --version
> This is Rakudo version 2017.10-146-gf8e1a5faa built on MoarVM version
> 2017.10-64-g2ca7e8587