On 30/03/16 13:40, Tom Browder wrote:
On Tue, Mar 29, 2016 at 10:29 PM, Timo Paulssen <t...@wakelift.de> wrote:
On 03/30/2016 03:45 AM, Timo Paulssen wrote:
Could you try using $filename.IO.slurp.lines instead of $filename.IO.lines
and see if that makes things any faster?
...
Actually, the method on an IO::Handle is called "slurp-rest"; plain slurp only
works on a filename (an IO::Path), not on an already-opened handle.
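In other words, roughly this (just a quick, untested sketch; "data.txt" is only
a placeholder filename):

  my $whole = "data.txt".IO.slurp;   # slurp works on a path (IO::Path)
  my $fh    = open "data.txt";
  my $rest  = $fh.slurp-rest;        # on an open IO::Handle the method is slurp-rest
  $fh.close;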
- Timo
Timo, I'm trying to test a situation where I process every line as it is read
in. The scenario assumes the file is too large to slurp into memory, hence
reading one line at a time. Is there another way to do that? According to the
docs, "slurp-rest" reads the whole remainder of the file in one go.
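Roughly the shape I have in mind (only a sketch, with $filename standing in
for the real path):

  my $fh = open $filename;
  for $fh.lines -> $line {
      # process $line here, one line at a time
  }
  $fh.close;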
Thanks,
Best regards,
-Tom
I suggested this mostly because we've recently discovered a severe performance
problem with IO.lines. I'd like to know whether it also affects your benchmark,
and how big the saving might be for "moderately" sized data.
timo@schmand ~/p/e/SDL2_raw-p6 (master)> time perl6 -e 'for "heap-snapshot".IO.lines {}'
129.14user 0.87system 2:10.44elapsed 99%CPU (0avgtext+0avgdata 507580maxresident)k

timo@schmand ~/p/e/SDL2_raw-p6 (master)> time perl6 -e 'for "heap-snapshot".IO.slurp.lines {}'
1.92user 0.14system 0:02.07elapsed 99%CPU (0avgtext+0avgdata 537940maxresident)k

timo@schmand ~/p/e/SDL2_raw-p6 (master)> time perl6 -e 'for "heap-snapshot".IO.open.split("\n") {}'
192.04user 0.36system 3:12.70elapsed 99%CPU (0avgtext+0avgdata 1350204maxresident)k
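If it helps, the same comparison can also be timed in-process rather than with
time(1); a rough sketch (substitute your own file for "heap-snapshot"):

  my $t0 = now;
  for "heap-snapshot".IO.lines { }        # line-at-a-time via IO.lines
  say "IO.lines:    {now - $t0} seconds";

  $t0 = now;
  for "heap-snapshot".IO.slurp.lines { }  # slurp whole file, then split into lines
  say "slurp.lines: {now - $t0} seconds";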
Hope this clears up how I meant that :)
- Timo