Oh, and when the list of filenames is the simpler `"$*TMPDIR/RT132447.test" xx 100`, the problem also appears, but it seemed to take many more iterations to crash. That could just be chance, though.
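
Concretely, that simpler variant is the same loop as the one quoted below, with the gather/take list swapped for a plain repetition, roughly:

    while perl6 --ll-exception -e 'my @files = "$*TMPDIR/RT132447.test" xx 100; await do for @files { start .IO.slurp }'; do echo iteration done; done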

On Sat, Nov 18, 2017 at 4:01 PM Dan Zwell <d...@zwell.net> wrote:

> Thanks for taking the time to look into this! I can't reproduce it with
> that snippet, even if I make the file nonempty. But I can reproduce it with
> the following two snippets. (I could not reproduce when I populated the
> input file in the same script that does the await loop.)
>
> perl6 -e '"$*TMPDIR/RT132447.test".IO.spurt: "a" x 400_000;'
> while perl6 --ll-exception -e 'my @files = gather { for ^100 { take "$*TMPDIR/RT132447.test" } }; await do for @files { start .IO.slurp }'; do echo iteration done; done
>
> On Sat, Nov 18, 2017 at 3:28 AM Zoffix Znet via RT <
> perl6-bugs-follo...@perl.org> wrote:
>
>> On Fri, 17 Nov 2017 09:26:10 -0800, d...@zwell.net wrote:
>> > After more careful checking, I found the bug fix did make it into the
>> > October release. A bisect showed it was fixed in
>> > commit 6af44f8d38a02bbd0d68cfd014165d6e33e4d89a.
>> > [...]
>> > Slurping on the file handle you described still produces the
>> > exception: open($file-path).slurp: :close;
>> > but slurping from a filehandle without :close, followed by $fh.close,
>> > is fine.
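>> >
>> > In other words, roughly:
>> >
>> >     open($file-path).slurp: :close;   # still produces the exception
>> >
>> >     my $fh = open $file-path;         # this form is fine
>> >     $fh.slurp;
>> >     $fh.close;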
>>
>> Great. The behaviour you describe matches the commit; without :close the
>> buggy path is not taken.
>>
>>
>> > I found I can reproduce the error within 1-4 iterations when running the
>> > script in the Rakudo tree (built with --gen-moar and --gen-nqp, and with
>> > no --prefix, so it installs to ./install/).
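>> >
>> > For reference, that build is roughly the stock recipe (assuming the
>> > usual Configure.pl options), something like:
>> >
>> >     perl Configure.pl --gen-moar --gen-nqp
>> >     make && make install   # no --prefix given, so it lands in ./install/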
>>
>> I tried writing the test to cover this bug, but out of dozens of things I
>> tried (including running your original script in rakudo's dir), I just
>> can't reproduce the failure.
>>
>> Does this crash on your computer when using commits before the fix, by
>> any chance?
>>
>>     given $*TMPDIR.add: 'RT132447.test' { $^p.spurt: ''; await (start $p.slurp) xx 40_000 };
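>>
>> One way to check that, as a sketch (using the fix commit from your bisect,
>> then rebuilding as usual):
>>
>>     git checkout 6af44f8d38a02bbd0d68cfd014165d6e33e4d89a~1   # parent of the fix commit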
>>