(accidentally sent this privately only, now re-sending to the list)

Hello Christopher,

In the Perl 6 specification there are plans for lazy, memory-releasing
ways to parse strings that are either too large to fit into memory at
once or that are generated lazily (for example, streamed in over the
network or coming from "live" data sources). Sadly, none of those
features are implemented on either of our backends yet.

The simplest thing we have is the <cut> rule, which is supposed to
instruct the grammar engine to deallocate the parts of the input that
lie before the current cursor. Since it isn't implemented either, it's
not going to help you much at this stage.
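
For reference, here is roughly how that would look once implemented (a
sketch only; the grammar and its rules are made up):

    grammar Stream {
        token TOP { <record>+ }
        token record {
            \w+ ':' \N* \n
            <cut>   # would let the engine free all input before here
        }
    }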

Another thing that will be unhelpful: our lazy lists (such as the ones
you can build with gather/take, or what lines() gives you) keep every
item, from the very first to the last one you've requested, alive until
the whole list becomes garbage and gets collected.
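
To illustrate (a contrived sketch; the filename is just a stand-in):

    my $lines := 'big-file.txt'.IO.lines;   # a lazy sequence of lines
    say $lines[1_000_000];   # reifies and caches lines 0..1_000_000,
                             # which all stay in memory for as long as
                             # $lines itself is reachable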

It seems like you'll want to iterate through the data line by line
using get() rather than lines(), and parse the individual lines
manually; your grammar looks simple enough for that to work.
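
A minimal sketch of that approach, assuming your records are
line-oriented (the grammar and the filename are placeholders):

    grammar Record {                       # stand-in for your grammar
        token TOP { \w+ \s* '=' \s* \w+ }
    }

    my $fh = open 'big-input.txt';
    while (my $line = $fh.get).defined {   # get() returns Nil at EOF
        with Record.parse($line) -> $m {
            say $m;                        # handle one line at a time
        }
    }
    $fh.close;

That way only the current line is held in memory, rather than the
whole file.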

Something that does surprise me is that your tests seem to imply that :p
for subparse doesn't work. I'll look into that, because I believe it
ought to be implemented already; perhaps it's just not properly hooked
up.
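
For the record, the intended usage would be along these lines (assuming
:p behaves like the :p/:pos adverb on regex matches; the offset here is
made up):

    # start matching at offset 42 instead of the start of the string
    my $m = Record.subparse($input, :p(42));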

Hope that helps!
- Timo

