Another way to do it is to support a custom nl (similar to how we
support 「$*IN.nl-in = 0.chr」 now). Split may be overkill.

On 2017-08-18 08:40:32, c...@zoffix.com wrote:
> On Fri, 18 Aug 2017 08:35:18 -0700, alex.jakime...@gmail.com wrote:
> > Most command line tools support zero-separated input and output (grep
> > -z, find -print0, perl -0, sort -z, xargs -0, sed -z).
> >
> > And while you can use .stdout.lines to work on things line-by-line,
> > doing the same thing with null-byte separators is significantly
> > harder.
> >
> > <jnthn> Anyway, it's pretty easy to write yourself
> > <jnthn> Something like
> > <jnthn> supply {
> >             my $buffer = '';
> >             whenever $stdout {
> >                 $buffer ~= $_;
> >                 while $buffer.index("\0") -> $idx {
> >                     emit $buffer.substr(0, $idx);
> >                     $buffer .= substr($idx + 1);
> >                 }
> >                 LAST emit $buffer;
> >             }
> >         }
> >
> > I agree that it is not too hard, but it should be built in.
> >
> > One could argue that it should be *easier* to do this than to work
> > on stuff line-by-line. People usually don't expect newlines in
> > filenames, but they are legal, and therefore any code that expects
> > paths separated by anything other than NUL is broken. I'm not sure
> > we should go that far in trying to get the Huffman coding right, but
> > a built-in way to work with data like this would be a great step.
>
>
> That'd only work for strings, while .split can also split on regexes.
> I'd say we defer this until Cat (lazy strings) is implemented and then
> do the full-featured .split and .comb on it.
>
> The exact same issue exists in IO::Handle, which currently implements
> it by slurping the entire file first.
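The incremental buffering jnthn sketches is language-agnostic: accumulate
chunks, emit every complete NUL-terminated record, and flush the remainder
at end of stream. A rough Python rendering of the same technique (the
generator name `nul_records` is illustrative, not an existing API):

```python
def nul_records(chunks):
    """Yield NUL-separated records from an iterable of string chunks,
    carrying partial records across chunk boundaries."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        # Emit every complete record currently in the buffer.
        while (idx := buffer.find("\0")) != -1:
            yield buffer[:idx]
            buffer = buffer[idx + 1:]
    if buffer:
        # Trailing data with no final NUL is emitted as a last record.
        yield buffer

print(list(nul_records(["a\0b", "c\0", "d"])))  # → ['a', 'bc', 'd']
```

Note this streams in bounded memory (only one partial record is ever
buffered), which is exactly what the slurp-first IO::Handle approach
above does not do.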
