On 07/14/2014 01:37 PM, Paul Sandoz wrote:
On Jul 14, 2014, at 12:57 PM, Remi Forax <fo...@univ-mlv.fr> wrote:

On 07/14/2014 12:51 PM, Paul Sandoz wrote:
On Jul 12, 2014, at 5:41 PM, Remi Forax <fo...@univ-mlv.fr> wrote:

I was not able to find the answer to my question in the archive: why is Stream.concat not implemented like this?

  @SafeVarargs
  public static <T> Stream<T> concat(Stream<T>... streams) {
    return Arrays.stream(streams).flatMap(Function.identity());
  }

Because the capabilities and characteristics of the streams are then lost; e.g. in this case the splitting is governed by the number of streams passed in.
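A small sketch to make that concrete (my own illustration against the JDK 8 classes, not code from the original mail). It counts how many pieces repeated trySplit calls can produce: the real Stream.concat keeps splitting inside each input stream, while the flatMap-based variant can only split between the outer elements, i.e. the streams themselves.

  import java.util.Spliterator;
  import java.util.function.Function;
  import java.util.stream.Stream;

  public class SplitGranularity {

    // Count how many leaf pieces repeated trySplit calls can produce.
    static long countLeaves(Spliterator<?> s) {
      Spliterator<?> prefix = s.trySplit();
      return (prefix == null) ? 1 : countLeaves(prefix) + countLeaves(s);
    }

    public static void main(String[] args) {
      // JDK Stream.concat: splitting continues inside each input stream,
      // down to individual elements here.
      Spliterator<Integer> viaConcat =
          Stream.concat(Stream.of(1, 2, 3, 4), Stream.of(5, 6, 7, 8))
                .spliterator();
      System.out.println(countLeaves(viaConcat)); // 8

      // flatMap-based concat: the only split points are between the two
      // outer elements. (parallel() is needed so the wrapping spliterator
      // agrees to split at all.)
      Spliterator<Integer> viaFlatMap =
          Stream.of(Stream.of(1, 2, 3, 4), Stream.of(5, 6, 7, 8))
                .flatMap(Function.identity())
                .parallel()
                .spliterator();
      System.out.println(countLeaves(viaFlatMap)); // 2
    }
  }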
It seems to be a limitation of flatMap in that case, no?

That would be much harder to optimise, since each element gets mapped to a Stream of 0 or more elements when the pipeline is executed. The operation has no "global" view of all the streams to concatenate; all it knows is a mapping function. At the moment flatMap is quite a simple and efficient stateless operation, and I think it is best that it stays that way.

E.g. imagine the case of streaming over the lines of a file and flatMapping each line to one or more words.
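A minimal version of that example (the file name is hypothetical): each line maps to zero or more words, and how many only becomes known while the mapping function runs.

  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.Paths;
  import java.util.Arrays;
  import java.util.stream.Stream;

  public class WordsPerFile {
    public static void main(String[] args) throws IOException {
      try (Stream<String> lines = Files.lines(Paths.get("input.txt"))) {
        // Split each line on whitespace; a blank line contributes nothing.
        long words = lines
            .flatMap(line -> Arrays.stream(line.split("\\s+")))
            .filter(w -> !w.isEmpty())
            .count();
        System.out.println(words);
      }
    }
  }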

Paul.


And imagine the case where you know the size of a stream returned by flatMap; in that case, you may want to split before pumping the values of the stream.
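A rough illustration of that remark (mine, not from the thread): each inner stream below comes from a fixed-size list, yet the flat-mapped pipeline only reports the size of the outer source, so it has nothing with which to plan finer splits before the values are pumped through.

  import java.util.Arrays;
  import java.util.List;
  import java.util.Spliterator;

  public class InnerSizesUnused {
    public static void main(String[] args) {
      List<List<Integer>> groups = Arrays.asList(
          Arrays.asList(1, 2, 3, 4, 5),
          Arrays.asList(6, 7, 8, 9, 10));

      Spliterator<Integer> s = groups.stream()
                                     .flatMap(List::stream)
                                     .spliterator();

      // The estimate reflects the outer list (2 groups), not the 10
      // flattened values, and the exact-size characteristic is gone.
      System.out.println(s.estimateSize());                        // 2
      System.out.println(s.hasCharacteristics(Spliterator.SIZED)); // false
    }
  }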

Rémi


