On Aug 21, 2017, at 9:57 AM, Paul Sandoz <paul.san...@oracle.com> wrote:
> 
> Hi,
> 
> My preference, naming-wise, is to stick with the current naming scheme and 
> refer to such operations as reduceLeft and reduceRight. We have used reduce 
> both for the seed-accepting and the Optional-returning methods, and we don’t 
> use different names to distinguish those.

+1
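
For concreteness (purely an illustration of the naming, not a proposed spec),
the shapes would presumably mirror the existing reduce overloads:

    // Illustrative signatures only, following Stream.reduce's existing
    // seed-accepting and Optional-returning forms:
    Optional<T> reduceLeft(BinaryOperator<T> op);
    T reduceLeft(T seed, BinaryOperator<T> op);
    Optional<T> reduceRight(BinaryOperator<T> op);
    T reduceRight(T seed, BinaryOperator<T> op);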

> I don’t think we can rely on the sequential() trick, since as Tagir points 
> out it is global.

Yes, that's unfortunate for the present purpose.

The closest thing we have at present is forEachOrdered (as you imply).
The transform Stream.sorted looks like something close, but it can eagerly
discard the current ordering.  What I think we want, as an input to a
sequential reduce, is something that (a) preserves the encounter order and
(b) optionally transforms it in a predictable way (a no-op for reduceLeft,
a reverse for reduceRight).

So maybe there is something akin to Stream.sorted that wants to exist:
Stream.ordered(p), where p is some token that expresses at least the
null permutation (keep encounter order, but allow previous parallelism)
and the reverse permutation.
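
To make that concrete, here is a naive stand-in (all names hypothetical; a
real Stream.ordered would keep upstream parallelism, whereas this one just
buffers and replays):

    import java.util.*;
    import java.util.stream.*;

    // Hypothetical token; nothing like it exists in java.util.stream today.
    enum EncounterPermutation { KEEP, REVERSE }

    final class OrderedDemo {
        // Naive stand-in for the proposed Stream.ordered(p): buffer the
        // elements in encounter order, optionally reverse, then replay.
        static <T> Stream<T> ordered(Stream<T> s, EncounterPermutation p) {
            List<T> buf = s.collect(Collectors.toList());
            if (p == EncounterPermutation.REVERSE) Collections.reverse(buf);
            return buf.stream();
        }

        public static void main(String[] args) {
            // reduceRight falls out of reduceLeft by reversing and flipping
            // operands: for [1, 2, 3] and subtraction this is 1 - (2 - 3) == 2.
            Optional<Integer> r =
                ordered(Stream.of(1, 2, 3), EncounterPermutation.REVERSE)
                    .reduce((a, b) -> b - a);
            System.out.println(r.get());   // prints 2
        }
    }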

A complete, somewhat crazy generalization would be something like
Stream.ordered(LongBinaryOperator comparator), where the comparator
would decide where to place the stream elements based on an ordinal
assigned to encounter order.  I don't know whether there are useful
middle grounds: distinctions less powerful than an arbitrary comparator
but more powerful than just "forward/reverse".
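
In that scheme the two obvious permutations would be spelled something like
this (again purely hypothetical usage, not an existing API):

    // The comparator sees encounter-order ordinals and decides placement:
    // s.ordered((i, j) -> Long.compare(i, j))   // keep encounter order
    // s.ordered((i, j) -> Long.compare(j, i))   // reverse encounter order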

> My inclination would be to focus on reduceLeft, and spin off reduceRight into 
> a separate issue and in addition ponder support for a reverse operation, 
> since reduceRight can be built upon that.

Thanks.

> It’s possible to develop a reduceRight or reverse operation that does 
> something very similar to forEachOrdered when operating in parallel (the last 
> FJ task has to complete before its direct sibling to the left is given the 
> OK to proceed when ready). For good splitters like ArrayList this might work 
> well; for bad splitters like Iterator it will be terrible. It may be 
> possible in some cases to fuse reverse().toArray().

Yes, I think so.  Maybe the concept of "ordered()" is too weak; the thing
we are considering is really turning a stream into a data source with
two characteristics:  (a) it is ordered, and (b) it is random access.
If you have a random-access data sink (toArray) and you can determine
data-source indexes, then you can perform the actual reordering when you
store the result, instead of earlier.
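
As a minimal sketch of that fallback (assuming nothing smarter than buffering
is available), a reduceRight can be written today as toArray followed by a
backward fold; the names here are mine, not a proposed API:

    import java.util.Optional;
    import java.util.function.BinaryOperator;
    import java.util.stream.Stream;

    final class ReduceRightViaBuffer {
        // Buffer the stream in encounter order, then fold from the right.
        // This is roughly the reverse().toArray() fusion done by hand.
        @SuppressWarnings("unchecked")
        static <T> Optional<T> reduceRight(Stream<T> s, BinaryOperator<T> op) {
            Object[] buf = s.toArray();                // encounter order
            if (buf.length == 0) return Optional.empty();
            T acc = (T) buf[buf.length - 1];
            for (int i = buf.length - 2; i >= 0; i--)
                acc = op.apply((T) buf[i], acc);       // right-associative accumulation
            return Optional.of(acc);
        }
    }

For example, reduceRight(Stream.of("a", "b", "c"), (x, y) -> "(" + x + y + ")")
yields "(a(bc))", i.e. the fold groups to the right.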

So, here's a stream transform that gives those characteristics:

Stream<IndexedElement<T>> Stream<T>.indexed().

(Where IndexedElement<T> implements Map.Entry<Long,T>
and can be pattern-matched, etc.)

The semantics is that the elements are associated with a compact
zero-based set of indexes (which may be either easy or hard to
compute).  The indexes are compatible with forEachOrdered
(encounter order).  Those indexes can be used to store the
original stream's elements into any desired random-access pattern.
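
A sketch of the shape I mean (names illustrative; the interesting part, a
parallel-friendly index assignment at the spliterator level, is exactly what
this naive sequential stand-in does not attempt):

    import java.util.AbstractMap;
    import java.util.Map;
    import java.util.concurrent.atomic.AtomicLong;
    import java.util.stream.Stream;

    final class IndexedDemo {
        // Naive sequential stand-in for the proposed Stream.indexed():
        // pair each element with its zero-based encounter-order ordinal.
        // The stateful counter relies on sequential execution; a real
        // indexed() would derive ordinals from the split structure instead.
        static <T> Stream<Map.Entry<Long, T>> indexed(Stream<T> s) {
            AtomicLong next = new AtomicLong();
            return s.sequential()
                    .map(t -> new AbstractMap.SimpleImmutableEntry<>(
                            next.getAndIncrement(), t));
        }
    }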

From there, we can get buffering into an array (or a consumer
function), which can redirect the values to a reducing collector
that takes the values in whatever order the user wants.
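
Concretely, with the indexed(...) stand-in sketched above, the "reverse" never
has to touch the pipeline; it happens as the values land in the random-access
buffer.  A method that could sit next to that stand-in (it assumes the element
count n is known up front):

    // Place each element at the mirror-image slot of its encounter-order
    // index; a left-to-right pass over buf is then a right-to-left pass over
    // the original stream.  Any other permutation works the same way, and an
    // unordered forEach is fine because each element carries its own index.
    static <T> Object[] toReversedArray(Stream<T> s, int n) {
        Object[] buf = new Object[n];
        indexed(s).forEach(e -> buf[n - 1 - e.getKey().intValue()] = e.getValue());
        return buf;
    }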

(The indexed-element guy is also good for numeric array processing.
It should eventually be a value type.)

OK, so I think an indexed stream returning pairs of (long,T) is
the answer.

— John
