No worries! =)

Let me clarify the relation between Transducers and function composition.

The main components in the framework are the so-called ReducingFunctions, which are the operations you want to perform. They are functions that take two arguments, an 'intermediate value' and a 'current element', and map them to a new intermediate value, i.e.,
rf : A x I -> A.
In the example, #nextPut: is a reducing function, since it takes a stream and an element to put on the stream (I assume #nextPut: returns the stream itself).
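
For illustration, such a reducing function can be written as a plain
two-argument block (a sketch of mine, not part of the framework; the
#yourself ensures the stream itself is answered as the new intermediate
value):

  rf := [:stream :char | stream nextPut: char; yourself].
  rf value: aStream value: $a. "answers aStream with $a written to it"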

Basic operations like mapping, filtering, partitioning etc. are generic and independent of streams/collections/whatsoever. Hence, they should be reusable. This can be achieved by Transducers, which are objects that take a reducing function and transform it to incorporate the additional functionality, e.g., mapping. A transducer's signature is similar to
xf : (A x I -> A) -> (A x I -> A).
The classic approach adds these basic operations by wrapping the data (collections/streams). In contrast, transducers add them to the operations.
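
To make this concrete, here is a minimal block-based sketch (mine, not
the library's actual implementation) of a mapping transducer: it takes
a reducing function rf and answers a new reducing function that sends
each element through fn before passing it on:

  fn := [:char | char asUppercase].
  mapping := [:rf | [:acc :each | rf value: acc value: (fn value: each)]].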

Function composition of transducer objects chains multiple basic operations and allows attaching them to a reducing function. In fact, the implementation uses function composition for exactly this purpose. However, it's up to the context how to make use of these functions, e.g., via #reduce:init:.
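
Sticking to plain blocks, composing two such transformers and driving
the result by hand might look like this (again only a sketch; the
actual library composes transducer objects and leaves the iteration to
the context):

  doubling := [:rf | [:acc :each | rf value: acc value: each * 2]].
  evensOnly := [:rf | [:acc :each |
    each even ifTrue: [rf value: acc value: each] ifFalse: [acc]]].
  "evensOnly sees each element first; surviving elements get doubled"
  step := evensOnly value:
    (doubling value: [:coll :each | coll add: each; yourself]).
  result := #(1 2 3 4)
    inject: OrderedCollection new
    into: step.
  "result contains 4 and 8"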

Feel free to ask if anything remains unclear! =)

Best, Steffen


On .07.2018 at 16:20, <he...@mailbox.sk> wrote:

Solutions to different problems.

I proposed a simple generic thing that only composes functions, allowing for transformation of a block argument.

Transducers seem like a streaming, data-flow-specific thing.

Maybe yours helps with the original problem in the long run.

I just tried to find something that solves a more specific part of it while being generic enough to help in other places as well.

Just pointing it out so there isn't a perception that they are competing to solve the same problem and only one should be selected.

Herby

On July 3, 2018 3:57:21 PM GMT+02:00, "Steffen Märcker" <merk...@web.de> wrote:
I think streams and functional composition match up nicely, and
transducers are a way to do this. I've introduced them earlier on this
list. (I hesitated to weigh in on the discussion, as I won't have time
to work on the Pharo port of Transducers until October.)

Let me give a simplified example. I assume the basic messages are
#nextPut: and #close to write to aStream and close it.

  aString
    transduce: LineEndCrLf flatMap
    reduce: (#nextPut: completing: #close)
    init: aStream

* Let aString be the source, i.e., some object that yields a sequence of characters:
  a CR b
* Let LineEndCrLf be a function that maps CR to #(CR LF):
  a CR b -> a #(CR LF) b
* #flatMap embeds #(CR LF) into the sequence:
  a CR LF b
* (#nextPut: completing: #close) puts each character on the stream and calls #close at the end:
  aStream
    nextPut: $a;
    nextPut: CR;
    nextPut: LF;
    nextPut: $b;
    close;
    yourself.
* #transduce:reduce:init: actually starts the writing process.

First, (LineEndCrLf flatMap) is composable with other
transformations, e.g., encoding. The example above would change to:

  aString
    transduce: LineEndCrLf flatMap * EncodeUTF8 flatMap
    reduce: (#nextPut: completing: #close)
    init: aByteStream

LineEndCrLf and EncodeUTF8 only have to know how to process single
characters. Hence, they are highly reusable.
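
Because the transducer is independent of the sink, the very same
pipeline can, for instance, collect characters into an
OrderedCollection instead of writing to a stream (a sketch, assuming
#transduce:reduce:init: also accepts a plain two-argument block as the
reducing function):

  aString
    transduce: LineEndCrLf flatMap
    reduce: [:coll :char | coll add: char; yourself]
    init: OrderedCollection new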

Second, as the source, the transformations, the writing process, and
the data sink are all distinct objects, we can freely interact with
them and build arbitrary pipelines. It is straightforward to come up
with iteration methods other than #reduce:init:, e.g., step-wise
processing of streams, as sketched below.
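
For example, a context could drive the transformed reducing function
one element at a time instead of folding eagerly. A block-based sketch,
where step stands for a transformed reducing function as in the
sketches above and sourceStream is a hypothetical read stream over the
input:

  acc := aStream.
  [sourceStream atEnd] whileFalse:
    [acc := step value: acc value: sourceStream next].
  "once the source is exhausted, the completing action (e.g. #close)
   would be triggered"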

Best, Steffen
