I think streams and functional composition match up nicely, and
transducers are a way to do this. I introduced them earlier on this
list. (I hesitated to weigh in on the discussion, as I won't have time to
work on the Pharo port of Transducers until October.)
Let me give a simplified example. I assume the basic messages are
#nextPut: and #close to write to aStream and close it.
aString
   transduce: LineEndCrLf flatMap
   reduce: (#nextPut: completing: #close)
   init: aStream
* Let aString be the source, i.e., some object that yields a sequence of
characters:
a CR b
* Let LineEndCrLf be a function that maps CR to #(CR LF):
a CR b -> a #(CR LF) b
* #flatMap embeds #(CR LF) into the sequence:
a CR LF b
* (#nextPut: completing: #close) puts each character on the stream and
calls #close at the end:
aStream
   nextPut: $a;
   nextPut: CR;
   nextPut: LF;
   nextPut: $b;
   close;
   yourself.
* #transduce:reduce:init: actually starts the writing process.
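To make the walkthrough concrete, here is a minimal plain-Pharo sketch of
the same pipeline without the Transducers package. LineEndCrLf is modeled
as an ordinary block and the flatMap step is inlined by hand, so all names
are illustrative rather than the library's API:

| lineEndCrLf input output |
"LineEndCrLf as a plain block: maps one character to a small sequence."
lineEndCrLf := [:char |
   char = Character cr
      ifTrue: [{Character cr. Character lf}]
      ifFalse: [{char}]].
input := 'a', String cr, 'b'.
output := WriteStream on: String new.
"flatMap and reduce by hand: expand each character, write each result."
input do: [:each |
   (lineEndCrLf value: each) do: [:c | output nextPut: c]].
output contents "=> 'a', CR, LF, 'b'"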
First, (LineEndCrLf flatMap) is composable with other
transformations, e.g., encoding. The example above would change to:
aString
   transduce: LineEndCrLf flatMap * EncodeUTF8 flatMap
   reduce: (#nextPut: completing: #close)
   init: aByteStream
LineEndCrLf and EncodeUTF8 only have to know how to process single
characters. Hence, they are highly reusable.
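As an illustration of that reusability, two single-character expansions
compose into one without either knowing about the other. The sketch below
again uses plain blocks; toUpper merely stands in for a real step such as
EncodeUTF8:

| lineEndCrLf toUpper composed output |
lineEndCrLf := [:char |
   char = Character cr
      ifTrue: [{Character cr. Character lf}]
      ifFalse: [{char}]].
"A second character-level step; uppercasing stands in for encoding."
toUpper := [:char | {char asUppercase}].
"Compose the expansions: apply lineEndCrLf, then toUpper to each result."
composed := [:char | (lineEndCrLf value: char) flatCollect: toUpper].
output := WriteStream on: String new.
('a', String cr, 'b') do: [:each |
   (composed value: each) do: [:c | output nextPut: c]].
output contents "=> 'A', CR, LF, 'B'"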
Second, since the source, the transformations, the writing process, and
the data sink are distinct objects, we can freely combine them and build
arbitrary pipelines. It is straightforward to come up with iteration
methods other than #reduce:init:, e.g., step-wise processing of
streams, as sketched below.
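For instance, step-wise processing is just repeated application of the
reducing function under the caller's control. A hand-rolled sketch, again
illustrative rather than the library's API:

| step sink |
"The reducing function from the example, written as a block:
it takes the sink and one element and answers the updated sink."
step := [:stream :char | stream nextPut: char; yourself].
sink := WriteStream on: String new.
"The caller decides when to feed the next element."
sink := step value: sink value: $a.
sink := step value: sink value: $b.
sink contents "=> 'ab'"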
Best, Steffen