Hi Matt,

On 29.04.2018 at 20:30, Matt Sicker wrote:
> On 26 April 2018 at 14:38, Oliver Heger <oliver.he...@oliver-heger.de>
> wrote:
> 
>> Recently I had a closer look at streaming libraries like Akka Streams.
>> So I had the idea to model the problem in a similar way:
>>
> 
> I've used Akka Streams a bit in the past, and while it's pretty awesome, I
> feel as though taking this approach would at least require collaboration
> with an existing reactive streams library or a new Commons one. In that
> case, I wonder if it's worth competing with existing RS APIs like RxJava,
> Reactor, Vert.x, and Akka Streams. I'm not even sure if any RS people are
> active here at Commons as it is.

This matches my own line of thought. Developing our own streaming
library would be far too ambitious and is also not needed, as some
really powerful libraries already exist. So I see the following options:
- Provide an API that resembles the core concepts of such libraries
without going as deep or offering comparable flexibility. That would be
my first choice and is probably also the route Stefan is following.
- Integrate with one of the existing libraries, e.g. by implementing
special transformers, sources, and/or sinks that handle compression or
extraction (see the sketch below). I am not sure whether this fits into
Commons, as we currently do not implement extensions for specific
third-party libraries. Just choosing one or more libraries to support
would probably already be a lengthy discussion.
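
Just to illustrate what the second option could look like, here is a
rough sketch that exposes the entries of an archive as a Reactor Flux
on top of [compress]. Picking Reactor is only an example, and the
ArchiveEntrySource class and its methods are invented for this sketch;
only the ArchiveStreamFactory/ArchiveInputStream calls are existing
[compress] API:

import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

import org.apache.commons.compress.archivers.ArchiveEntry;
import org.apache.commons.compress.archivers.ArchiveException;
import org.apache.commons.compress.archivers.ArchiveInputStream;
import org.apache.commons.compress.archivers.ArchiveStreamFactory;

import reactor.core.publisher.Flux;
import reactor.core.publisher.SynchronousSink;

/** Sketch of a reactive "source" for archive entries (option 2). */
public final class ArchiveEntrySource {

    /** Emits the entries of the given archive, closing the stream when done. */
    public static Flux<ArchiveEntry> entries(final Path archive) {
        return Flux.using(
                () -> open(archive),
                in -> Flux.generate(sink -> nextEntry(in, sink)),
                ArchiveEntrySource::closeQuietly);
    }

    private static ArchiveInputStream open(final Path archive)
            throws IOException, ArchiveException {
        final InputStream in = new BufferedInputStream(Files.newInputStream(archive));
        // auto-detects the archive format (zip, tar, ...)
        return new ArchiveStreamFactory().createArchiveInputStream(in);
    }

    private static void nextEntry(final ArchiveInputStream in,
            final SynchronousSink<ArchiveEntry> sink) {
        try {
            final ArchiveEntry entry = in.getNextEntry();
            if (entry == null) {
                sink.complete();   // no more entries in the archive
            } else {
                sink.next(entry);  // emit one entry per downstream request
            }
        } catch (final IOException e) {
            sink.error(e);
        }
    }

    private static void closeQuietly(final ArchiveInputStream in) {
        try {
            in.close();
        } catch (final IOException ignored) {
            // nothing sensible to do here
        }
    }
}

Filtering or transforming entries would then simply use the operators
Reactor already provides; matching sinks (writing entries into a new
archive or into a directory) would have to be added in the same style.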

> 
> 
>> An archive or deflate operation could be modeled by a flow from a source
>> via some filtering or modifying stages to a sink. The source and the
>> sink could both either refer to a directory or an archive file. In order
>> to create a new archive, the source would point to a directory and the
>> sink would represent the archive file to be created. To extract an
>> archive file, it would be the other way around. When both the source and
>> the sink point to an archive file, you have an update operation. So the
>> basic concepts can be combined in a natural way.
>>
> 
> This approach is interesting and could potentially be done without reactive
> streams. It would essentially be a similar-looking API, just implemented
> differently.
Yes, my first option above.
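
To make this first option a bit more concrete: a fluent API could
borrow the source/stage/sink vocabulary without depending on a reactive
streams implementation. All names in the following sketch (ArchiveFlow
and its methods) are invented for illustration; nothing like this
exists in [compress] today:

import java.io.IOException;
import java.nio.file.Path;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

import org.apache.commons.compress.archivers.ArchiveEntry;

/**
 * Hypothetical fluent API resembling a streaming pipeline, but backed
 * by a plain blocking implementation rather than a reactive streams
 * library.
 */
public interface ArchiveFlow {

    /** Source: a directory tree whose files become archive entries. */
    static ArchiveFlow fromDirectory(Path directory) {
        throw new UnsupportedOperationException("sketch only");
    }

    /** Source: an existing archive whose entries are streamed through the flow. */
    static ArchiveFlow fromArchive(Path archive) {
        throw new UnsupportedOperationException("sketch only");
    }

    /** Stage: keep only the entries matching the predicate. */
    ArchiveFlow filter(Predicate<ArchiveEntry> predicate);

    /** Stage: transform entries, e.g. to rename them. */
    ArchiveFlow map(UnaryOperator<ArchiveEntry> mapper);

    /** Sink: write the flow into a new archive of the given format. */
    void toArchive(Path archive, String format) throws IOException;

    /** Sink: extract the flow into a directory. */
    void toDirectory(Path directory) throws IOException;
}

A create operation would then be fromDirectory(...).toArchive(...),
extraction would be fromArchive(...).toDirectory(...), and an update
would chain fromArchive(...).toArchive(...), matching the combinations
described above.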

> 
> 
>> There could be stages that filter for files or archive entries to select
>> the content of an archive file to be created or the files to be
>> extracted. Maybe it makes also sense to map on entries to manipulate
>> them somehow (rename?).
>>
> 
> Now you're really starting to describe a high level framework. While this
> sounds really neat, I feel like it might be beyond the scope of Commons and
> would warrant its own project at that point.
> 
> 
Perhaps. Maybe a stream-like API to move or copy files could fit into
[io]? Then [compress] could integrate with this. But I am aware that
this is also an ambitious project and goes beyond what [compress]
currently needs.
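
As a usage-level illustration of such filter and map stages, here is a
small example based on the hypothetical ArchiveFlow type sketched
above; again, none of this is existing [io] or [compress] API:

import java.nio.file.Paths;

import org.apache.commons.compress.archivers.ArchiveEntry;

public class ArchiveFlowExample {

    public static void main(String[] args) throws Exception {
        // Extract only the Java sources of an archive into a directory.
        ArchiveFlow.fromArchive(Paths.get("sources.zip"))
                .filter(entry -> entry.getName().endsWith(".java"))
                .toDirectory(Paths.get("extracted"));

        // Update scenario: source and sink are both archives; a map
        // stage could rename entries on the way.
        ArchiveFlow.fromArchive(Paths.get("old.tar"))
                .map(ArchiveFlowExample::renameEntry)
                .toArchive(Paths.get("new.tar"), "tar");
    }

    /** Placeholder for an entry-renaming transformation. */
    private static ArchiveEntry renameEntry(ArchiveEntry entry) {
        // ArchiveEntry itself is read-only, so a real implementation
        // would have to wrap or copy the entry; omitted in this sketch.
        return entry;
    }
}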

Oliver

