OK Victor Klang:  would you please be so kind as to elaborate with some 
helpful details on your prior reply?  ;0)

1) My reading of the docs is that the '.buffer' API is needed for the case 
where the Source streams data faster than the Sink can consume it; 
otherwise one can run into 'Out of Memory' errors on the Sink side.  i.e. 
the (default) Akka Streams API does not place any memory bound on the Sink.

Please let me know if I'm reading this correctly.

Otherwise, please explain what the (default) Akka Streams implementation 
does to bound memory for a Stream of events running through it.
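
For reference, here's the kind of minimal test I have in mind for (1).  It 
is just a sketch, assuming the Akka 2.4.x APIs (ActorSystem / 
ActorMaterializer), with no explicit .buffer call anywhere:

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}

object BackpressureDemo extends App {
  implicit val system = ActorSystem("backpressure-demo")
  implicit val materializer = ActorMaterializer()

  // A fast Source feeding a deliberately slow Sink, with NO explicit
  // .buffer call. If the defaults bound memory, demand from the Sink
  // should throttle the Source.
  Source(1 to 1000000)
    .map { i => println(s"emitted  $i"); i }
    .runWith(Sink.foreach[Int] { i =>
      Thread.sleep(100) // simulate a slow consumer (fine for a demo)
      println(s"consumed $i")
    })
}

If the defaults bound memory as I hope, I'd expect 'emitted' to run only 
slightly ahead of 'consumed' (by each stage's small internal buffer), 
rather than the Source racing a million elements ahead.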

2) I found Kevin Weber's helpful blog post about Akka Streams; but when I 
then Googled around to find out where Subscriber.onNext(...) gets called, 
I wasn't able to find a clear example of it.  So I'm now guessing that 
call is buried somewhere inside the implementation of the 
higher-level-abstraction Graph DSL APIs.

Please let me know if I'm reading this correctly.

Otherwise, please just reply with a link to a focused source file in a 
GitHub repo which demonstrates correct usage of that API (i.e. would it 
somehow have to get called through some '.map' function on the 
event-elements of a Stream, and what would that calling syntax even look 
like given a Graph specified with both broadcast and merge paths?  See my 
sketch below.)
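
To make the question concrete, here's roughly the shape of Graph I mean.  
It's a sketch based on the 2.4.x GraphDSL, with placeholder f1/f2 flows; 
notably, there is no Subscriber.onNext(...) call anywhere in the user code:

import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, ClosedShape}
import akka.stream.scaladsl._

object GraphDemo extends App {
  implicit val system = ActorSystem("graph-demo")
  implicit val materializer = ActorMaterializer()

  RunnableGraph.fromGraph(GraphDSL.create() { implicit b =>
    import GraphDSL.Implicits._

    val in  = Source(1 to 10)
    val out = Sink.foreach[Int](i => println(i))

    val bcast = b.add(Broadcast[Int](2))
    val merge = b.add(Merge[Int](2))

    // Placeholder flows standing in for real processing stages.
    val f1 = Flow[Int].map(_ + 10)
    val f2 = Flow[Int].map(_ * 2)

    // User code only wires stages together; Subscriber.onNext(...)
    // never appears, so presumably the materializer drives it.
    in ~> bcast ~> f1 ~> merge ~> out
          bcast ~> f2 ~> merge

    ClosedShape
  }).run()
}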

3) My reading of the docs is that backpressure to the Source won't get 
triggered until the buffer holds N >= BUFFER_MAX elements, where 
BUFFER_MAX is the capacity specified in the Source.buffer(...) call.

Please let me know if I'm reading this correctly.
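
i.e. something like the following sketch (slowStage is just a placeholder 
name, and an implicit ActorMaterializer is assumed to be in scope, as 
above).  I'd expect the Source to be backpressured only once the 
10-element buffer fills; whereas with OverflowStrategy.dropHead, as in my 
snippet quoted below, I believe the buffer would instead drop its oldest 
element and never backpressure the Source:

import akka.stream.OverflowStrategy
import akka.stream.scaladsl.{Flow, Sink, Source}

// Placeholder stage that consumes slowly, so the buffer can fill up.
val slowStage = Flow[Int].map { i => Thread.sleep(50); i }

Source(1 to 100)
  .buffer(10, OverflowStrategy.backpressure) // backpressure once full
  .via(slowStage)
  .runWith(Sink.ignore)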

4) My reading of the docs is that a Stream's elements are all of a uniform 
type, and that the number passed into the Source.buffer(...) call refers 
to the number of elements of that type to be buffered, rather than the 
number of Bytes.

Please let me know if I'm reading this correctly.

Maybe I could also try this out by using a simple String sequence as the 
Source, and then varying the buffer size to see what comes out at the 
Sink; a sketch follows.
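
Something like this sketch is what I have in mind (the strings and sizes 
are arbitrary, and an implicit ActorMaterializer is again assumed).  If 
I'm right, the buffer holds up to 5 elements regardless of how many bytes 
each String occupies:

import akka.stream.OverflowStrategy
import akka.stream.scaladsl.{Sink, Source}

// Three elements with wildly different byte sizes.
val strings = List("a", "b" * 1000, "c" * 100000)

Source(strings)
  .buffer(5, OverflowStrategy.backpressure) // 5 ELEMENTS, not 5 bytes
  .runWith(Sink.foreach[String](s => println(s"element of ${s.length} chars")))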

THANKS in advance for any details to advance my Newbie understanding of 
this!




On Wednesday, September 21, 2016 at 11:01:12 AM UTC-7, Dagny T wrote:
>
>
> Just wanted to check with folks if I had the correct implementation for 
> how to protect from blowing up memory when working with Akka Streams.
>
> I've merged a Lightbend blog post's code with the latest API changes for 
> Akka v2.4.9 and the latest documentation about buffered streams in the 
> v2.4.9 API Docs.
>
> However, none of those covers the questions I have.  Please see the 
> question comments on the code snippet below!  THANKS in advance for any 
> insights!
>
> // TODO 3: MODIFIED to call the buffer API within the Graph mapping -- check assumptions!
> //   - the INTERNAL Akka implementation calls onNext() to fetch the next BUFFERED batch,
> //     so you don't have to worry about it as a DEV?
> //   - the NUMERIC bound of 10 refers to the NUMBER of elements (of possibly complex
> //     types) on a UNIFORM-ELEMENT-TYPED stream, rather than Bytes, right?
> //   - if the source produces N < BUFFER_MAX elements, those are simply passed through
> //     the pipeline without waiting to accumulate BUFFER_MAX elements
>
> inputSource.buffer(10, OverflowStrategy.dropHead) ~> f1 ~> ...
>
>
