Re: [akka-user] Inspectable Inboxes? Resolving a large state processing problem.

2016-05-10 Thread atomly
Not strictly akka, but the design seems relatively straightforward. Use
some form of event sourcing or message queue-- Kafka seems popular for this
type of thing-- to submit/coordinate processing. Give every ten minute
iteration a unique id. When you process an item in that iteration, note
that somewhere (hazelcast, memcached, redis, etc). If an item is already
marked as processed in this iteration, move on.
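The marking step can be sketched in a few lines. This is a hypothetical illustration in which a local set stands in for the shared store (Redis, Hazelcast, memcached); in a real deployment the "claim" would have to be an atomic operation on that store (e.g. Redis SETNX with the iteration id in the key):

```scala
import scala.collection.mutable

// Hypothetical dedup helper: a local Set stands in for the shared store.
object IterationDedup {
  private val processed = mutable.Set.empty[(String, String)]

  // True the first time an item is claimed within an iteration, false after,
  // so a second submitter in the same iteration simply moves on.
  def tryClaim(iterationId: String, itemId: String): Boolean =
    processed.synchronized(processed.add((iterationId, itemId)))
}
```

A submitter calls tryClaim before queuing an item and skips it when the call returns false; a new iteration id naturally resets the bookkeeping without clearing anything.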

atomly

On Tue, May 10, 2016 at 3:41 PM, kraythe  wrote:

> Ok, I am sorry, I can't be precise. NDAs and so on. Let me try to bullet
> point it.
>
> 1) Up to 10 million objects are in system.
> 2) At any time some of those 10 million can be ready to change state due
> to events in real life outside my control.
> 3) We check for eligible objects every 10 minutes and we never know how
> many are to be processed at any time.
> 4) When we check we examine which objects are eligible to change state and
> of those, which have already been submitted (we track the submitted ones in
> a distributed map)
> 5) When objects are to change state I submit them for processing. I am
> currently adding them to a map to track they were submitted.
> 6) The process repeats.
>
> Constraints:
>
> 1) I can't slow the time between checks.
> 2) It would overload the system if I were to submit millions of objects
> twice.
> 3) I can't guarantee a node doesn't die during processing.
>
> Desire:
>
> To do this entire process without using a replicated map to note which
> objects have been submitted.
>
> I will look at the article on event sourcing, but from my glance it seems
> to be what I am sort of doing now. However, having to maintain this map
> seems like a very un-akka solution to the problem, and I can't help thinking
> there is a pattern here that is escaping me.
>
> -- Robert
>
> On Tuesday, May 10, 2016 at 4:55:20 AM UTC-5, Akka Team wrote:
>>
>> I agree with Michael, I don't really understand what is actually needed
>> here, it is too big a wall of text :) But from what I understand, I second
>> Michael's assessment that you are probably after event sourcing and
>> akka-persistence.
>>
>>
>> -Endre
>>
>> On Mon, May 9, 2016 at 7:28 PM, Michael Frank 
>> wrote:
>>
>>> It would help if your problem statement was more concrete.  however, in
>>> my vague understanding of the problem, it seems like event sourcing would
>>> be an appropriate way to model your business logic:
>>> http://martinfowler.com/eaaDev/EventSourcing.html
>>>
>>> if that is the case, then i think using akka persistence and cluster
>>> sharding would be a good starting point.  your 'state changes' sound like
>>> Persistent Actors.  the problem of knowing whether messages have been
>>> processed or not sounds like it would be solved with persistent actor
>>> recovery (
>>> http://doc.akka.io/docs/akka/2.4.4/scala/persistence.html#Recovery).
>>>
>>> -Michael
>>>
>>>
>>> On 05/09/16 09:56, kraythe wrote:
>>>
>>> That's the thing: if there were humans with inboxes I could have a staff
>>> call them on the phone and check. :) Reprocessing the messages is a pretty
>>> simple solution IF the messages are small in number. When you get to the
>>> point where there are literally millions of events, the problem gets a bit
>>> more difficult to manage. If there are 10 million messages to process and
>>> processing could take 10 minutes, then if I check again 1 minute later,
>>> 8 million of the records still show unprocessed, and adding those 8 million
>>> back to the queue means I now have 16 million more messages to process.
>>> The next phase adds 6 million (2 million having been processed), so there
>>> are now 20 million messages, and so on. By the time I am done with the
>>> original set, I'll have another 30 million messages to process, all of
>>> which are a waste of computing power because they do nothing. Clearly I
>>> would like to avoid that. Setting the interval to be at least as long as
>>> processing the first 10 million takes is also not an option, because the
>>> time and the number of messages are both unknown variables.
>>>
>>> Right now I put the messages that need to be processed in a map with a
>>> key, and the process that runs every minute checks for messages not yet
>>> processed. It compares those ids against those in the map; if they are
>>> in the map it doesn't resubmit them. However, this doesn't seem to be a
>>> very Akkaesque solution to the problem. I am looking for ideas on how to
>>> handle it without the map, but it may be that I have to continue using
>>> the map to load the message queues.
>>>
>>>
>>> On Monday, May 9, 2016 at 2:33:54 AM UTC-5, √ wrote:

 I'm quite sure that inspecting the mbox will be costlier than
 reprocessing at those sizes.

 Come up with two different solutions that you could perform between
 humans having mailboxes. Pick the best of those.

 --
 Cheers,
 √
 On May 8, 2016 5:15 PM, "kraythe" 

Re: [akka-user] Re: DI and Testing Streams

2016-05-10 Thread Richard Rodseth
Not really, but thanks. My biggest challenge is that I have a composite
Source derived from multiple Slick 3 Sources.

eg.
val compositeSource = channelSource.flatMapConcat(ch => intervalSource(ch))

For the moment I have a mockable trait with methods to construct those
sources.


On Tue, May 10, 2016 at 2:33 AM, Akka Team  wrote:

> Hi Richard,
>
> In streams, the best way to inject dependencies is to make them separate
> stages/modules, where applicable. For example you can model a service that
> takes requests and provides some responses as a Flow. Now you can freely
> use it in tests with various test Sources/Sinks or probes. However, if you
> want to be able to stub out some underlying service this Flow depends on
> (maybe HTTP), you can still do it in various ways:
>
>  - if the low level service is something like A => Future[B] then you can
> just take it as a parameter when constructing the Flow
>  - if the low level service can be modeled as a Flow, then you can model
> your high-level service as a BidiFlow. in this case you can do something
> like:
>
> val lowLevelServiceStub: Flow[LowA, LowB] = ... //stub it
> val highLevelService: BidiFlow[A, LowA, LowB, B] = ... // service under
> test
> val testService: Flow[A, B] = highLevelService join lowLevelServiceStub
>
> I am not sure this is completely what you asked for though.
>
> -Endre
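The first option above (taking the low-level service as an `A => Future[B]` parameter) can be sketched without any Akka machinery; `lookup` and `enrich` are hypothetical names for illustration:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// The low-level service is just a function passed in at construction time,
// so a test can hand in a stub instead of the real (e.g. HTTP-backed) call.
def enrich(lookup: Int => Future[String])(ids: List[Int]): Future[List[String]] =
  Future.traverse(ids)(lookup)

// In a test, stub the service out:
val stubLookup: Int => Future[String] = id => Future.successful(s"item-$id")
val enriched = Await.result(enrich(stubLookup)(List(1, 2)), 2.seconds)
```

In the streams setting, `enrich` corresponds to the factory that builds the Flow, and the stub replaces the underlying service the same way.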
>
> On Fri, May 6, 2016 at 5:51 PM, Richard Rodseth 
> wrote:
>
>> I just re-read:
>>
>> http://techblog.realestate.com.au/to-kill-a-mockingtest/
>>
>> It's hard for me to see how to apply this thinking to a side effect like
>> alsoTo() , used for monitoring in my application.
>> On the input side, to make "pure" a function that creates a flow that's
>> built of multiple sources (using flatMapConcat) I could imagine function
>> parameters rather than a DbSources trait, and test code that provides
>> different functions (rather that stubbing the trait). But I'm not sure that
>>
>> case class DbSources(f1: A=>Source[B, NotUsed], f2: X=>Source[Y,NotUsed])
>>
>> is any better than stubable:
>>
>> trait DbSources {
>>   def f1(a:A):Source[B,NotUsed]
>>   def f2(x:X):Source[Y,NotUsed]
>> }
>>
>> Anyone else wrestled with the same?
>>
>> On Wed, May 4, 2016 at 10:03 PM, Richard Rodseth 
>> wrote:
>>
>>> I have some streams to test. Each one implements a particular "command".
>>> In recent days I have warmed up to using a combination of implicit
>>> parameters and constructor injection for DI.
>>>
>>>
>>> http://carefulescapades.blogspot.com/2012/05/using-implicit-parameters-for.html
>>>
>>> Some of my former singletons are now classes extending traits or
>>> abstract classes.
>>>
>>> But I have some remaining standalone functions (in Singletons) with
>>> rather long implicit parameter lists for their dependencies. eg:
>>>
>>> def command1(actualArgsForCommand...)(implicit ec: ExecutionContext, mat:
>>> ActorMaterializer, reader: SomeReader, writer: SomeWriter, monitoring:
>>> Monitoring):Future[SomeType]
>>>
>>> I can now mock SomeReader and SomeWriter, and use a test probe for the
>>> monitor actor (a dependency of MonitoringImpl). But the method signature is
>>> a bit ugly, even with the dependencies in their own parameter list.
>>>
>>> Is there some way to group the dependencies for a particular command in
>>> a case class or something while still having them "export" implicits, or
>>> would I have to redeclare within the method body?
>>> implicit val reader = groupedDeps.reader
>>> etc
>>>
>>> Or am I better off making a class per command (or related commands with
>>> the same dependencies), perhaps leaving the ec, and mat in method
>>> signatures, while converting the reader, writer and monitoring to class
>>> constructor arguments?
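One way the grouping question can go, sketched with plain function types standing in for the real reader/writer (all names hypothetical): if the case class declares its members as `implicit val`s, a single wildcard import re-exports them, so nothing needs to be redeclared in the method body:

```scala
// Dependencies grouped in one value; the implicit member becomes an
// eligible implicit again after `import deps._`.
case class CommandDeps(readerImpl: String => Int) {
  implicit val reader: String => Int = readerImpl
}

def command1(arg: String)(implicit reader: String => Int): Int = reader(arg)

val deps = CommandDeps(_.length)
import deps._ // brings the implicit `reader` into scope
val answer = command1("hello")
```

The same trick extends to several members (writer, monitoring, ...), at the cost of one `import` line per call site instead of a long implicit parameter list everywhere.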
>>>
>>
>> --
>> >> Read the docs: http://akka.io/docs/
>> >> Check the FAQ:
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>> >> Search the archives: https://groups.google.com/group/akka-user
>> ---
>> You received this message because you are subscribed to the Google Groups
>> "Akka User List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to akka-user+unsubscr...@googlegroups.com.
>> To post to this group, send email to akka-user@googlegroups.com.
>> Visit this group at https://groups.google.com/group/akka-user.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
>
> --
> Akka Team
> Typesafe - Reactive apps on the JVM
> Blog: letitcrash.com
> Twitter: @akkateam
>

Re: [akka-user] Inspectable Inboxes? Resolving a large state processing problem.

2016-05-10 Thread kraythe
Ok, I am sorry, I can't be precise. NDAs and so on. Let me try to bullet 
point it. 

1) Up to 10 million objects are in system. 
2) At any time some of those 10 million can be ready to change state due to 
events in real life outside my control. 
3) We check for eligible objects every 10 minutes and we never know how many
are to be processed at any time.
4) When we check we examine which objects are eligible to change state and 
of those, which have already been submitted (we track the submitted ones in 
a distributed map)
5) When objects are to change state I submit them for processing. I am 
currently adding them to a map to track they were submitted. 
6) The process repeats. 

Constraints: 

1) I can't slow the time between checks.
2) It would overload the system if I were to submit millions of objects
twice. 
3) I can't guarantee a node doesn't die during processing. 

Desire: 

To do this entire process without using a replicated map to note which 
objects have been submitted. 

I will look at the article on event sourcing, but from my glance it seems to
be what I am sort of doing now. However, having to maintain this map seems
like a very un-akka solution to the problem, and I can't help thinking there
is a pattern here that is escaping me.

-- Robert

On Tuesday, May 10, 2016 at 4:55:20 AM UTC-5, Akka Team wrote:
>
> I agree with Michael, I don't really understand what is actually needed
> here, it is too big a wall of text :) But from what I understand, I second
> Michael's assessment that you are probably after event sourcing and
> akka-persistence.
>
>
> -Endre
>
> On Mon, May 9, 2016 at 7:28 PM, Michael Frank  > wrote:
>
>> It would help if your problem statement was more concrete.  however, in 
>> my vague understanding of the problem, it seems like event sourcing would 
>> be an appropriate way to model your business logic: 
>> http://martinfowler.com/eaaDev/EventSourcing.html
>>
>> if that is the case, then i think using akka persistence and cluster 
>> sharding would be a good starting point.  your 'state changes' sound like 
>> Persistent Actors.  the problem of knowing whether messages have been 
>> processed or not sounds like it would be solved with persistent actor 
>> recovery (
>> http://doc.akka.io/docs/akka/2.4.4/scala/persistence.html#Recovery).
>>
>> -Michael
>>
>>
>> On 05/09/16 09:56, kraythe wrote:
>>
>> That's the thing: if there were humans with inboxes I could have a staff
>> call them on the phone and check. :) Reprocessing the messages is a pretty
>> simple solution IF the messages are small in number. When you get to the
>> point where there are literally millions of events, the problem gets a bit
>> more difficult to manage. If there are 10 million messages to process and
>> processing could take 10 minutes, then if I check again 1 minute later,
>> 8 million of the records still show unprocessed, and adding those 8 million
>> back to the queue means I now have 16 million more messages to process.
>> The next phase adds 6 million (2 million having been processed), so there
>> are now 20 million messages, and so on. By the time I am done with the
>> original set, I'll have another 30 million messages to process, all of
>> which are a waste of computing power because they do nothing. Clearly I
>> would like to avoid that. Setting the interval to be at least as long as
>> processing the first 10 million takes is also not an option, because the
>> time and the number of messages are both unknown variables.
>>
>> Right now I put the messages that need to be processed in a map with a
>> key, and the process that runs every minute checks for messages not yet
>> processed. It compares those ids against those in the map; if they are
>> in the map it doesn't resubmit them. However, this doesn't seem to be a
>> very Akkaesque solution to the problem. I am looking for ideas on how to
>> handle it without the map, but it may be that I have to continue using
>> the map to load the message queues.
>>
>>
>> On Monday, May 9, 2016 at 2:33:54 AM UTC-5, √ wrote: 
>>>
>>> I'm quite sure that inspecting the mbox will be costlier than 
>>> reprocessing at those sizes.
>>>
>>> Come up with two different solutions that you could perform between 
>>> humans having mailboxes. Pick the best of those.
>>>
>>> -- 
>>> Cheers,
>>> √
>>> On May 8, 2016 5:15 PM, "kraythe"  wrote:
>>>
 I have a process that has to manage a large amount of data. I want to 
 make the process reactive but it has to process every data element. The 
 data is stored in Hazelcast in a map (which is backed by a database but 
that detail is irrelevant) and the data is stateful. At each state change
 something has to be done either to the data or related data. So if we go 
 from State A to State B we might have to do something to another object in 
 the process in a transactional manner. When the data is in state A 

Re: [akka-user] Please help: is it Akka SupervisorStrategy bug or configuration issue?

2016-05-10 Thread Patrik Nordwall
Isn't that because you have not thrown the exception until after the sleep?

In general, don't block actor execution, since the actor will not be able to
react to any messages while it's blocked. Supervision is also based on messages.

/Patrik
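Not Akka's `pipe` pattern itself, but the idea behind it can be shown with plain Futures: run the slow, possibly failing call off the actor's thread and turn its outcome into a value (which `pipeTo` would then deliver back to the actor as a message), instead of blocking inside onReceive:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.Try

// Stand-in for the blocking third-party call that eventually throws.
def slowCall(): String = {
  Thread.sleep(50L)
  throw new NullPointerException("testing")
}

// The failure is captured as a value rather than thrown on the caller's
// thread; with Akka you would `pipeTo self` and react to the result message,
// letting the parent decide what to do based on the contained exception.
val outcome: Try[String] = Await.result(Future(Try(slowCall())), 2.seconds)
```

The `Await` here is only to make the sketch observable; inside an actor you would never await, only react to the piped message.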

On Tue, May 10, 2016 at 9:20 PM, Yan Pei  wrote:

> Below is a simple AKKA SupervisorStrategy example:
>
> The SupervisorStrategy catches the exception thrown from ChildActor when
> the Thread.sleep() in ChildActor.java is short, but it will not catch it
> when the sleep is too long. The message will not be delivered to ChildActor
> during the long sleep.
>
> public class ParentActor extends UntypedActor {
>     private ActorRef r;
>     private static SupervisorStrategy strategy = new OneForOneStrategy(10,
>             Duration.create("10 second"), new Function<Throwable, Directive>() {
>         @Override
>         public Directive apply(final Throwable t) {
>             if (t instanceof NullPointerException) {
>                 return restart();
>             } else if (t instanceof IllegalArgumentException) {
>                 return escalate();
>             } else {
>                 return stop();
>             }
>         }
>     });
>
>     @Override
>     public SupervisorStrategy supervisorStrategy() {
>         return strategy;
>     }
>
>     public ParentActor() {
>         r = getContext().actorOf(FromConfig.getInstance().props(
>                 Props.create(ChildActor.class).withRouter(new RoundRobinPool(1))),
>                 "child-actor");
>     }
>
>     @Override
>     public void onReceive(final Object msg) throws Exception {
>         if (msg instanceof String) {
>             r.tell(msg, getSender());
>         } else {
>             unhandled(msg);
>         }
>     }
> }
>
> public class ChildActor extends UntypedActor {
>
>     @Override
>     public void onReceive(final Object msg) {
>         if (msg instanceof String) {
>             try {
>                 Thread.sleep(10L);
>             } catch (Exception e) {}
>             throw new NullPointerException("testing");
>         }
>     }
> }
>
>
>
>
>



-- 

Patrik Nordwall
Akka Tech Lead
Lightbend  -  Reactive apps on the JVM
Twitter: @patriknw



Re: [akka-user] Please help: is it Akka SupervisorStrategy bug or configuration issue?

2016-05-10 Thread Yan Pei
Thank you Patrik,

  The reason I put Thread.sleep() here is to simulate the real use case in
which our KidActor needs to call a third-party API and throw an exception
if something goes wrong (like an unauthorized token). The API call might
take a long time before the exception happens.

  Could you please advise on the best practice in this case, if I want
ParentActor to make decisions according to the exceptions thrown from
KidActor?

Best,
Yan



On Tuesday, May 10, 2016 at 2:38:03 PM UTC-5, Patrik Nordwall wrote:
>
> Isn't that because you have not thrown the exception until after the sleep?
>
> In general, don't block actor execution, since the actor will not be able to
> react to any messages while it's blocked. Supervision is also based on messages.
>
> /Patrik
>
> On Tue, May 10, 2016 at 9:20 PM, Yan Pei  
> wrote:
>
>> Below is a simple AKKA SupervisorStrategy example:
>>
>> The SupervisorStrategy catches the exception thrown from ChildActor when
>> the Thread.sleep() in ChildActor.java is short, but it will not catch it
>> when the sleep is too long. The message will not be delivered to ChildActor
>> during the long sleep.
>>
>> public class ParentActor extends UntypedActor {
>>     private ActorRef r;
>>     private static SupervisorStrategy strategy = new OneForOneStrategy(10,
>>             Duration.create("10 second"), new Function<Throwable, Directive>() {
>>         @Override
>>         public Directive apply(final Throwable t) {
>>             if (t instanceof NullPointerException) {
>>                 return restart();
>>             } else if (t instanceof IllegalArgumentException) {
>>                 return escalate();
>>             } else {
>>                 return stop();
>>             }
>>         }
>>     });
>>
>>     @Override
>>     public SupervisorStrategy supervisorStrategy() {
>>         return strategy;
>>     }
>>
>>     public ParentActor() {
>>         r = getContext().actorOf(FromConfig.getInstance().props(
>>                 Props.create(ChildActor.class).withRouter(new RoundRobinPool(1))),
>>                 "child-actor");
>>     }
>>
>>     @Override
>>     public void onReceive(final Object msg) throws Exception {
>>         if (msg instanceof String) {
>>             r.tell(msg, getSender());
>>         } else {
>>             unhandled(msg);
>>         }
>>     }
>> }
>>
>> public class ChildActor extends UntypedActor {
>>
>>     @Override
>>     public void onReceive(final Object msg) {
>>         if (msg instanceof String) {
>>             try {
>>                 Thread.sleep(10L);
>>             } catch (Exception e) {}
>>             throw new NullPointerException("testing");
>>         }
>>     }
>> }
>>
>>
>>
>>
>>
>
>
>
> -- 
>
> Patrik Nordwall
> Akka Tech Lead
> Lightbend  -  Reactive apps on the JVM
> Twitter: @patriknw
>
>



[akka-user] Please help: is it Akka SupervisorStrategy bug or configuration issue?

2016-05-10 Thread Yan Pei
Below is a simple AKKA SupervisorStrategy example:

The SupervisorStrategy catches the exception thrown from ChildActor when
the Thread.sleep() in ChildActor.java is short, but it will not catch it
when the sleep is too long. The message will not be delivered to ChildActor
during the long sleep.

public class ParentActor extends UntypedActor {
    private ActorRef r;
    private static SupervisorStrategy strategy = new OneForOneStrategy(10,
            Duration.create("10 second"), new Function<Throwable, Directive>() {
        @Override
        public Directive apply(final Throwable t) {
            if (t instanceof NullPointerException) {
                return restart();
            } else if (t instanceof IllegalArgumentException) {
                return escalate();
            } else {
                return stop();
            }
        }
    });

    @Override
    public SupervisorStrategy supervisorStrategy() {
        return strategy;
    }

    public ParentActor() {
        r = getContext().actorOf(FromConfig.getInstance().props(
                Props.create(ChildActor.class).withRouter(new RoundRobinPool(1))),
                "child-actor");
    }

    @Override
    public void onReceive(final Object msg) throws Exception {
        if (msg instanceof String) {
            r.tell(msg, getSender());
        } else {
            unhandled(msg);
        }
    }
}

public class ChildActor extends UntypedActor {

    @Override
    public void onReceive(final Object msg) {
        if (msg instanceof String) {
            try {
                Thread.sleep(10L);
            } catch (Exception e) {}
            throw new NullPointerException("testing");
        }
    }
}







[akka-user] Manually rebalance cluster shards

2016-05-10 Thread Zack Angelo
Is there a programmatic way to rebalance when using cluster sharding? 

Occasionally the cluster will reach a state where all the shards are 
running on a single node, and we'd like a way to manually trigger a 
rebalance using an API. 



[akka-user] Re: How to assemble a Streams sink from multiple FileIO sinks?

2016-05-10 Thread Tom Peck
That's awesome thanks!

Here's the final service:

val rawFileSink = FileIO.toFile(file)
val thumbnailFileSink = FileIO.toFile(getFile(path, Thumbnail))
val watermarkedFileSink = FileIO.toFile(getFile(path, Watermarked))

val graph = Sink.fromGraph(GraphDSL.create(rawFileSink, 
thumbnailFileSink, watermarkedFileSink)((_, _, _)) {
  implicit builder => (rawSink, thumbSink, waterSink) => {
val streamFan = builder.add(Broadcast[ByteString](2))
val byteArrayFan = builder.add(Broadcast[Array[Byte]](2))

streamFan.out(0) ~> rawSink
streamFan.out(1) ~> byteAccumulator ~> byteArrayFan.in

byteArrayFan.out(0) ~> processorFlow(Thumbnail) ~> thumbSink
byteArrayFan.out(1) ~> processorFlow(Watermarked) ~> waterSink.in

SinkShape(streamFan.in)
  }
})

graph.mapMaterializedValue[Future[Try[Done]]](fs => 
Future.sequence(Seq(fs._1, fs._2, fs._3)).map(f => Success(Done)))

After which, you can simply do this in the play controller:
Accumulator(theSink).map(Right.apply)
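The `mapMaterializedValue` step above boils down to combining the three per-sink completion futures into one; the combination itself is plain Scala (the Long values here are stand-ins for the real `IOResult`s):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Success, Try}

// Stand-ins for the three sinks' materialized completion futures.
val raw = Future(1L)
val thumb = Future(2L)
val water = Future(3L)

// One future that completes only when all three sinks have finished,
// mirroring Future.sequence(Seq(fs._1, fs._2, fs._3)) in the graph above.
val done: Future[Try[String]] =
  Future.sequence(Seq(raw, thumb, water)).map(_ => Success("Done"))
val combined = Await.result(done, 2.seconds)
```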

On Tuesday, 10 May 2016 09:17:24 UTC+1, Tom Peck wrote:
>
> I'm trying to integrate an akka streams based flow in to my Play 2.5 app. 
>  The idea is that you can stream in a photo, then have it written to disk 
> as the raw file, a thumbnailed version and a watermarked version.
>
>
> I managed to get this working using a graph something like this:
>
>
> val byteAccumulator = Flow[ByteString]
>   .fold(new ByteStringBuilder())((builder, b) => builder ++= b.toArray)
>   .map(_.result().toArray)
>
> def toByteArray = Flow[ByteString].map(b => b.toArray)
>
> val graph = Flow.fromGraph(GraphDSL.create() { implicit builder =>
>   import GraphDSL.Implicits._
>
>   val streamFan = builder.add(Broadcast[ByteString](3))
>   val byteArrayFan = builder.add(Broadcast[Array[Byte]](2))
>   val output = builder.add(Flow[ByteString].map(x => Success(Done)))
>
>   val rawFileSink = FileIO.toFile(file)
>   val thumbnailFileSink = FileIO.toFile(getFile(path, Thumbnail))
>   val watermarkedFileSink = FileIO.toFile(getFile(path, Watermarked))
>
>   streamFan.out(0) ~> rawFileSink
>   streamFan.out(1) ~> byteAccumulator ~> byteArrayFan.in
>   streamFan.out(2) ~> output.in
>
>   byteArrayFan.out(0) ~> slowThumbnailProcessing ~> thumbnailFileSink
>   byteArrayFan.out(1) ~> slowWatermarkProcessing ~> watermarkedFileSink
>
>   FlowShape(streamFan.in, output.out)
> })
>
>
> graph
>
>   }
>
>
> Then I wire it in to my play controller using an accumulator like this:
>
> val sink = Sink.head[Try[Done]]
>
> val photoStorageParser = BodyParser { req =>
>   Accumulator(sink).through(graph).map(Right.apply)
> }
>
>
>
> The problem is that my two processed-file sinks aren't completing, and I'm
> getting zero sizes for both processed files, but not for the raw one. My
> theory is that the accumulator is only waiting on one of the outputs of my
> fan-out, so when the input stream completes and my byteAccumulator spits
> out the complete file, Play has already taken the materialized value from
> the output before the processing is finished.
>
> So, my questions are:
>
> Am I on the right track with this as far as my approach goes?
> What is the expected behaviour for running a graph like this?
> How can I bring all my sinks together to form one final sink?
>



[akka-user] Error when scheduling akka system with quartz scheduler

2016-05-10 Thread Amruta
I have scheduled a task that the actors carry out at a scheduled time (every
5 minutes). When I deploy my app the first time it runs fine, but on the
second and subsequent runs it gives the following error:


09:22:00.006 [schedulerFactoryBean_Worker-2] ERROR (:) - Job 
DEFAULT.jobDetailFactoryBean threw an unhandled Exception: 

java.lang.IllegalStateException: cannot create children while terminating 
or terminated

at akka.actor.dungeon.Children$class.makeChild(Children.scala:266) 
~[akka-actor_2.11-2.4.4.jar:?]

at akka.actor.dungeon.Children$class.attachChild(Children.scala:46) 
~[akka-actor_2.11-2.4.4.jar:?]

at akka.actor.ActorCell.attachChild(ActorCell.scala:374) 
~[akka-actor_2.11-2.4.4.jar:?]

at akka.actor.ActorSystemImpl.actorOf(ActorSystem.scala:589) 
~[akka-actor_2.11-2.4.4.jar:?]

at com.cisco.collab.arachne.job.SchedulingJob.executeInternal(
SchedulingJob.java:35) ~[SchedulingJob.class:?]

at org.springframework.scheduling.quartz.QuartzJobBean.execute(
QuartzJobBean.java:75) 
~[spring-context-support-4.1.5.RELEASE.jar:4.1.5.RELEASE]

at org.quartz.core.JobRunShell.run(JobRunShell.java:202) 
~[quartz-2.2.2.jar:?]

at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(
SimpleThreadPool.java:573) ~[quartz-2.2.2.jar:?]

09:22:00.007 [schedulerFactoryBean_Worker-2] ERROR (:) - Job 
(DEFAULT.jobDetailFactoryBean threw an exception.

org.quartz.SchedulerException: Job threw an unhandled exception.

at org.quartz.core.JobRunShell.run(JobRunShell.java:213) 
~[quartz-2.2.2.jar:?]

at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(
SimpleThreadPool.java:573) ~[quartz-2.2.2.jar:?]

Caused by: java.lang.IllegalStateException: cannot create children while 
terminating or terminated

at akka.actor.dungeon.Children$class.makeChild(Children.scala:266) 
~[akka-actor_2.11-2.4.4.jar:?]

at akka.actor.dungeon.Children$class.attachChild(Children.scala:46) 
~[akka-actor_2.11-2.4.4.jar:?]

at akka.actor.ActorCell.attachChild(ActorCell.scala:374) 
~[akka-actor_2.11-2.4.4.jar:?]

at akka.actor.ActorSystemImpl.actorOf(ActorSystem.scala:589) 
~[akka-actor_2.11-2.4.4.jar:?]

at com.cisco.collab.arachne.job.SchedulingJob.executeInternal(
SchedulingJob.java:35) ~[SchedulingJob.class:?]

at org.springframework.scheduling.quartz.QuartzJobBean.execute(
QuartzJobBean.java:75) 
~[spring-context-support-4.1.5.RELEASE.jar:4.1.5.RELEASE]

at org.quartz.core.JobRunShell.run(JobRunShell.java:202) 
~[quartz-2.2.2.jar:?]

... 1 more





I have used the reaper concept for my shutdown strategy, and I terminate the
system after all child actors are done working.



[akka-user] Akka Remoting Ports

2016-05-10 Thread enovo . soft
Hi,
I have two remote actor systems deployed on two different cloud platforms: 
one deployed on Google (port 80) and the other on AWS (port 2552). I am 
managing only the AWS machine; the Google machine is managed by the client.
I initiate the connection from the AWS actor to the Google actor, send a 
message and receive results back. I want to block all unnecessary ports, 
so I added two rules to the firewall:
1. Allow outbound connections to port 80 (to connect to the remote actor)
2. Allow incoming connections on port 2552 (to receive data back from the 
remote actor)

As soon as I blocked all other ports, the application stopped connecting to 
the remote actor. I have verified that the actor system is listening on 
port 2552 (  [akka.tcp://Client@amd-machine:2552]  ).

It turns out that the connection is using some dynamic port on the AWS side; 
here are the logs of traffic captured with the tcpdump command:

16:05:46.811270 IP 10.0.2.15.56756 > 102.150.24.190.80: Flags [S], seq 
1968780725, win 29200, options [mss 1460,sackOK,TS val 6205965 ecr 
0,nop,wscale 7], length 0
16:05:46.849800 IP 102.150.24.190.80 > 10.0.2.15.56756: Flags [S.], seq 
2602112001, ack 1968780726, win 65535, options [mss 1460], length 0
16:05:46.849841 IP 10.0.2.15.56756 > 102.150.24.190.80: Flags [.], ack 1, 
win 29200, length 0
16:05:46.850027 IP 10.0.2.15.56756 > 102.150.24.190.80: Flags [F.], seq 1, 
ack 1, win 29200, length 0
16:05:46.850164 IP 102.150.24.190.80 > 10.0.2.15.56756: Flags [.], ack 2, 
win 65535, length 0

where
102.150.24.190.80 is the Google actor address 
10.0.2.15.56756 is the AWS actor address 
As can be seen from the logs above, the actor system is picking a dynamic 
port for remote communication. 

Could anybody please help me understand why a dynamic port is being used 
when the actor system is listening on port 2552, and how I can make sure a 
dynamic port is not used?

Regards
Abud
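For what it's worth, the ephemeral source port seen above (56756) is chosen by the OS for the outbound TCP connection, which is normal for any TCP client; firewalls usually admit the return traffic by tracking established connections rather than by pinning the client port. The only port Akka lets you fix is the listener, e.g. (classic remoting on Akka 2.4; values illustrative):

```
akka.remote.netty.tcp {
  hostname = "amd-machine"  # the externally reachable address
  port     = 2552           # fixed inbound port (0 would pick a random one)
}
```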







Re: [akka-user] How to assemble a Streams sink from multiple FileIO sinks?

2016-05-10 Thread Akka Team
Hi Tom,

If I understood correctly, your layout looks like this (slightly simplified):

bytes --> broadcast --> map(x => Success(Done)) --> Sink.head
              |
              +--> image
              |
              +--> watermarked
              |
              +--> thumbnail

The first issue I see with the above layout is that the map stage will
emit a Success(Done) as soon as the very first byte has passed through it.
However, since the graph is concurrent, that byte might not have reached
any of the Sinks yet. I don't know the exact semantics of Play here if you
return an early-completed future while you have not consumed the full body,
so I don't know how that affects you.

What you really want is the futures returned by the different Sinks, since
only they can report when a file has been fully written, flushed and closed.

You will need to do something like this (pseudocode and simplified
somewhat):

val multiImageSink:
  Sink[ByteString, (Future[Done], Future[Done], Future[Done])] =
  Sink.fromGraph(GraphDSL.create(imgSink, wtrmrkSink, thumbSink)(Tuple3.apply) {
    implicit b => (img, watermark, thumb) =>
      val bcast = ...

      bcast ~> img
      bcast ~> watermark
      bcast ~> thumb

      SinkShape(bcast.in)
  })

The above code creates a Sink which reads ByteStrings and materializes into
a Tuple3 of Futures that each signal once the corresponding file has been
written. You can probably combine these Futures into one to be able to plug
it into Play.
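To produce the single future Play expects, the three materialized futures can be collapsed with Future.sequence — a sketch under the assumption that the tuple members above are named imgDone, wmDone and thumbDone:

```scala
import akka.Done
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Completes only when all three files have been written; fails if any sink fails.
def allFilesDone(imgDone: Future[Done],
                 wmDone: Future[Done],
                 thumbDone: Future[Done]): Future[Done] =
  Future.sequence(List(imgDone, wmDone, thumbDone)).map(_ => Done)
```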

-Endre

On Tue, May 10, 2016 at 10:14 AM, Tom Peck  wrote:

> I'm trying to integrate an akka streams based flow in to my Play 2.5 app.
> The idea is that you can stream in a photo, then have it written to disk as
> the raw file, a thumbnailed version and a watermarked version.
>
>
> I managed to get this working using a graph something like this:
>
>
> val byteAccumulator = Flow[ByteString].fold(new
> ByteStringBuilder())((builder, b) => {builder ++= b.toArray})
>
> .map(_.result().toArray)
>
>
> def toByteArray = Flow[ByteString].map(b => b.toArray)
>
>
> val graph = Flow.fromGraph(GraphDSL.create() {implicit builder =>
>
>   import GraphDSL.Implicits._
>
>   val streamFan = builder.add(Broadcast[ByteString](3))
>
>   val byteArrayFan = builder.add(Broadcast[Array[Byte]](2))
>
>   val output = builder.add(Flow[ByteString].map(x => Success(Done)))
>
>
>   val rawFileSink = FileIO.toFile(file)
>
>   val thumbnailFileSink = FileIO.toFile(getFile(path, Thumbnail))
>
>   val watermarkedFileSink = FileIO.toFile(getFile(path, Watermarked))
>
>
>   streamFan.out(0) ~> rawFileSink
>
>   streamFan.out(1) ~> byteAccumulator ~> byteArrayFan.in
>
>   streamFan.out(2) ~> output.in
>
>
>   byteArrayFan.out(0) ~> slowThumbnailProcessing ~> thumbnailFileSink
>
>   byteArrayFan.out(1) ~> slowWatermarkProcessing ~> watermarkedFileSink
>
>
>   FlowShape(streamFan.in, output.out)
>
> })
>
>
> graph
>
>   }
>
>
> Then I wire it in to my play controller using an accumulator like this:
>
>
> val sink = Sink.head[Try[Done]]
>
>
> val photoStorageParser = BodyParser { req =>
>
>  Accumulator(sink).through(graph).map(Right.apply)
>
> }
>
>
>
> The problem is that my two processed file sinks aren't completing, and I'm
> getting zero sizes for both processed files, but not the raw one.  My
> theory is that the accumulator is only waiting on one of the outputs of my
> fan-out, so when the input stream completes and my byteAccumulator spits
> out the complete file, Play has already got the materialized value from
> the output by the time the processing is finished.
>
>
> So, my questions are:
>
> Am I on the right track with this as far as my approach goes?
>
> What is the expected behaviour for running a graph like this?
>
> How can I bring all my sinks together to form one final sink?
>



-- 
Akka Team
Typesafe - Reactive apps on the JVM
Blog: letitcrash.com
Twitter: @akkateam


Re: [akka-user] Inspectable Inboxes? Resolving a large state processing problem.

2016-05-10 Thread Akka Team
I agree with Michael, I don't really understand what is actually needed
here; it is too big a wall of text :) But from what I understand, I second
Michael's assessment that you are probably after event sourcing and
akka-persistence.
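A rough sketch of what that could look like (hedged: event, command and actor names are illustrative, not from the thread) — a PersistentActor that records submissions as events, so a restarted node recovers which items are in flight and does not resubmit them:

```scala
import akka.persistence.PersistentActor

// Illustrative commands and events.
case class Submit(id: String)
case class MarkProcessed(id: String)
case class Submitted(id: String)
case class Processed(id: String)

class SubmissionTracker extends PersistentActor {
  override def persistenceId: String = "submission-tracker"

  private var inFlight = Set.empty[String]

  // Replayed on restart, so the submitted-but-unprocessed set survives crashes.
  override def receiveRecover: Receive = {
    case Submitted(id) => inFlight += id
    case Processed(id) => inFlight -= id
  }

  override def receiveCommand: Receive = {
    case Submit(id) if inFlight(id) => // already submitted; skip the duplicate
    case Submit(id) =>
      persist(Submitted(id)) { ev =>
        inFlight += ev.id
        // hand the item off to the actual processing actor here
      }
    case MarkProcessed(id) =>
      persist(Processed(id))(ev => inFlight -= ev.id)
  }
}
```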


-Endre

On Mon, May 9, 2016 at 7:28 PM, Michael Frank 
wrote:

> It would help if your problem statement was more concrete.  however, in my
> vague understanding of the problem, it seems like event sourcing would be
> an appropriate way to model your business logic:
> http://martinfowler.com/eaaDev/EventSourcing.html
>
> if that is the case, then i think using akka persistence and cluster
> sharding would be a good starting point.  your 'state changes' sound like
> Persistent Actors.  the problem of knowing whether messages have been
> processed or not sounds like it would be solved with persistent actor
> recovery (
> http://doc.akka.io/docs/akka/2.4.4/scala/persistence.html#Recovery).
>
> -Michael
>
>
> On 05/09/16 09:56, kraythe wrote:
>
> That's the thing; if there were humans with inboxes I could have a staff
> call them on the phone and check. :) Reprocessing the messages is a pretty
> simple solution IF the messages were small in number. When you get to the
> point where there are literally millions of events, the problem gets a bit
> more difficult to manage. If there are 10 million messages to process and
> they could take 10 minutes to process, and I check again 1 minute later,
> 8 million of the records still show unprocessed; if I then add those 8
> million back to the queue, I now have 16 million messages to process. The
> next check adds 6 million more to the queue (2 million having been
> processed), so there are now 20 million messages, and so on. By the time I
> am done with the original set, I'll have another 30 million messages to
> process, all of which are a waste of computing power because they do
> nothing. Clearly I would like to avoid that. Also, setting the interval
> long enough to be sure the first 10 million have been processed is not an
> option, because the time and the number of messages are both unknown
> variables.
>
> Right now I put the messages that need to be processed in a map with a
> key, and the process that runs every minute checks for messages not yet
> processed. It then compares those ids against those in the map; if they
> are in the map it doesn't resubmit them. However, this doesn't seem like a
> very Akka-esque solution to the problem. I am looking for ideas on how to
> handle it without using the map, but it may be that I have to keep using
> the map to load the message queues.
>
>
> On Monday, May 9, 2016 at 2:33:54 AM UTC-5, √ wrote:
>>
>> I'm quite sure that inspecting the mbox will be costlier than
>> reprocessing at those sizes.
>>
>> Come up with two different solutions that you could perform between
>> humans having mailboxes. Pick the best of those.
>>
>> --
>> Cheers,
>> √
>> On May 8, 2016 5:15 PM, "kraythe"  wrote:
>>
>>> I have a process that has to manage a large amount of data. I want to
>>> make the process reactive but it has to process every data element. The
>>> data is stored in Hazelcast in a map (which is backed by a database but
>>> that detail is irrelevant) and the data is stageful. At each state change
>>> something has to be done either to the data or related data. So if we go
>>> from State A to State B we might have to do something to another object in
>>> the process in a transactional manner. When the data is in state A a
>>> process finds the data and submits it to a map. Right now I have another
>>> thread reading from the map on intervals that are timed and if there is
>>> data in the map it processes the next entry in the map and so on.
>>>
>>> I would like to turn this process into an Akka actor process but the
>>> main stumbling block is to know what is already in the queue. Say I have 1m
>>> objects to process. At each interval the objects are checked if they can
>>> change state and if they can then they are put in the map to process. The
>>> problem is there could be a ton of these objects and they might take longer
>>> to process than the check interval. Furthermore, although it would not be
>>> damaging to the data it would be immensely wasteful to put them into the
>>> queue to process twice. Finally if the server crashed or something happened
>>> I would want to put them back into the queue if they are still in state A
>>> and should move to state B. Right now I can get the key set of the map and
>>> not submit them to the process if they are already in the map. If, instead,
>>> I change the system to Akka, then that ability changes. Whenever an object
>>> needs to change state, I would put it in a message inbox to an actor to
>>> process but I have no way to know what is already in that inbox so it makes
>>> the processing of the messages less durable. If a transaction fails or node
>>> fails I won't know that certain objects need to be processed 

Re: [akka-user] Source.queue vs Source.actorRef vs custom GraphStage

2016-05-10 Thread Akka Team
Hi Tim,

Feel free to experiment with these; I think they can be a good improvement
for Source.queue. Source.actorRef is somewhat different though: it does not
use a real ActorRef, it is a very special construct that makes a GraphStage
look like an actor from the outside, but in fact all messages go through
the actor that hosts the GraphStage (which can host more than one stage, in
fact). I am not sure much improvement is possible there.
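The two-queue idea from the benchmark discussion can be sketched as follows (hedged; `TwoQueueBuffer` is an illustrative name, and the overflow policy shown is a DropHead-like example): producers append to a lock-free ConcurrentLinkedQueue, while the single-threaded stage callback drains it into a plain mutable queue, so size checks and the OverflowStrategy decision run on cheap, non-thread-safe state.

```scala
import java.util.concurrent.ConcurrentLinkedQueue
import java.util.concurrent.atomic.AtomicBoolean
import scala.collection.mutable

final class TwoQueueBuffer[A <: AnyRef](capacity: Int) {
  private val inbox = new ConcurrentLinkedQueue[A]()  // multi-producer side
  private val local = mutable.Queue.empty[A]          // stage-thread-only side
  private val scheduled = new AtomicBoolean(false)    // dedup callback scheduling

  // Called from any thread; returns true when the caller should schedule
  // the (single) stage callback, mirroring the AtomicBoolean trick above.
  def offer(a: A): Boolean = {
    inbox.add(a)
    scheduled.compareAndSet(false, true)
  }

  // Called only from the stage thread: drain, then apply the overflow
  // policy on the non-thread-safe queue in O(1) per element.
  def drain(): Unit = {
    scheduled.set(false)
    var next = inbox.poll()
    while (next != null) {
      if (local.size >= capacity) local.dequeue() // e.g. DropHead
      local.enqueue(next)
      next = inbox.poll()
    }
  }

  def size: Int = local.size
}
```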

Anyway, PRs are welcome :)

-Endre


On Mon, May 9, 2016 at 6:23 PM, Tim Harper  wrote:

> Well, to counter this argument, I think this is a question that should be
> asked of Source.queue and Source.actorRef as well. Source.actorRef will
> buffer forever with its mailbox, and the overflow protection is applied as
> that mailbox is drained, when the Actor gets an opportunity to run.
> Also, Source.queue will, technically, buffer forever in the sense that
> every call to `offer` queues up a new asynchronous callback to be
> invoked.
>
> If we all agree that some form of unbounded buffering is ultimately
> inevitable (either with an Actor mailbox or with queued-up callbacks),
> then the question should turn to: "how do we deal with it optimally when
> it happens?". And I think the answer is that the process which applies the
> overflow strategy should perform as few allocations as possible.
>
> Tim
>
> On Monday, May 9, 2016 at 2:56:48 AM UTC-6, √ wrote:
>>
>> What happens if the queue is flooded before overflow protection kicks in?
>>
>> On Mon, May 9, 2016 at 10:19 AM, Tim Harper  wrote:
>>
>>> You're right. However, we could leverage the fact that the queue size
>>> only grows outside of the stage process. So, I think we could use a counter.
>>>
>>> On May 9, 2016, at 01:29, Viktor Klang  wrote:
>>>
>>> Isn't CLQ.size O(N)?
>>>
>>> --
>>> Cheers,
>>> √
>>> On May 9, 2016 7:38 AM, "Tim Harper"  wrote:
>>>
 I did some benchmarking to see how well `Source.actorRef` and
 `Source.queue` performed. Also, I wanted to see how much better I could do
 if I didn't need my Source to buffer (this stream is using conflate). It
 turns out, `Source.queue` is slower than `Source.actorRef`. And, the custom
 graph stage runs quite a bit faster than `Source.actorRef`:

 https://github.com/timcharper/as-actorref-repro/tree/ingestor-benchmark

 `Source.queue` is probably fine enough, but in reality, there's no
 reason it should perform worse than `Source.actorRef`. It could be sped up
 significantly while retaining the same functionality by following the same
 pattern of using the atomicBoolean to reduce scheduling callbacks. Two
 internal queues could be used, one queue a
 java.util.concurrent.ConcurrentLinkedQueue, the other queue a mutable,
 non-thread-safe queue. On each run of the stage callback, the
 OverflowStrategy could be applied quite safely. Since the
 java.util.concurrent.ConcurrentLinkedQueue will be only pushed to from
 async threads, a current total + the current size of the non-thread-safe
 mutable queue could be calculated to see if overflow had occurred, and then
 the strategy applied accordingly.

 Just wanted to post these thoughts somewhere while they were on my
 mind. I don't know if the optimization would be worth pursuing, just that a
 2-3x improvement in speed for Source.queue should be possible.

 Tim


>>>

Re: [akka-user] Akka-http : Load html page with dynamic values (location search property)

2016-05-10 Thread Akka Team
Hi Mayank,

You likely need a templating engine, and you should pass parsed and
validated parameters to it. One option is Twirl, the template engine of
Play (https://github.com/playframework/twirl); another is Scalatags
(https://github.com/lihaoyi/scalatags), which uses a DSL to generate HTML
rather than raw HTML.

-Endre

On Sat, May 7, 2016 at 5:26 PM, Mayank Bairagi  wrote:

> Hi there ,
> I have a use case. Here is an akka-http route.
>
> get {
>   pathPrefix("someRoute") {
> pathEndOrSingleSlash {
>   getFromResource("index.html")
> }
>   } }
>
> my index.html is as follows
> <html>
> <head>
>   <script>
>     alert('QUERY PARMS : ' + window.location.search)
>   </script>
> </head>
> </html>
>
> When I call index.html from browser
> file:///index.html?email=may...@knoldus.com
> Here I got the alert
> "QUERY PARMS : ?email=may...@knoldus.com"
>
>
>
> Is there any way in akka-http to supply some dynamic query parameters
> while loading index.html?
> Thanks in advance!!
>






Re: [akka-user] Re: Injecting messages on inactive stream

2016-05-10 Thread Akka Team
We have a "keepAlive" combinator for injecting idle messages.
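For reference, a usage sketch (akka-stream 2.4; the ping payload is illustrative): the injected element is emitted only after the stream has been idle for the given duration, and the timer restarts whenever a real element passes through — which is exactly the reset-on-traffic behavior asked about.

```scala
import scala.concurrent.duration._
import akka.stream.scaladsl.Flow
import akka.util.ByteString

// Emits ByteString("ping") whenever no element has flowed for 10 seconds.
val withKeepAlive = Flow[ByteString].keepAlive(10.seconds, () => ByteString("ping"))
```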

-Endre

On Fri, May 6, 2016 at 7:21 PM, Konrad Malawski <
konrad.malaw...@lightbend.com> wrote:

> LOL, me too – totally forgot we have that one...! Of course we do.
> Happy hakking / Have a nice weekend :-)
>
> --
> Konrad `ktoso` Malawski
> Akka  @ Lightbend 
>
> On 6 May 2016 at 19:15:38, rklaehn (rkla...@gmail.com) wrote:
>
> Never mind. I guess I should stop coding and just enjoy the nice
> weather...
>
>
> http://doc.akka.io/docs/akka/2.4.4/scala/stream/stream-cookbook.html#injecting-keep-alive-messages-into-a-stream-of-bytestrings
>
> On Friday, May 6, 2016 at 7:08:56 PM UTC+2, rklaehn wrote:
>>
>> Hi all,
>>
>> I am currently porting some code using spray websockets to akka-http. I
>> need to inject a ping message into a stream to the client when no message
>> has been sent for some predefined duration. Of course I could just merge
>> with a regular stream of pings, but that would send unnecessary pings.
>>
>> Is there a predefined stage for something like this? I checked the Timer
>> driven stages section, but did not find anything immediately fitting. The
>> tricky thing is that the timer needs to start again every time a message
>> passes through. In the old, naked actor based code this was handled using a
>> simple setReceiveTimeout.
>>
>> Any idea,
>>
>> Rüdiger
>>






[akka-user] Is UnboundedPriorityMailbox non blocking?

2016-05-10 Thread Edmondo Porcu
Hello,

The Akka documentation says that UnboundedPriorityMailbox is non-blocking 
(http://doc.akka.io/docs/akka/snapshot/scala/mailboxes.html), yet it uses a 
PriorityBlockingQueue, whose add method I assume is blocking. Is that a bug 
in the documentation, or is that mailbox really non-blocking? If it does 
block, is there a non-blocking priority queue available?
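For what it's worth, the underlying JDK behavior can be checked directly: PriorityBlockingQueue is unbounded, so insertion (add/offer/put) never blocks; only retrieval (take) blocks, and only while the queue is empty. That is why enqueueing into UnboundedPriorityMailbox is non-blocking despite the queue's name.

```scala
import java.util.concurrent.PriorityBlockingQueue

val q = new PriorityBlockingQueue[Integer]()
q.add(3); q.add(1); q.add(2)      // returns immediately; there is no capacity to fill
assert(q.poll().intValue == 1)    // elements still come out in priority order
```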




Re: [akka-user] Re: [Akka-stream|http] - How to close a flow with killswitch from outside?

2016-05-10 Thread Akka Team
Hi,

I am not sure I fully understand what you are trying to achieve. Throttle
only stores one message, so there will be only one message outstanding when
its upstream is closed. It might take some time to emit that, true, but
this is what it is supposed to do.

If you don't want a proper close/complete here (since that waits for any
previous element to be properly passed to the last stage first), what you
want is a failure. Failures are propagated immediately, independently of
any previous messages sitting in buffers or being delayed.

But a more general question: why are you closing the receiving actor before
all the elements have been passed to it? Just because the client closed the
connection does not mean that it does not want the previous messages to be
delivered. As an example:

1. I am in a room, and type "Bye!"
2. I close the client

I expect "Bye!" to still be delivered

-Endre

On Fri, May 6, 2016 at 10:47 AM, Magnus Andersson <
magnus.anders...@mollyware.se> wrote:

> Hi
>
> You're right, I read the buffer as being the input buffer not the output
> buffer. As for the kill switch I haven't used it myself so I'm not much
> help there, I thought the underlying assumption was wrong.
>
> /Magnus
>
> tors 5 maj 2016 kl 09:56 skrev hamid :
>
>> Thank you for the reply.
>>
>> First, I want to use a backpressure strategy to put a limit on incoming
>> messages, which I believe is what this line is for:
>>   .throttle(1, 1 seconds, 1, ThrottleMode.shaping)
>> I think this line causes backpressure on the client side, so I won't
>> receive more than 1 message per second. (Unless you tell me I'm wrong!)
>> I've tried "delay(1 seconds, backpressure)" as well, which I think is
>> equivalent (unless again you tell me I'm wrong!), and it caused the same
>> effect.
>>
>> Secondly, by "1000 msgs and an overflow strategy", I think you are
>> referring to the buffer of outgoing messages, which I have nothing to
>> worry about, unless you give me a reason to! I've tried, and sending more
>> than the 1000-element buffer is not backpressured.
>>
>> Let me tell you what I'm experiencing here.
>> When a websocket starts I'll send the actor of websocket which is this loc
>>   val actorAsSource = builder.materializedValue.map(actor =>
>> UserJoined(user_id, rooms, actor))
>> to a sink.actorref to gather them, and connect the "fromWebsocket" to it
>> as well with a merge.
>>  val chatActorSink  = Sink.actorRef[ChatEvent](chatRoomActor,
>> UserLeft(user_id, rooms))
>>  val mergeToChat = builder.add(Merge[ChatEvent](2))
>>   actorAsSource ~> mergeToChat.in(0)
>>   fromWebsocket ~> mergeToChat.in(1)
>>   mergeToChat ~> stopper ~> chatActorSink // look at the stopper
>> here! ***
>>
>>
>>
>> What happens here, is kinda surprise to me, when I kill the websocket
>> client(a nodejs script) by (ctrl+c), the console(sbt running my scala app)
>> prints
>> [INFO] [05/04/2016 19:32:10.981] [akka-system-akka.actor.default-
>> dispatcher-8] 
>> [akka://akka-system/user/StreamSupervisor-0/flow-14-0-actorRefSource]
>> Message [io.scalac.akka.http.websockets.chat.ChatMessage] from
>> Actor[akka://akka-system/user/chat-53aff88cdc302896c722#71135517] to
>> Actor[akka://akka-system/user/StreamSupervisor-0/flow-14-0-actorRefSource#-676858346]
>> was not delivered. [6] dead letters encountered. This logging can be turned
>> off or adjusted with configuration settings 'akka.log-dead-letters' and
>> 'akka.log-dead-letters-during-shutdown'
>> What I think is happening here is this:
>> Since I'm using throttling to limit the incoming messages, I think it
>> also causes a delay when it comes to closing the stream: the close won't
>> happen until the throttling finishes, which introduces an unnecessary
>> delay, and meanwhile the actor is dead, so the messages it receives are
>> logged as dead letters.
>>
>> Btw, even if my assumptions are wrong I really like to know the answer of
>> "how can I use killswitch" in this matter to learn the akka-streams better.
>>
>> Thank you.
>>
>>
>> On Wednesday, May 4, 2016 at 11:39:28 AM UTC+4:30, Magnus Andersson wrote:
>>>
>>> I don't understand how you are to detect this behavior.
>>>
>>> You have a throttle for 1mgs/sec and a buffer of 1000mgs and a overflow
>>> strategy to drop messages. The messages will keep coming if you don't fill
>>> up the buffer, up to 1000 seconds after upstream has stopped sending
>>> messages. Is this what you are talking about?
>>>
>>>

Re: [akka-user] `No elements passed in the last 1 minute` error in Http().superPool

2016-05-10 Thread Akka Team
Hi Durga,

Yes, the pool should fill up its connections even if some of them were
closed after a timeout. Which version of Akka are you using?

Can you also explain in more detail what it is that you observe (i.e. the
symptom)?

-Endre

On Fri, May 6, 2016 at 9:29 AM, Durga Prasana 
wrote:

> Hi,
>
> we're getting a `No elements passed in the last 1 minute` error in
> Http().superPool when sending messages after some idle time, which is
> natural considering the way idle-timeout is configured, but shouldn't the
> pool perform an auto-reconnect on the first send after the period of
> inactivity?
>
> Unfortunately, we couldn't repro this issue.
>
> Thoughts / Any guidance is appreciated.
>
> Thanks,
> Durga
>






Re: [akka-user] Handling timeouts when consuming http services via pooled connections

2016-05-10 Thread Akka Team
Hi Chris,

I have a bit of hard time visualizing the exact setup you refer to, can you
create a small snippet that is roughly similar to your stream layout? It
might be something fixable but I need to see something more concrete.
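One pattern worth checking against your snippet — a hedged sketch of the usual mitigation for wedged pools (akka-http, Scala): a pooled connection is only released back to the pool once the response entity has been consumed, so a timeout path that abandons the request must still drain the (possibly chunked) entity bytes.

```scala
import akka.http.scaladsl.model.HttpResponse
import akka.stream.Materializer
import akka.stream.scaladsl.Sink

// Drain and ignore the entity so the pooled connection is freed even
// when the caller no longer cares about the response body.
def drainResponse(resp: HttpResponse)(implicit mat: Materializer): Unit =
  resp.entity.dataBytes.runWith(Sink.ignore)
```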

-Endre

On Thu, May 5, 2016 at 9:26 PM, Chris Baxter  wrote:

> I realized recently that if I add a completionTimeout to a Flow set up to
> make http requests through a host connection pool, I can wedge the pool
> when enough of these timeouts happen (4 for the default config). I believe
> this is because the entity associated with the eventual response is not
> consumed or cancelled. What's the proper way to handle this use case when
> wanting to add a timeout-oriented combinator to the processing Flow that
> won't cause the pool to wedge? Also, this may only happen when the
> response is chunked, but I still have to confirm that.
>



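Pending a concrete snippet from Chris, here is a rough sketch of the usual workaround: drain every response entity before applying any timeout, so no pool slot is left holding an unconsumed body. This is written against the 2.4-era API; `example.com`, the request shape, the `Int` correlation key and the parallelism are placeholders, not Chris's actual setup.

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink, Source}
import scala.concurrent.Future
import scala.util.{Failure, Success, Try}

object DrainBeforeTimeout extends App {
  implicit val system = ActorSystem("pool-sketch")
  implicit val mat = ActorMaterializer()
  import system.dispatcher

  // Placeholder host; the pool has max-connections = 4 by default.
  val pool = Http().cachedHostConnectionPool[Int]("example.com")

  // Drain every response entity before doing anything else with the
  // element. An unconsumed (especially chunked) entity keeps its pool
  // slot occupied, so four abandoned responses wedge the default pool.
  val drained =
    Flow[(Try[HttpResponse], Int)].mapAsyncUnordered(4) {
      case (Success(resp), id) =>
        resp.entity.dataBytes.runWith(Sink.ignore).map(_ => id -> resp.status)
      case (Failure(ex), id) =>
        Future.failed(ex)
    }

  Source(1 to 10)
    .map(i => HttpRequest(uri = "/") -> i)
    .via(pool)
    .via(drained)
    .runWith(Sink.foreach(println))
}
```

If a per-request deadline is still needed, it is presumably safer to apply it around the draining future rather than as a `completionTimeout` on the whole flow, since the latter cancels the stage without consuming in-flight entities.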



[akka-user] How to assemble a Streams sink from multiple FileIO sinks?

2016-05-10 Thread Tom Peck


I'm trying to integrate an Akka Streams-based flow into my Play 2.5 app.
The idea is that you can stream in a photo, then have it written to disk
as the raw file, a thumbnailed version, and a watermarked version.


I managed to get this working using a graph something like this:


val byteAccumulator = Flow[ByteString]
  .fold(new ByteStringBuilder())((builder, b) => builder ++= b.toArray)
  .map(_.result().toArray)

def toByteArray = Flow[ByteString].map(b => b.toArray)

val graph = Flow.fromGraph(GraphDSL.create() { implicit builder =>
  import GraphDSL.Implicits._

  val streamFan = builder.add(Broadcast[ByteString](3))
  val byteArrayFan = builder.add(Broadcast[Array[Byte]](2))
  val output = builder.add(Flow[ByteString].map(x => Success(Done)))

  val rawFileSink = FileIO.toFile(file)
  val thumbnailFileSink = FileIO.toFile(getFile(path, Thumbnail))
  val watermarkedFileSink = FileIO.toFile(getFile(path, Watermarked))

  streamFan.out(0) ~> rawFileSink
  streamFan.out(1) ~> byteAccumulator ~> byteArrayFan.in
  streamFan.out(2) ~> output.in

  byteArrayFan.out(0) ~> slowThumbnailProcessing ~> thumbnailFileSink
  byteArrayFan.out(1) ~> slowWatermarkProcessing ~> watermarkedFileSink

  FlowShape(streamFan.in, output.out)
})


Then I wire it into my Play controller using an accumulator like this:

val sink = Sink.head[Try[Done]]

val photoStorageParser = BodyParser { req =>
  Accumulator(sink).through(graph).map(Right.apply)
}



The problem is that my two processed file sinks aren't completing: I'm 
getting zero sizes for both processed files, but not for the raw one.  My 
theory is that the accumulator is only waiting on one of the outputs of my 
fan-out, so when the input stream completes and my byteAccumulator emits 
the complete file, Play has already got the materialized value from the 
output before the processing is finished.


So, my questions are:  

Am I on the right track with this as far as my approach goes?

What is the expected behaviour for running a graph like this?

How can I bring all my sinks together to form one final sink?
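One possible answer to the last question, sketched under assumptions: `GraphDSL.create` can import several sinks at once and combine their materialized values, giving a single `Sink` whose `Future` completes only when all three files are written. The file parameters here are placeholders, and the thumbnail/watermark stages from the graph above would slot in on the second and third fan-out ports.

```scala
import java.io.File
import akka.Done
import akka.stream.SinkShape
import akka.stream.scaladsl.{Broadcast, FileIO, GraphDSL, Sink}
import akka.util.ByteString
import scala.concurrent.{ExecutionContext, Future}

object CombinedSinkSketch {
  // One Sink that feeds all three files and whose materialized value
  // completes only when every FileIO sink has finished writing.
  def combinedSink(rawFile: File, thumbFile: File, markFile: File)
                  (implicit ec: ExecutionContext): Sink[ByteString, Future[Done]] =
    Sink.fromGraph(GraphDSL.create(
        FileIO.toFile(rawFile),
        FileIO.toFile(thumbFile),
        FileIO.toFile(markFile)
      )((r, t, w) => Future.sequence(List(r, t, w)).map(_ => Done)) {
        implicit builder => (raw, thumb, mark) =>
          import GraphDSL.Implicits._
          val fan = builder.add(Broadcast[ByteString](3))
          fan.out(0) ~> raw.in
          fan.out(1) ~> thumb.in  // real flow: byteAccumulator ~> thumbnail stage
          fan.out(2) ~> mark.in   // real flow: byteAccumulator ~> watermark stage
          SinkShape(fan.in)
      })
}
```

With a sink shaped like this, the `BodyParser` could wait on the combined `Future[Done]` instead of `Sink.head`, so Play would not see a result until all three writes finish.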



Re: [akka-user] Forwarding an Ask while intercepting the result

2016-05-10 Thread Ian Clegg
Hi Justin,

First off, many thanks for taking a look and the feedback. Requester looks 
very promising; I'm looking over the sources now. In some respects I'm 
lucky that my problem is quite straightforward. It's clear that in the 
general case addressed by Requester this is not a trivial problem, but I'm 
very impressed with it so far.

Many Thanks
Ian


On Monday, 9 May 2016 19:09:56 UTC+1, Justin du coeur wrote:
>
> Frankly, your code looks about right -- if this is a one-off situation, 
> it's likely a good enough approach.  You could also use pipeTo, but you'd 
> have to inject the originalSender into the communication, so I suspect it 
> would come out about equally complicated.
>
> I wouldn't normally talk up the Requester library for a simple case like
> this, but since you asked, this code under Requester would come out as
> roughly:
>
> case update: UpdateCommand => {
>   val targetActor = actors.getOrElse(update.id,
>     throw new RuntimeException("not available"))
>   for {
>     result <- targetActor.request(update)
>   } {
>     sender ! result
>     // ... do the caching locally ...
>   }
> }
>
>
> Basically, Requester was written for complex multi-Actor interactions -- 
> instead of exposing Future via ask(), it exposes RequestM via request().  
> RequestM intentionally looks a lot like Future but the completion handler 
> executes within the Actor's receive function, so you can safely work with 
> the Actor's state during the response.  It also automatically preserves the 
> original sender, since this sort of pattern is so common.  (Basically, it 
> is doing the ask and pipe under the hood, so your application-level code 
> doesn't have to worry about the dangerous Futures.)
>
> Mind, it's a third-party library, and still evolving, but I use it heavily 
> in production for Querki, and it's pretty stable by now.  And in the grand 
> scheme of things, it's not *terribly* complex...
>
> On Mon, May 9, 2016 at 9:48 AM, Ian Clegg wrote:
>
>> Hi everyone,
>>
>> I'm looking for some advice. I have an actor whose state needs to be 
>> updated via an HTTP request. I'm using Akka HTTP with the Route DSL and 
>> an Ask pattern (this seems to be 'normal'):
>>
>> onComplete(broker ? UpdateCommand(id, "something")) {
>>   case Success(u) => complete("updated")
>>   case _ => complete("An error has occurred")
>> }
>>
>> So an update message is sent to the broker as an 'Ask' so the result can 
>> be returned in the HTTP response. The broker locates the actor that needs 
>> to be updated based on the id passed in from the request and message. So 
>> the broker needs to forward the Ask on to the target. I know I can do this 
>> with a pipeTo, but I would also like the broker to receive the response so 
>> it can cache some data. So I guess I'm trying to tee the response, sending 
>> it to myself and back to the original sender.
>>
>> This is what I have now and it feels nasty:
>>
>> def receive = {
>>   case update: UpdateCommand =>
>>     // capture the original sender of the Ask, since we need to
>>     // reply in a future continuation
>>     val originalSender = sender
>>     val targetActor = actors.getOrElse(update.id,
>>       throw new RuntimeException("not available"))
>>     targetActor.ask(update) onSuccess {
>>       case result =>
>>         originalSender ! result         // respond to the original ask
>>         self.tell(result, targetActor)  // tell it to ourselves, but
>>                                         // preserve that it came from targetActor
>>     }
>> }
>>
>> I'm pretty new to Akka and these patterns. Is there a nicer way to do 
>> this? Maybe using a pipeTo somehow, or some other combination of 
>> functions? Do other people do this kind of thing?
>>
>> Any tips appreciated
>> Ian
>>
>
>
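For reference, the pipeTo variant Justin alludes to might look roughly like this. The `CachedResult` wrapper and the `actors` map are illustrative, not from either poster's code; the point is that pairing the reply with the captured sender lets the broker do its caching inside `receive`, rather than touching actor state from a Future callback.

```scala
import akka.actor.{Actor, ActorRef}
import akka.pattern.{ask, pipe}
import akka.util.Timeout
import scala.concurrent.duration._

// Illustrative message shapes, not from the original post.
case class UpdateCommand(id: String, payload: String)
case class CachedResult(originalSender: ActorRef, result: Any)

class Broker(actors: Map[String, ActorRef]) extends Actor {
  import context.dispatcher
  implicit val timeout: Timeout = Timeout(5.seconds)

  def receive = {
    case update: UpdateCommand =>
      val originalSender = sender()  // capture before any Future callback runs
      val targetActor = actors.getOrElse(update.id,
        throw new RuntimeException("not available"))
      // Pair the reply with the captured sender and pipe it back to
      // ourselves; the CachedResult case then runs inside receive,
      // so updating the cache is thread-safe.
      (targetActor ? update).map(CachedResult(originalSender, _)).pipeTo(self)

    case CachedResult(originalSender, result) =>
      originalSender ! result
      // ... update the local cache here ...
  }
}
```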


[akka-user] Re: How to send a message in a reactive stream from the Sink to the Source in a web socket connection

2016-05-10 Thread Flavio


Hello Frederico

I have seen your question/topic and will post the code there (today).

have fun!

Flavio



[akka-user] akka persistence AtLeastOnceDelivery SupervisorStrategy, RedeliveryTick and ReceiveTimeout

2016-05-10 Thread Yan Pei
I still have the problem where the parent actor's supervisorStrategy 
couldn't catch the exception thrown by the child actor when the exception 
happened after a long processing time in the child actor.

From the log, I can see:

[INFO] [05/10/2016 02:03:52.301] [arachne-actor-system-akka.actor.default-dispatcher-25]
[akka://arachne-actor-system/user/user-main-actor/userSyncActor/$a/userpersist0/$a]
Message [akka.persistence.AtLeastOnceDelivery$Internal$RedeliveryTick$] from
Actor[akka://arachne-actor-system/deadLetters] to
Actor[akka://arachne-actor-system/user/user-main-actor/userSyncActor/$a/userpersist0/$a#-1498951206]
was not delivered. [1] dead letters encountered.


It looks like the parent actor's supervisorStrategy will not monitor the 
child actor's exceptions after a certain time.

How do I configure the parent actor to monitor the child actor's 
exceptions for a longer time?

BTW, is this a similar issue? https://github.com/akka/akka/issues/16629

Thanks,
Yan
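For what it's worth, supervision itself has no time window: a parent's strategy sees a child failure no matter how long the child ran first, as this sketch of an explicitly unbounded strategy shows (ChildActor and the message protocol are made up for illustration).

```scala
import akka.actor.{Actor, OneForOneStrategy, Props, SupervisorStrategy}
import akka.actor.SupervisorStrategy.Restart
import scala.concurrent.duration.Duration

// Placeholder child that fails on demand.
class ChildActor extends Actor {
  def receive = {
    case "boom" => throw new RuntimeException("failed after long processing")
    case _      => () // ... long-running work on the actor's own thread ...
  }
}

class Parent extends Actor {
  // -1 retries within Duration.Inf: restart the child whenever it fails,
  // with no limit on how often or how late the failure happens.
  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = -1, withinTimeRange = Duration.Inf) {
      case _: Exception => Restart
    }

  private val child = context.actorOf(Props[ChildActor], "child")

  def receive = { case msg => child forward msg }
}
```

The usual culprits for "missed" failures are exceptions thrown inside Future callbacks, which never reach the supervisor, or, as the dead-letter RedeliveryTick above may hint, a child that had already stopped before the exception could be reported.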


