Re: [akka-user] WeaklyUp - possibility of accepting returning nodes

2016-01-28 Thread Andrzej Dębski
In a situation where you have two unreachable nodes, even with weaklyUp=true the new 
incarnation of an old node will not join the cluster until the other unreachable 
nodes are also downed, yet we can still accept any number of fully new nodes (not 
new incarnations of existing nodes) and set them to WeaklyUp. I think this is a bit 
asymmetric, and it would be nice if we also moved the new incarnation of an old 
node to WeaklyUp even when some other nodes are in the unreachable state.
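
For reference, a minimal sketch of the configuration that enables the feature 
under discussion (the flag is akka.cluster.allow-weakly-up-members from Akka 
2.4's reference.conf):

akka.cluster {
  # Promote joining members to WeaklyUp while convergence is blocked by
  # unreachable members, instead of leaving them in Joining.
  allow-weakly-up-members = on
}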

On Thursday, 28 January 2016 at 12:41:06 UTC+1, Patrik Nordwall wrote:
>
>
>
> On Tue, Jan 26, 2016 at 4:00 PM, Andrzej Dębski wrote:
>
>> I encountered an "issue" similar to the one described in 
>> https://github.com/akka/akka/issues/18067 and after that I read 
>> https://github.com/akka/akka/issues/13584, which mentions issue 18067: 
>> "Taking note of that we got a user request for something similar as this 
>> issue. Discussed in #18067 <https://github.com/akka/akka/issues/18067>"
>>
>> I was wondering if it would be possible to enhance the WeaklyUp functionality 
>> to also accept nodes returning with a new incarnation of the actor system, since 
>> we are accepting "new" nodes anyway. Currently, if there is no convergence, 
>> nodes that return with a new UUID are marked as DOWN, but until the problems 
>> with all unreachable nodes are resolved the new node will send Join repeatedly, 
>> each time triggering the "New incarnation of existing member..." message. 
>> If WeaklyUp is enabled, the leader could also accept the new incarnation of 
>> the node so that it is treated the same as a "new" node.
>>
>
> I'm not sure I understand the problem. Why would it join repeatedly? The 
> join is accepted even when there are unreachable nodes. Perhaps you mean that the 
> old incarnation is not removed because of the unreachable nodes, and therefore the 
> new incarnation cannot join. That has nothing to do with the WeaklyUp 
> feature.
>
> Regards,
> Patrik
>  
>
>>
>> If this behaviour is acceptable, I can open an issue and work on a PR 
>> for it. 
>>
>
>
>
> -- 
>
> Patrik Nordwall
> Typesafe <http://typesafe.com/> -  Reactive apps on the JVM
> Twitter: @patriknw
>
>



[akka-user] WeaklyUp - possibility of accepting returning nodes

2016-01-26 Thread Andrzej Dębski
I encountered an "issue" similar to the one described in 
https://github.com/akka/akka/issues/18067 and after that I read 
https://github.com/akka/akka/issues/13584, which mentions issue 18067: 
"Taking note of that we got a user request for something similar as this 
issue. Discussed in #18067 
"

I was wondering if it would be possible to enhance the WeaklyUp functionality 
to also accept nodes returning with a new incarnation of the actor system, since 
we are accepting "new" nodes anyway. Currently, if there is no convergence, 
nodes that return with a new UUID are marked as DOWN, but until the problems 
with all unreachable nodes are resolved the new node will send Join repeatedly, 
each time triggering the "New incarnation of existing member..." message. 
If WeaklyUp is enabled, the leader could also accept the new incarnation of 
the node so that it is treated the same as a "new" node.

If this behaviour is acceptable, I can open an issue and work on a PR 
for it. 



Re: [akka-user] DistributedPubSub - topic actor liveness race condition

2015-11-25 Thread Andrzej Dębski
I created an issue: https://github.com/akka/akka/issues/19017

As for contributing: I can try to come up with an implementation for this 
problem, but I will have to think about how to handle messages that arrive 
during the gap between sending the message that stops the topic actor and 
receiving the confirmation from death watch. 
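
To make the direction concrete, here is a minimal sketch of the serialized-stop 
handshake from the Stack Overflow answer linked in the original post. All names 
here are hypothetical stand-ins, not the actual DistributedPubSubMediator 
internals:

import akka.actor.{Actor, ActorRef, Props, Terminated}

final case class Subscribe(topic: String) // simplified stand-in message
case object NoMoreSubscribers             // topic -> mediator: please stop me

class Mediator(topicProps: Props) extends Actor {
  // Children that asked to stop but whose Terminated has not arrived yet.
  private var stopping = Set.empty[ActorRef]

  def receive = {
    case msg @ Subscribe(topic) =>
      context.child(topic) match {
        case Some(child) if !stopping(child) =>
          // Child is alive: safe to forward, because stop requests are
          // processed serially through this same mailbox.
          child forward msg
        case Some(_) =>
          // The gap mentioned above: the child is shutting down and its name
          // is not free yet, so the message would have to be buffered until
          // Terminated arrives (buffering omitted in this sketch).
        case None =>
          val child = context.watch(context.actorOf(topicProps, topic))
          child forward msg
      }
    case NoMoreSubscribers =>
      // The mediator, not the child, initiates the stop, so it never
      // forwards to a child it already knows is going away.
      stopping += sender()
      context.stop(sender())
    case Terminated(ref) =>
      stopping -= ref // the child's name may be reused from here on
  }
}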

On Friday, 20 November 2015 at 16:33:48 UTC+1, Patrik Nordwall wrote:
>
> Ah, now I see what you mean. You are right. Thanks for reporting. Please 
> create an issue <https://github.com/akka/akka/issues/new>. If you are 
> interested in contributing, a pull request would be great.
>
> /Patrik
>
> On Fri, Nov 20, 2015 at 2:25 PM, Andrzej Dębski wrote:
>
>>  I think it just postpones the problem - you could have the same sequence 
>> of events: Subscribe is sent to the mediator at the same time as the scheduler 
>> puts a Prune message in the topic actor's mailbox; the mediator doesn't see the 
>> topic actor as dead, so the Subscribe message is forwarded; Prune is processed 
>> and context.stop(self) is invoked in the topic actor, stopping it immediately. 
>> Also, the docs of context.stop(self) say: "If this method is applied to the `self` 
>> reference from inside an Actor then that Actor is guaranteed to not process 
>> any further messages after this call;"
>>
>> On Friday, 20 November 2015 at 13:48:37 UTC+1, Patrik Nordwall wrote:
>>>
>>> The topic actors don’t stop themselves immediately. For efficiency 
>>> reasons they stay around for a while even when there are no subscribers. See 
>>> pruneDeadline in the code.
>>>
>>> Doesn’t that take care of this also?
>>>
>>> Cheers,
>>> Patrik
>>>
>>> On Fri, Nov 20, 2015 at 12:36 PM, Andrzej Dębski wrote:
>>>
>>>>  I was looking through the DistributedPubSub code and I was wondering about 
>>>> a possible problem due to how the topic/children actors are managed.
>>>>
>>>> When a new subscriber arrives for a given topic, the child/topic actor is 
>>>> looked up using context.child(encTopic); if the actor exists, the message is 
>>>> forwarded; if not, the child actor is created, registered in the "registry", 
>>>> and then the message is forwarded. Topic actors stop themselves when all their 
>>>> subscribers are either terminated (death watch by the topic actor on the 
>>>> subscriber) or have unsubscribed themselves from the topic.
>>>>
>>>> Could there be a situation where:
>>>>
>>>>1. A Subscribe message is sent to the mediator from SubscriberA.
>>>>2. At the same time SubscriberB is terminated, and this message 
>>>>arrives at the topic actor.
>>>>3. The topic actor stops itself using context.stop(self).
>>>>4. At the same time the parent/mediator actor performs a lookup of the child, 
>>>>decides that the topic actor is alive, and forwards it the Subscribe message.
>>>>5. The Subscribe message goes to dead letters because the topic actor is 
>>>>already dead; the mediator does not inform the subscriber that something 
>>>>went wrong and/or does not retry sending the Subscribe message to a new 
>>>>incarnation of the topic actor.
>>>>
>>>> According to 
>>>> http://stackoverflow.com/questions/23629159/get-or-create-child-akka-actor-and-ensure-liveness 
>>>> the solution would be to send a kind of Terminated message to the mediator, 
>>>> so that Subscribe and Terminated messages are processed serially, also 
>>>> watching out for the fact that stopping an actor is an async operation: even 
>>>> if we invoke context.stop(child) from the context of the parent, there is no 
>>>> guarantee that the path is immediately free. Maybe the semantics of 
>>>> termination/death-watch/child-lookup are a bit different from the general 
>>>> ones because here we are discussing actors that reside in the same JVM, but 
>>>> I would like to know if this is a genuine problem or I just missed something 
>>>> when reading the code. 

Re: [akka-user] DistributedPubSub - topic actor liveness race condition

2015-11-20 Thread Andrzej Dębski
I think it just postpones the problem - you could have the same sequence 
of events: Subscribe is sent to the mediator at the same time as the scheduler 
puts a Prune message in the topic actor's mailbox; the mediator doesn't see the 
topic actor as dead, so the Subscribe message is forwarded; Prune is processed 
and context.stop(self) is invoked in the topic actor, stopping it immediately. 
Also, the docs of context.stop(self) say: "If this method is applied to the `self` 
reference from inside an Actor then that Actor is guaranteed to not process 
any further messages after this call;"

On Friday, 20 November 2015 at 13:48:37 UTC+1, Patrik Nordwall wrote:
>
> The topic actors don’t stop themselves immediately. For efficiency reasons 
> they stay around for a while even when there are no subscribers. See 
> pruneDeadline in the code.
>
> Doesn’t that take care of this also?
>
> Cheers,
> Patrik
>
> On Fri, Nov 20, 2015 at 12:36 PM, Andrzej Dębski wrote:
>
>>  I was looking through the DistributedPubSub code and I was wondering about 
>> a possible problem due to how the topic/children actors are managed.
>>
>> When a new subscriber arrives for a given topic, the child/topic actor is 
>> looked up using context.child(encTopic); if the actor exists, the message is 
>> forwarded; if not, the child actor is created, registered in the "registry", 
>> and then the message is forwarded. Topic actors stop themselves when all their 
>> subscribers are either terminated (death watch by the topic actor on the 
>> subscriber) or have unsubscribed themselves from the topic.
>>
>> Could there be a situation where:
>>
>>1. A Subscribe message is sent to the mediator from SubscriberA.
>>2. At the same time SubscriberB is terminated, and this message 
>>arrives at the topic actor.
>>3. The topic actor stops itself using context.stop(self).
>>4. At the same time the parent/mediator actor performs a lookup of the child, 
>>decides that the topic actor is alive, and forwards it the Subscribe message.
>>5. The Subscribe message goes to dead letters because the topic actor is 
>>already dead; the mediator does not inform the subscriber that something went 
>>wrong and/or does not retry sending the Subscribe message to a new incarnation 
>>of the topic actor.
>>
>> According to 
>> http://stackoverflow.com/questions/23629159/get-or-create-child-akka-actor-and-ensure-liveness 
>> the solution would be to send a kind of Terminated message to the mediator, so 
>> that Subscribe and Terminated messages are processed serially, also watching out 
>> for the fact that stopping an actor is an async operation: even if we invoke 
>> context.stop(child) from the context of the parent, there is no guarantee that 
>> the path is immediately free. Maybe the semantics of 
>> termination/death-watch/child-lookup are a bit different from the general ones 
>> because here we are discussing actors that reside in the same JVM, but I would 
>> like to know if this is a genuine problem or I just missed something when 
>> reading the code. 
>>
>
>
>
> -- 
>
> Patrik Nordwall
> Typesafe <http://typesafe.com/> -  Reactive apps on the JVM
> Twitter: @patriknw
>
>



[akka-user] DistributedPubSub - topic actor liveness race condition

2015-11-20 Thread Andrzej Dębski
I was looking through the DistributedPubSub code and I was wondering about 
a possible problem due to how the topic/children actors are managed.

When a new subscriber arrives for a given topic, the child/topic actor is looked 
up using context.child(encTopic); if the actor exists, the message is forwarded; 
if not, the child actor is created, registered in the "registry", and then the 
message is forwarded. Topic actors stop themselves when all their subscribers 
are either terminated (death watch by the topic actor on the subscriber) or 
have unsubscribed themselves from the topic.

Could there be a situation where:

   1. A Subscribe message is sent to the mediator from SubscriberA.
   2. At the same time SubscriberB is terminated, and this message arrives 
   at the topic actor.
   3. The topic actor stops itself using context.stop(self).
   4. At the same time the parent/mediator actor performs a lookup of the child, 
   decides that the topic actor is alive, and forwards it the Subscribe message.
   5. The Subscribe message goes to dead letters because the topic actor is already 
   dead; the mediator does not inform the subscriber that something went wrong and/or 
   does not retry sending the Subscribe message to a new incarnation of the topic actor.

According to 
http://stackoverflow.com/questions/23629159/get-or-create-child-akka-actor-and-ensure-liveness 
the solution would be to send a kind of Terminated message to the mediator, so 
that Subscribe and Terminated messages are processed serially, also watching out 
for the fact that stopping an actor is an async operation: even if we invoke 
context.stop(child) from the context of the parent, there is no guarantee that 
the path is immediately free. Maybe the semantics of 
termination/death-watch/child-lookup are a bit different from the general ones 
because here we are discussing actors that reside in the same JVM, but I would 
like to know if this is a genuine problem or I just missed something when 
reading the code. 



[akka-user] akka-cluster handling of network failures - no remote actors and no auto-down

2015-09-09 Thread Andrzej Dębski
Hello,

Currently I am in the process of rewriting and unifying old JGroups-based 
messaging mechanisms in an application into an Akka-based solution.

For now, akka-cluster will be used as a mechanism to learn about currently 
available nodes by listening to MemberEvent and ReachabilityEvent; based on 
those events, a mechanism similar to DistributedPubSub will deliver messages 
to subscribed recipients on different nodes. The thing I am interested in 
is the current (2.3.13) state of network/partition failure handling.

As far as I know (I am not an expert on JGroups), if there is some kind 
of network partition/temporary node failure, the nodes will happily work 
along, and when everything goes back to normal the nodes will re-form a single 
cluster and the shared cluster state will be merged (in my situation the merge 
is trivial - the application does not use this feature of JGroups). This state 
of affairs is OK in my case - even if there is a partition between nodes 
(nodes A and B do not see nodes C and D), some of the operations can be performed 
without the need to contact nodes in the other partition. Now I am interested 
in possible differences between the JGroups and akka-cluster behaviour in 
failure scenarios.

If I assume that:


   1. I will have auto-down-unreachable-after=off
   2. I will not use remote deployment/remote death watch 
   3. the actors on one node know about the others by using actorSelection 
   based on known paths and node addresses from cluster events 

then can I assume that:

   1. no system will be put into a quarantined state (no "system" messages)
   2. actors inside each partition can still send messages to one another 
   normally; if I try to send a message to an actor ref from an unreachable node, 
   the message will simply not be delivered (and optionally be logged as a 
   dead letter)
   3. when the network partition heals, all members will re-form a single cluster 
   by sending each other gossip messages and unmarking the unreachable nodes
   4. in the situation where some node is restarted ungracefully - the remote actor 
   system does not use the leave method - when the new incarnation of the old 
   ActorSystem connects, the old one will be automatically marked as down without 
   the need to issue a manual down command, thanks 
   to https://github.com/akka/akka/issues/16726
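
For reference, a minimal sketch of the configuration behind assumption 1 
(standard akka-cluster settings; the value shown is the one in question):

akka {
  cluster {
    # Never down unreachable members automatically; downing remains a
    # manual (or externally automated) decision.
    auto-down-unreachable-after = off
  }
}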




Re: [akka-user] Cluster sharding - possible bug with IdExtractor and ShardResolver message flow

2014-10-26 Thread Andrzej Dębski
Hey.

In my case, some of the messages do not initially contain the id used by the 
IdExtractor (all of the messages that act as initialization messages). 
Before I introduced cluster sharding to my project, when the repository actor 
received such a message it would create a new persistent actor with 
persistenceId = UUID.randomUUID and then forward the message to the new actor.

When I was introducing cluster sharding I noticed that the ShardRegion actor 
does what my repository actor did, so I wanted to simulate a similar flow by 
returning an enveloped message through the IdExtractor, containing the original 
message without an id plus the generated id. In the ShardResolver I wanted to 
calculate the shard identifier based on the id in the envelope. At least that 
was the plan after I read the docs. 



[akka-user] Cluster sharding - possible bug with IdExtractor and ShardResolver message flow

2014-10-26 Thread Andrzej Dębski
Hello

I wanted to ask here before posting an issue on GitHub. 

Some of the messages I am sending to my PersistentActors need to be wrapped 
in an envelope.

The source code of ClusterSharding says that the ShardResolver will receive 
messages that passed through the IdExtractor, and also that the IdExtractor 
returns the message to send to the cluster-sharded actor.

The problem is that, based on my experiments and on the source code of the 
ClusterSharding module, the message returned by the IdExtractor is not passed 
to the ShardResolver, because it receives the original message (and not the 
one returned by the IdExtractor).

Relevant code:

The receive method of ShardRegion:

def receive = {
  case Terminated(ref)                     ⇒ receiveTerminated(ref)
  case evt: ClusterDomainEvent             ⇒ receiveClusterEvent(evt)
  case state: CurrentClusterState          ⇒ receiveClusterState(state)
  case msg: CoordinatorMessage             ⇒ receiveCoordinatorMessage(msg)
  case cmd: ShardRegionCommand             ⇒ receiveCommand(cmd)
  case msg if idExtractor.isDefinedAt(msg) ⇒ deliverMessage(msg, sender())
}


deliverMessage receives the original message; the code of deliverMessage:

def deliverMessage(msg: Any, snd: ActorRef): Unit = {
  val shard = shardResolver(msg)
  regionByShard.get(shard) match {
    case Some(ref) if ref == self ⇒
      getShard(shard).tell(msg, snd)
    case Some(ref) ⇒
      log.debug("Forwarding request for shard [{}] to [{}]", shard, ref)
      ref.tell(msg, snd)
    case None if (shard == null || shard == "") ⇒
      log.warning("Shard must not be empty, dropping message [{}]",
        msg.getClass.getName)
      context.system.deadLetters ! msg
    case None ⇒
      if (!shardBuffers.contains(shard)) {
        log.debug("Request shard [{}] home", shard)
        coordinator.foreach(_ ! GetShardHome(shard))
      }
      if (totalBufferSize >= bufferSize) {
        log.debug("Buffer is full, dropping message for shard [{}]", shard)
        context.system.deadLetters ! msg
      } else {
        val buf = shardBuffers.getOrElse(shard, Vector.empty)
        shardBuffers = shardBuffers.updated(shard, buf :+ ((msg, snd)))
      }
  }
}


The shardResolver is invoked with the original message.

My interpretation was that in some cases I could return a different message 
from the IdExtractor and it would be the one used by the ShardResolver to 
identify the shard.
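
For completeness, a minimal sketch of one possible workaround given the 
behaviour described above, against the akka-contrib 2.3 API (Envelope is a 
hypothetical wrapper applied by the sender, not the module's own type): wrap 
the message before it reaches the ShardRegion, so that both extractor functions 
see the same envelope and hence the generated id:

import java.util.UUID
import akka.contrib.pattern.ShardRegion

// Hypothetical envelope: the sender generates the id and wraps the payload
// before the message is sent to the ShardRegion.
final case class Envelope(id: UUID, payload: Any)

val idExtractor: ShardRegion.IdExtractor = {
  case Envelope(id, payload) => (id.toString, payload)
}

val shardResolver: ShardRegion.ShardResolver = {
  // The generated id is visible here because both functions get the Envelope.
  case Envelope(id, _) => math.abs(id.getMostSignificantBits % 12).toString
}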



Re: [akka-user] akka-cluster-sharding: question about ShardResolver return values

2014-10-14 Thread Andrzej Dębski
Thanks, it looks like it works as I expected. 



Re: [akka-user] akka-cluster-sharding: question about ShardResolver return values

2014-10-14 Thread Andrzej Dębski
Thanks for the answers.

For now I went with taking the most significant bits of the Java UUID and 
computing a modulus of that value. I created a test to see how this scheme 
behaves in terms of distribution, and for now it is good enough for me.

I have one more question. Assume I do something like in the docs 
(uuidValue % someNumber), where someNumber determines the number of shards. This 
means that I can have at most someNumber usable nodes in my cluster - any nodes 
beyond someNumber will not be used, because the number of shards is too small. 
That is a limitation in terms of scalability. 

I am wondering if someone has tried another scheme that would allow 
some flexibility in the number of nodes - for example, when we have as many 
nodes as someNumber, the scheme would somehow change itself to be aware of 
the increased number of nodes?
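
For concreteness, a minimal sketch of the scheme described above (numberOfShards 
is an assumed constant; math.abs guards against a negative remainder from the 
signed long):

import java.util.UUID

val numberOfShards = 100 // e.g. ~10x the planned maximum number of nodes

def shardId(id: UUID): String =
  math.abs(id.getMostSignificantBits % numberOfShards).toString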



Re: [akka-user] akka-cluster-sharding: question about ShardResolver return values

2014-10-08 Thread Andrzej Dębski
Hey, thanks for the reply!

I will have to see how the rebalancing process behaves when I have lots of 
persistent actors - I was just worried that a large number of shards may cause 
long rebalancing times. For now I will go with a unique shard id for every 
persistent actor.




[akka-user] akka-cluster-sharding: question about ShardResolver return values

2014-10-06 Thread Andrzej Dębski
Hello

I wanted to integrate akka-cluster-sharding with my akka-persistence 
application. I read the documentation and browsed through the code in the 
activator template, but one thing is still a bit of a mystery to me.

Currently in my application each PersistentActor is identified by a UUID 
value. All messages that those actors receive inherit the custom trait 
DomainCommand, which has the method aggregateRootId: UUID. Based on this and 
on the documentation, my IdExtractor looks like this:

val domainCommandIdExtractor: ShardRegion.IdExtractor = {
  case command: DomainCommand => (command.aggregateRootId.toString, command)
}

Now, what about the ShardResolver? The documentation says:

Try to produce a uniform distribution, i.e. same amount of entries in each 
> shard. As a rule of thumb, the number of shards should be a factor ten 
> greater than the planned maximum number of cluster nodes.


And in the example there is (id % 12).toString.

I am wondering if it would be wrong to just return the string representation of 
the UUID - that way I would have one entry per shard, because the ShardResolver 
would return a unique string per PersistentActor. 

If I understand the documentation correctly, this would result in one actor per 
shard (and if the number of shards > the number of nodes, I would have 
multiple shards on one node) - the drawback (I think) is that with this 
scheme the large number of shards may cause problems during rebalancing and 
when adding new actors. 

If the scheme of just returning the UUID value is not a very good idea (be it 
for the reason I mentioned or another one), should I use some number of 
leading characters from the UUID so that many actors are grouped per shard? 
This way I would have 16, 256, ... possible return values from the 
ShardResolver.
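
For illustration, a minimal sketch of that prefix idea, assuming the standard 
lowercase-hex UUID string form (two leading characters give up to 256 shards):

def shardIdByPrefix(id: java.util.UUID): String =
  id.toString.take(2) // first two hex characters -> up to 256 possible shards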



Re: [akka-user] Apache Kafka as journal - retention times/PersistentView and partitions

2014-08-26 Thread Andrzej Dębski


> You're right. If you want to keep all data in Kafka without ever deleting 
> them, you'd need to add partitions dynamically (which is currently possible 
> with APIs that back the CLI). On the other hand, using Kafka this way is 
> the wrong approach IMO. If you really need to keep the full event history, 
> keep old events on HDFS or wherever and only the more recent ones in Kafka 
> (where a full replay must first read from HDFS and then from Kafka) or use 
> a journal plugin that is explicitly designed for long-term event storage. 
>

That was worrying me all along about using Kafka in a situation where I 
would want to keep the events forever (or at least for an unknown amount of 
time). The thing that seemed nice is that I would have the journal/event store 
and the pub-sub solution implemented in one technology - basically I want to 
work around the current limitation of PersistentView. I wanted to use a Kafka 
topic and replay all events from the topic to dynamically added read models in 
my cluster. Maybe in this situation I should stick to distributed 
publish-subscribe in the cluster for current event-sending, and Cassandra as 
the journal/snapshot store. I have not read that much about Cassandra and the 
way it stores data, so I do not know if reading all events would be easy.

The main reason why I developed the Kafka plugin was to integrate my Akka 
> applications in unified log processing architectures as described in Jay 
> Kreps' excellent article 
> <http://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying>.
>  
> Also mentioned in this article is a snapshotting strategy that fits typical 
> retention times in Kafka.
>

Thanks for the link. 

The most interesting next Kafka plugin feature for me to develop is an HDFS 
> integration for long-term event storage (and full event history replay). 
> WDYT?
>

That would be an interesting feature - it would certainly make the akka + Kafka 
combination viable for more use cases.

On Tuesday, 26 August 2014 at 19:44:05 UTC+2, Martin Krasser wrote:
>
>  
> On 26.08.14 16:44, Andrzej Dębski wrote:
>  
> My mind must have filtered out the possibility of making snapshots using 
> Views - thanks. 
>
>  About partitions: I suspected as much. The only thing that I am 
> wondering about now is whether it is possible to dynamically create partitions 
> in Kafka. AFAIK the number of partitions is set during topic creation (be it 
> programmatically using the API or CLI tools), and there is a CLI tool you can 
> use to modify an existing topic: 
> https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-5.AddPartitionTool. 
> To keep the invariant "a PersistentActor is the only writer to a 
> partitioned journal topic" you would have to create those partitions 
> dynamically (usually you don't know up front how many PersistentActors your 
> system will have) on a per-PersistentActor basis.
>  
>
> You're right. If you want to keep all data in Kafka without ever deleting 
> them, you'd need to add partitions dynamically (which is currently possible 
> with APIs that back the CLI). On the other hand, using Kafka this way is 
> the wrong approach IMO. If you really need to keep the full event history, 
> keep old events on HDFS or wherever and only the more recent ones in Kafka 
> (where a full replay must first read from HDFS and then from Kafka) or use 
> a journal plugin that is explicitly designed for long-term event storage. 
>
> The main reason why I developed the Kafka plugin was to integrate my Akka 
> applications in unified log processing architectures as descibed in Jay 
> Kreps' excellent article 
> <http://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying>.
>  
> Also mentioned in this article is a snapshotting strategy that fits typical 
> retention times in Kafka.
>
>  
>  On the other hand maybe you are assuming that each actor is writing to a 
> different topic 
>  
>
> yes, and the Kafka plugin is currently implemented that way.
>
>  - but I think this solution is not viable because the number of 
> topics is limited by ZK and other factors: 
> http://grokbase.com/t/kafka/users/133v60ng6v/limit-on-number-of-kafka-topic.
>  
>
> A more in-depth discussion about these limitations is given at 
> http://www.quora.com/How-many-topics-can-be-created-in-Apache-Kafka with 
> a detailed comment from Jay. I'd say that if you designed your application 
> to run more than a few hundred persistent actors, then the Kafka plugin is 
> probably the wrong choice. I tend to design my applications to have only a 
> small number of persistent actor

Re: [akka-user] Apache Kafka as journal - retention times/PersistentView and partitions

2014-08-26 Thread Andrzej Dębski
My mind must have filtered out the possibility of making snapshots using 
Views - thanks.

About partitions: I suspected as much. The only thing that I am wondering 
about now is whether it is possible to dynamically create partitions in Kafka. 
AFAIK the number of partitions is set during topic creation (be it 
programmatically using the API or CLI tools), and there is a CLI tool you can 
use to modify an existing topic: 
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-5.AddPartitionTool. 
To keep the invariant "a PersistentActor is the only writer to a 
partitioned journal topic" you would have to create those partitions 
dynamically (usually you don't know up front how many PersistentActors your 
system will have) on a per-PersistentActor basis.

On the other hand, maybe you are assuming that each actor writes to a 
different topic - but I think this solution is not viable because the number 
of topics is limited by ZK and other factors: 
http://grokbase.com/t/kafka/users/133v60ng6v/limit-on-number-of-kafka-topic.
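
As a side note, a minimal sketch of the ordered-partitioning scheme Martin 
describes in his reply quoted below (eventsPerPartition is an illustrative 
constant, not a plugin setting): a single writer fills partition 0 with events 
1..n, partition 1 with events n+1..2n, and so on, so replaying the partitions 
in order preserves the event order.

val eventsPerPartition = 1000000L

// Deterministic mapping from a persistent actor's sequence number to the
// partition that holds it; replay walks partitions 0, 1, 2, ... in order.
def partitionFor(sequenceNr: Long): Int =
  ((sequenceNr - 1) / eventsPerPartition).toInt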

On Tuesday, 26 August 2014 at 15:28:47 UTC+2, Martin Krasser wrote:
>
>  Hi Andrzej,
>
> On 26.08.14 09:15, Andrzej Dębski wrote:
>  
> Hello 
>
>  Lately I have been reading about the possibility of using Apache Kafka as a 
> journal/snapshot store for akka-persistence.
>
>  I am aware of the plugin created by Martin Krasser: 
> https://github.com/krasserm/akka-persistence-kafka/ and I have also read 
> another topic about Kafka as a journal: 
> https://groups.google.com/forum/#!searchin/akka-user/kakfka/akka-user/iIHmvC6bVrI/zeZJtW0_6FwJ.
>
>  In both sources I linked, two ideas were presented:
>
>  1. Set log retention to 7 days, take snapshots every 3 days (example values).
>  2. Set log retention to unlimited.
>
>  Here is the first question: in the first case, wouldn't it mean that 
> persistent views would receive a skewed view of the PersistentActor's state 
> (only events from 7 days)? Is that really a viable solution? As far as I know, 
> a PersistentView can only receive events - it can't receive snapshots from 
> the corresponding PersistentActor (which is good in the general case).
>  
>
> PersistentViews can create their own snapshots which are isolated from the 
> corresponding PersistentActor's snapshots.
>
>  
>  Second question (more directed to Martin): in the thread I linked, you 
> wrote: 
>
>> I don't go into Kafka partitioning details here but it is possible to 
>> implement the journal driver in a way that both a single persistent actor's 
>> data are partitioned *and* kept in order
>
>  I am very interested in this idea. AFAIK it is not yet implemented in the 
> current plugin, but I was wondering if you could share the high-level idea of 
> how you would achieve that (one persistent actor, multiple partitions, 
> ordering ensured)?
>  
>
> The idea is to
>
> - first write events 1 to n to partition 1
> - then write events n+1 to 2n to partition 2
> - then write events 2n+1 to 3n to partition 3
> - ... and so on
>
> This works because a PersistentActor is the only writer to a partitioned 
> journal topic. During replay, you first replay partition 1, then partition 
> 2 and so on. This should be rather easy to implement in the Kafka journal; I 
> just didn't have time so far, and pull requests are welcome :) Btw, the Cassandra 
> journal <https://github.com/krasserm/akka-persistence-cassandra> follows 
> the very same strategy for scaling with data volume (by using different 
> partition keys). 
>
> Cheers,
> Martin
>
>
> -- 
> Martin Krasser
>
> blog:http://krasserm.blogspot.com
> code:http://github.com/krasserm
> twitter: http://twitter.com/mrt1nz
>
> 



[akka-user] Apache Kafka as journal - retention times/PersistentView and partitions

2014-08-26 Thread Andrzej Dębski
Hello

Lately I have been reading about the possibility of using Apache Kafka as a 
journal/snapshot store for akka-persistence.

I am aware of the plugin created by Martin Krasser: 
https://github.com/krasserm/akka-persistence-kafka/ and I have also read 
another topic about Kafka as a journal: 
https://groups.google.com/forum/#!searchin/akka-user/kakfka/akka-user/iIHmvC6bVrI/zeZJtW0_6FwJ.

In both sources I linked, two ideas were presented:

1. Set log retention to 7 days, take snapshots every 3 days (example values).
2. Set log retention to unlimited.

Here is the first question: in the first case, wouldn't it mean that persistent 
views would receive a skewed view of the PersistentActor's state (only events 
from 7 days)? Is that really a viable solution? As far as I know, a 
PersistentView can only receive events - it can't receive snapshots from the 
corresponding PersistentActor (which is good in the general case).

Second question (more directed to Martin): in the thread I linked, you 
wrote: 

> I don't go into Kafka partitioning details here but it is possible to 
> implement the journal driver in a way that both a single persistent actor's 
> data are partitioned *and* kept in order

I am very interested in this idea. AFAIK it is not yet implemented in the 
current plugin, but I was wondering if you could share the high-level idea of 
how you would achieve that (one persistent actor, multiple partitions, 
ordering ensured)?



Re: [akka-user] Akka LoggingReceive and stackable traits - strange behaviour

2014-07-28 Thread Andrzej Dębski
Thanks!

I was browsing through the code, but my mind somehow did not register 
that if I do not use isDefinedAt there will not be any logging.

On Monday, 28 July 2014 at 13:21:50 UTC+2, Martynas Mickevičius wrote:
>
> Hi Andrzej,
>
> the answer is in the code. :) The code says 
> <https://github.com/akka/akka/blob/b88c964bd4f089a5dbcbc9c679265aba0e2a3654/akka-actor/src/main/scala/akka/event/LoggingReceive.scala#L56> 
> that LoggingReceive logs the message not in the apply but in the isDefinedAt 
> function. Therefore you need to chain the partial functions correctly using 
> orElse. So in your case you need to change StackableActorTracing to 
> handle that.
>
> trait StackableActorTracing extends ActorStack {
>
>   var currentMessage: Traced[Any] = Traced("tracing turned off")
>
>   val receivePf: PartialFunction[Any, Unit] = {
> case msg @ Traced(inner) => {
>
>   currentMessage = msg
>   super.receive(inner)
> }
>   }
>
>   override def receive = receivePf.orElse(super.receive)
>
> }
>
>
>
> On Thu, Jul 24, 2014 at 4:58 PM, Andrzej Dębski wrote:
>
>> Hello
>>
>> "Inspired" by the presentation 
>> http://www.slideshare.net/EvanChan2/akka-inproductionpnw-scala2013 I 
>> wanted to introduce stackable traits pattern to my Akka application. For 
>> now I wanted to add logging of messages using LoggingReceive and basic 
>> tracing with akka-tracing.
>>
>> Following the presentation I created:
>>
>> trait ActorStack extends Actor {
>>   def wrappedReceive: Receive
>>
>>   def receive: Receive = {
>> case x => if (wrappedReceive.isDefinedAt(x)) wrappedReceive(x) else 
>> unhandled(x)
>>   }
>> }
>>
>> and two traits:
>>
>> trait LoggingReceiveActor extends ActorStack {
>>   override def receive: Receive = LoggingReceive {
>> case x => {
>>   println("+++" + x) // added for debugging 
>>   super.receive(x)
>> }
>>   }
>> }
>>
>> trait StackableActorTracing extends ActorStack with ActorTracing {
>>   var currentMessage: Traced[Any] = Traced("tracing turned off")
>>
>>   override def receive: Receive = {
>> case msg @ Traced(inner) => {
>>   trace.sample(msg, this.getClass.getSimpleName)
>>   currentMessage = msg
>>   super.receive(inner)
>> }
>>
>> case msg => {
>>   currentMessage = Traced("tracing turned off")
>>   super.receive(msg)
>> }
>>   }
>>
>>   def msg = {
>> currentMessage
>>   }
>> }
>>
>> also I created one utility trait that gathers those two traits:
>>
>> trait BaseActor extends ActorStack with ActorLogging
>>   with LoggingReceiveActor with StackableActorTracing
>>
>> Now when I take my example actor:
>>
>> class RotationService(rotationRepository: ActorRef) extends BaseActor 
>> with DefaultAskTimeout {
>>   implicit val dispatcher = context.dispatcher
>>
>>   def wrappedReceive: Receive = {
>> case createCommand @ CreateRotation() => {
>> ...
>> }
>>
>> When I execute a unit test that sends a *CreateRotation* message to the 
>> *RotationService* actor I get:
>>
>> Testing started at 3:41 PM ...
>> Connected to the target VM, address: '127.0.0.1:51978', transport: 
>> 'socket'
>> 15:41:36.520 
>> [TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-2]
>>  
>> INFO  akka.event.slf4j.Slf4jLogger - Slf4jLogger started
>> 15:41:36.565 
>> [TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-2]
>>  
>> DEBUG akka.event.EventStream - logger log1-Slf4jLogger started
>> 15:41:36.584 
>> [TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-2]
>>  
>> DEBUG akka.event.EventStream - Default Loggers started
>> 15:41:36.586 
>> [TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-2]
>>  
>> DEBUG a.a.LocalActorRefProvider$SystemGuardian - now supervising 
>> Actor[akka://TestActorSystem-9dc08bc090f2416bbf369947fd64908b/system/deadLetterListener#-162762200]
>> 15:41:36.588 
>> [TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-2]
>>  
>> DEBUG akka.event.DeadLetterListener - started 
>> (akka.event.DeadLetterListener@2025eae4)
>> 15:41:36.627 
>> [TestActorSystem-9dc08bc090f2416b

[akka-user] Akka LoggingReceive and stackable traits - strange behaviour

2014-07-24 Thread Andrzej Dębski
Hello

"Inspired" by the 
presentation http://www.slideshare.net/EvanChan2/akka-inproductionpnw-scala2013 
I wanted to introduce stackable traits pattern to my Akka application. For 
now I wanted to add logging of messages using LoggingReceive and basic 
tracing with akka-tracing.

Following the presentation I created:

trait ActorStack extends Actor {
  def wrappedReceive: Receive

  def receive: Receive = {
case x => if (wrappedReceive.isDefinedAt(x)) wrappedReceive(x) else 
unhandled(x)
  }
}

and two traits:

trait LoggingReceiveActor extends ActorStack {
  override def receive: Receive = LoggingReceive {
case x => {
  println("+++" + x) // added for debugging 
  super.receive(x)
}
  }
}

trait StackableActorTracing extends ActorStack with ActorTracing {
  var currentMessage: Traced[Any] = Traced("tracing turned off")

  override def receive: Receive = {
case msg @ Traced(inner) => {
  trace.sample(msg, this.getClass.getSimpleName)
  currentMessage = msg
  super.receive(inner)
}

case msg => {
  currentMessage = Traced("tracing turned off")
  super.receive(msg)
}
  }

  def msg = {
currentMessage
  }
}

I also created one utility trait that combines those two traits:

trait BaseActor extends ActorStack with ActorLogging
  with LoggingReceiveActor with StackableActorTracing

Now when I take my example actor:

class RotationService(rotationRepository: ActorRef) extends BaseActor with 
DefaultAskTimeout {
  implicit val dispatcher = context.dispatcher

  def wrappedReceive: Receive = {
case createCommand @ CreateRotation() => {
...
}

When I execute a unit test that sends a *CreateRotation* message to the 
*RotationService* actor I get:

Testing started at 3:41 PM ...
Connected to the target VM, address: '127.0.0.1:51978', transport: 'socket'
15:41:36.520 
[TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-2]
 
INFO  akka.event.slf4j.Slf4jLogger - Slf4jLogger started
15:41:36.565 
[TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-2]
 
DEBUG akka.event.EventStream - logger log1-Slf4jLogger started
15:41:36.584 
[TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-2]
 
DEBUG akka.event.EventStream - Default Loggers started
15:41:36.586 
[TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-2]
 
DEBUG a.a.LocalActorRefProvider$SystemGuardian - now supervising 
Actor[akka://TestActorSystem-9dc08bc090f2416bbf369947fd64908b/system/deadLetterListener#-162762200]
15:41:36.588 
[TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-2]
 
DEBUG akka.event.DeadLetterListener - started 
(akka.event.DeadLetterListener@2025eae4)
15:41:36.627 
[TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-5]
 
INFO  kamon.statsd.StatsDExtension - Starting the Kamon(StatsD) extension
15:41:36.652 
[TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-5]
 
DEBUG a.a.LocalActorRefProvider$Guardian - now supervising 
Actor[akka://TestActorSystem-9dc08bc090f2416bbf369947fd64908b/user/statsd-metrics-sender#-609019730]
15:41:36.688 
[TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-4]
 
DEBUG a.a.LocalActorRefProvider$SystemGuardian - now supervising 
Actor[akka://TestActorSystem-9dc08bc090f2416bbf369947fd64908b/system/IO-UDP-FF#221802495]
15:41:36.707 
[TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-3]
 
DEBUG kamon.statsd.StatsDMetricsSender - started 
(kamon.statsd.StatsDMetricsSender@1f06d526)
15:41:36.747 
[TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-4]
 
DEBUG a.a.LocalActorRefProvider$Guardian - now supervising 
Actor[akka://TestActorSystem-9dc08bc090f2416bbf369947fd64908b/user/kamon-metrics-subscriptions#-992962374]
15:41:36.770 
[TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-4]
 
DEBUG kamon.metrics.Subscriptions - started 
(kamon.metrics.Subscriptions@6680693)
15:41:36.778 
[TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-4]
 
DEBUG akka.routing.RouterPoolActor - started 
(akka.routing.RouterPoolActor@66ee2542)
15:41:36.779 
[TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-4]
 
DEBUG akka.routing.RouterPoolActor - now supervising 
Actor[akka://TestActorSystem-9dc08bc090f2416bbf369947fd64908b/system/IO-UDP-FF/selectors/$a#-483988336]
15:41:36.787 
[TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-3]
 
DEBUG akka.io.UdpManager - started (akka.io.UdpManager@4fb22e7b)
15:41:36.792 
[TestActorSystem-9dc08bc090f2416bbf369947fd64908b-akka.actor.default-dispatcher-3]
 
DEBUG akka.io.UdpManager - now supervising 
Actor[akka://TestActorSystem-9dc08bc090f2416bbf369947fd64908b/system/IO-UDP-FF/selectors#1030403184]
15:41:36.931 
[TestActorSystem-9dc08bc090f2416b