Re: [akka-user] Re: Inability to route through a proxy.

2015-01-23 Thread Tim St. Clair
Could you please elaborate on "However this has the limitation that all 
containers need to be running on the same host." What is the constraint?

Right now we are using Kubernetes, and all inbound clients appear as if 
they are coming in through the same NAT. 

On Friday, January 23, 2015 at 10:35:51 AM UTC-6, Martynas Mickevičius 
wrote:
>
> Upcoming Akka 2.4 has NAT (container) traversal support implemented.
>
> Until then you can run the Spark container with the --net=host flag to start a 
> container that uses the host network interface. However this has the limitation 
> that all containers need to be running on the same host.
>
> On Thu, Jan 22, 2015 at 11:51 PM, jay vyas  > wrote:
>
>> To add some color.
>>
>> 1) When we run w/ -Dakka.remote.untrusted-mode=on, we see dropping 
>> message [class akka.actor.ActorSelectionMessage] for *unknown* recipient
>>
>> [Actor[akka.tcp://sparkMaster@10.254.230.67:7077/]] arriving at 
>> [akka.tcp://sparkMaster@10.254.230.67:7077] inbound addresses are 
>> [akka.tcp://sparkMaster@spark-master:7077]
>>
>> 2) When we run w/ -Dakka.remote.untrusted-mode=off, we see dropping 
>> message [class akka.actor.ActorSelectionMessage] for *non-local *recipient 
>>
>>
>> [Actor[akka.tcp://sparkMaster@10.254.118.158:7077/]] arriving at 
>> [akka.tcp://sparkMaster@10.254.118.158:7077] inbound addresses are 
>> [akka.tcp://sparkMaster@spark-master:7077]
>>
>> On Thursday, January 22, 2015 at 4:00:42 PM UTC-5, Tim St. Clair wrote:
>>>
>>> Greetings folks - 
>>>
>>> I'm currently trying to run Spark master through a proxy and receiving 
>>> an error that I can't seem to bypass. 
>>>
>>> ERROR EndpointWriter: dropping message [class akka.actor.ActorSelectionMessage]
>>> for non-local recipient [Actor[akka.tcp://sparkMaster@10.254.118.158:7077/]]
>>> arriving at [akka.tcp://sparkMaster@10.254.118.158:7077] inbound addresses are
>>> [akka.tcp://sparkMaster@spark-master:7077]
>>>
>>> The spark-master is running inside a container which is on a 192.168 
>>> subnet, but all traffic from the slaves is routed via iptables through a 
>>> load-balanced proxy, 10.254.118.158.  
>>>
>>> Is there any easy way to disable what appears to be IP validation?  
>>>
>>> Cheers,
>>> Tim
>>>
>
>
>
> -- 
> Martynas Mickevičius
> Typesafe – Reactive Apps on the JVM



Re: [akka-user] TCP Outgoing Connection Lifecycle

2015-01-23 Thread colestanfield
Hey Endre,

Thanks for the response but I don't quite understand how to implement what 
you're suggesting. I did find the Fold class but it has no Promise/Future 
in it. I also found the FoldSink class but the only way to get a future 
back from that is to use the attach method and I've no idea how to get that 
into my flow. Any example of how to do this?

Here's what I'm working with currently:

val connection = StreamTcp().outgoingConnection(address)
val handler: Flow[ByteString, ByteString] =
  Flow[ByteString]
    .mapConcat(frameReconciler.reconcile)
    .map(decodeRequest)
    .map(handleRequest)
    .map(encodeResponse)
    .map(createFrame)
connection.handleWith(handler)

Best regards,
Cole

On Friday, January 23, 2015 at 8:55:32 AM UTC-8, drewhk wrote:
>
> Hi Cole,
>
> The connection disconnect event will be signalled as stream completion to 
> the reading stream and cancellation to the writing stream. For example if 
> you fold over the output stream of the TCP connection, on TCP closure 
> (assuming normal close event) the fold element will emit the final result 
> in its corresponding Future.
>
> -Endre
>
> On Fri, Jan 23, 2015 at 10:17 AM, > 
> wrote:
>
>> Hey all,
>>
>> After making a StreamTcp().outgoingConnection(address) connection, how 
>> can I watch the connection for disconnects so that I can establish a new 
>> connection? I see an akka.io.Tcp$ConfirmedClosed$ and 
>> an akka.actor.Terminated getting logged in dead letters after the server 
>> restarts, but I have no idea how I can get hold of them or if there's a better way.
>>
>> Thanks in advance,
>> Cole
>>
>
>



Re: [akka-user] BalancingPool versus RoundRobinPool + BalancingDispatcher

2015-01-23 Thread Roland Kuhn
You’re welcome!

> On 23 Jan 2015, at 15:15, Jean Helou wrote:
> 
> Hi Roland, 
>  
> The BalancingDispatcher is a very particular setup: a thread pool of size N 
> with N actors that all pull from the same queue. Thread pool sizes are not 
> changeable at runtime in Akka.
>  
> Thanks for the explanation. I can see why it wouldn't make sense to have a 
> resizable BalancingDispatcher. 
> 
> I have taken good note of your remarks on the code and I will integrate them 
> into my code. 
> 
> However, this discussion made me realize that the balancing dispatcher is not, 
> in fact, suited to my use case. 
>  
> I was hoping to use the work-stealing properties of the balancing dispatcher 
> to avoid having to provide my own custom implementation of the work pulling 
> pattern. 
> I see now that I was mistaken, and will have to roll my own, with the 
> buffering and all. 
> 
> Thanks again for your answers
> 
> Jean
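
A minimal sketch of the work-pulling pattern discussed above, for reference (the
WorkMaster/Worker names and the message protocol are illustrative assumptions,
not code from this thread):

import akka.actor.{Actor, ActorRef}
import scala.collection.immutable.Queue

case object WorkerReady            // sent by a worker when it is idle
case class Work(payload: String)   // one unit of work

// The master buffers incoming work and only hands it to workers that asked for it.
class WorkMaster extends Actor {
  private var pending = Queue.empty[Work]
  private var idle = Queue.empty[ActorRef]

  def receive = {
    case w: Work if idle.nonEmpty =>
      val (worker, rest) = idle.dequeue
      idle = rest
      worker ! w
    case w: Work =>
      pending = pending.enqueue(w)   // buffer until some worker pulls
    case WorkerReady if pending.nonEmpty =>
      val (w, rest) = pending.dequeue
      pending = rest
      sender() ! w
    case WorkerReady =>
      idle = idle.enqueue(sender())
  }
}

// A worker pulls a new piece of work every time it finishes one.
class Worker(master: ActorRef) extends Actor {
  override def preStart(): Unit = master ! WorkerReady

  def receive = {
    case Work(payload) =>
      // ... process the payload ...
      master ! WorkerReady
  }
}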
> 



Dr. Roland Kuhn
Akka Tech Lead
Typesafe – Reactive apps on the JVM.
twitter: @rolandkuhn
 



Re: [akka-user] Cluster unreachable and a lot of cluster connections

2015-01-23 Thread Roland Kuhn

> On 23 Jan 2015, at 08:39, Johannes Berg wrote:
> 
> Thanks for the answers, this really explains a lot. I will go back to my 
> abyss and rethink some things. See below some answers/comments.
> 
> On Thursday, January 22, 2015 at 6:31:01 PM UTC+2, drewhk wrote:
> Hi Johannes,
> 
> On Thu, Jan 22, 2015 at 4:53 PM, Johannes Berg  > wrote:
> 
> I will try that but it seems that will only help to a certain point and when 
> I push the load further it will hit it again.
> 
> There is no system message traffic between two Akka systems by default, to 
> have a system send system messages to another you either need to use remote 
> deployment or deathwatch on remote actors. Which one are you using? What is 
> the scenario?
> 
> I do use deathwatch on remote actors, and the number of deathwatches I have 
> is linear in the load I put on the system, so that explains the increased number 
> of system messages based on load then, I guess.
>  
> 
> The main issue is that whatever is the rate of system message delivery we 
> *cannot* backpressure remote deployment or how many watched remote actors 
> die. For any delivery buffer there is a large enough "mass actor extinction 
> event" that will just fill it up. You can increase the buffer size though up 
> to that point where you expect that a burst maximum is present (for example 
> you know the peak number of remote watched actors and the peak rate of them 
> dying).
> 
> Thinking about these features more closely I can see that these things may 
> require acked delivery, but I would have expected something that grows 
> indefinitely until out-of-memory, like unbounded inboxes. It's not apparent from 
> the documentation about deathwatch that you need to consider some buffer 
> size if you are watching very many actors that may be created or die at a 
> very fast rate (maybe a note about this could be added to the docs?). From a quick 
> glance at the feature you wouldn't expect it to be limited by anything other than 
> normal actor message sending and receiving. Furthermore I wouldn't have 
> expected a buffer overflow due to deathwatch to cause a node to get 
> quarantined and removed from the cluster; instead I would expect some 
> deathwatches to fail to work correctly.

This is a tradeoff that we cannot make in Akka: the core semantics must work 
under all conditions, with clear failure paths. Dropping a Terminated message 
can kill your application on a logical level (in the sense of forever waiting 
for it) and resending manually is not possible, so we cannot possibly drop it. 
The good thing is that we can fabricate this particular message when there is 
need—like a failed remote node—and therefore we do that. But once that has been 
done, the Actor that we said just Terminated must stay dead, it cannot come 
back (because that could also break your program). This is the reason for the 
quarantine. Now the only missing piece is that in a distributed system the only 
reliable measure available for assessing the health of remote nodes is their 
ability to respond to messages, and if one stops responding for long enough then 
we must draw the appropriate conclusions and act on them.

> Causing the node to go down in case of a buffer overflow seems a bit 
> dangerous considering DDoS attacks, even though it maybe makes the system 
> behave more consistently.

There is no “DDoS” here; your app is effectively killing itself by overloading 
the network. This is something that Akka cannot protect you from.

On a final note, Akka weakens or removes consistency in many places in order to 
achieve scalability and resilience, but the choice of what can be sacrificed is 
not arbitrary, certain aspects cannot be inconsistent because that would make 
programming unreasonable (in all senses of the word).

Regards,

Roland

>  
>  
> 
> I hit this within a minute after I put on the load which is a bit annoying to 
> me. I'm fine with it becoming unreachable as long as I can get it back to 
> reachable when it has crunched through the load.
> 
> That means a higher buffer size. If there is no sysmsg buffer size that can 
> absorb your load then you have to rethink your remote deployment/watch 
> strategy (whichever feature you use).
> 
> Now that I know what's causing the increasing rate of system messages I 
> certainly will rethink my deathwatch strategy and/or limit the load based 
> on the configured buffer size.
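
For reference, a minimal sketch of where that buffer is tuned; the setting name
comes from akka-remote's reference.conf, while the value here is only an
illustrative bump, not a recommendation:

# application.conf (sketch): make room for a larger burst of watch/Terminated
# system messages before the buffer overflows and the peer gets quarantined
akka.remote.system-message-buffer-size = 20000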
>  
>  
> Will it still buffer up system messages even though it's unreachable?
> 
> After quarantine there is no system message delivery, everything is dropped. 
> There is no recovery from quarantine; that is its purpose. If there is any 
> lost system message between two systems (and here they are dropped due to the 
> buffer being full) then they are in an undefined state, especially with 
> remote deployment, so they quarantine each other. 
> 
> After quarantine I understand there's no system message delivery, but when 
> it's just unreachable it buffers them up, rig

[akka-user] Second ShardRegion not receiving messages in two cluster ShardRegions app

2015-01-23 Thread Jonathan Rowlands
I modified Typesafe's activator-akka-cluster-sharding-scala example:

https://gist.github.com/jgrowl/028e621d32f2388e876a

Basically I have a Publisher node which has an accompanying redis node that 
watches for messages and then sends them to the publisher. Likewise there 
is a Subscriber that also has a redis watcher.

My code can send messages to my publisherRegion, but messages sent to 
subscriberRegion never reach the intended target. I never get any errors; 
the messages just disappear silently. What seems to be happening is that 
messages are going into the shardBuffers but a receiveCoordinatorMessage is 
never sent, so the messages stay there forever.

If I comment out the publisher node and run the subscriber node (with its 
redis node) then the subscriber works fine in isolation. When using them 
together though only one will receive messages.

I am not sure if what I am trying to do is just outside of the realm of 
expected use cases, if there is a bug somewhere, or if I am just doing 
something wrong.

Any ideas?

Thanks,
Jon



[akka-user] ShardRegionTerminated with LocalRef Issue

2015-01-23 Thread Saparbek Smailov
Hello everyone,

In our project we use Cluster Sharding with Cluster Singletons. Our Cluster 
is set up on AWS Elastic Beanstalk. 
The setup works perfectly until you start removing instances from the 
cluster. The issue arises when a shard is recreated using Akka persistence: 
during the recovery we get an error saying "Shard already allocated".

I went through the logs and realised that some of the ShardRegionTerminated 
events are persisted using a LocalRef.
Let me elaborate on this a little bit:
 - During the recovery I get a ShardRegionRegistered event with a 
RemoteActorRef.
 - Then I get a ShardHomeAllocated event which allocates the shard on the region.
 - I get ShardRegionTerminated with a LocalActorRef, which does not remove 
the Shard and Region from the Maps, as the RemoteActorRef is not equal to the 
LocalActorRef.
 - Then I get ShardRegionRegistered with a Ref having a different IP address.
 - Finally, when the next ShardHomeAllocated is replayed, I get the "Shard already 
allocated" error.

How can I avoid the problem? Is there any way to force Akka to use only the 
RemoteActorRef, or at least to serialize using the RemoteActorRef? Am I not 
using the library properly?
I would appreciate any help/hint on this issue.

Thank you very much!





Re: [akka-user] Akka terminology confusion - Cluster, node, router, routee

2015-01-23 Thread Martynas Mickevičius
Hi Rajesh,

On Wed, Jan 21, 2015 at 12:45 AM, Rajesh Shetty  wrote:

> Here is the layman's understanding:
>
> Cluster -> a network of nodes
> Node -> a physical or virtual instance on a hardware box.
> Essentially this means you can have more than one node on the same physical
> box, which means more than one node can have the same IP address but MUST have
> a different port.
>
> Now this covers the general understanding of clusters/nodes at a typical
> web app scale.
>
> Does Akka follow the same convention? The question is, when you specify the
> following application.conf for seed nodes:
>
> seed-nodes = [
>   "akka.tcp://ClusterSystem@127.0.0.1:2551",
>   "akka.tcp://ClusterSystem@127.0.0.1:2552"]
>
> which means it starts 2 nodes and adds them to the cluster. Now the question
> is, if I have a bunch of actors, do they get created for each of the nodes? E.g.
> if I have created 4 actors as a part of the process, will I have 8 actors of
> those 4 types, 4 in each node? Is that a right assumption?
>

A seed node is a node that is going to be contacted in order to join a cluster. Node
here means an instance of ActorSystem. This list should contain the addresses
of nodes that have already been started.

If you are starting a node that is going to be the first one to form a
cluster, you should include its own address in the seed node list, so that the
node "joins" itself and forms the initial cluster.


>
> Next question is routers and routees
>
> As I understand it, each of these actors can be a router (ACTOR > Router) and a
> router can create child actors a.k.a. routees. So let's say, using the above
> example, out of the 4 actors I make 2 of them routers (not sure how to make them
> routers) and each router creates 4 more child actors (routees). Which means
> in my cluster I will have
>
> Node1 - 4 actors; 2 routers, 2 regular actors
>  > 2 routers create > 4 child actors (routees) each ->
>  Node1 total actors: 8
> Node2 - 4 actors; 2 routers, 2 regular actors
>  > 2 routers create > 4 child actors (routees) each ->
>  Node2 total actors: 8
>
> So the total actors in the cluster is 16. Is that a right assumption or
> calculation?
>
> In short: Cluster -> (1-n) Nodes (physical or virtual) -> (1-1) Routers
> -> (1-n) Routees
>

A router can be either a pool router, which means that the router will create
and manage its routees, or a group router, which means that the routees have
already been created and such a router will use paths to route messages to
the routees.

Routers do not count as regular actors. Routing logic is implemented in
the ActorRef itself.
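
A minimal sketch of the two flavours in Scala (the Worker class, pool size and
routee paths are illustrative assumptions):

import akka.actor.{Actor, ActorSystem, Props}
import akka.routing.{RoundRobinGroup, RoundRobinPool}

class Worker extends Actor {
  def receive = { case _ => () } // handle the message here
}

val system = ActorSystem("ClusterSystem")

// Pool router: the router itself creates and supervises 4 routees.
val pool = system.actorOf(RoundRobinPool(4).props(Props[Worker]), "poolRouter")

// Group router: routees already exist elsewhere; the router only routes by path.
val paths = List("/user/workers/w1", "/user/workers/w2")
val group = system.actorOf(RoundRobinGroup(paths).props(), "groupRouter")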


>
>
>
>
>



-- 
Martynas Mickevičius
Typesafe – Reactive Apps on the JVM



Re: [akka-user] What is the best way to start Akka cluster nodes ?

2015-01-23 Thread Martynas Mickevičius
Hi Rajesh,

have you tried the sbt-native-packager plugin? Using this sbt plugin you can
package up standard applications that define a static main method, or apps
that implement a subclass of Bootable.

Either of these can be packaged into various formats - from simple zip
archives to Docker containers or Debian packages.
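
A minimal sketch of the wiring (the plugin version and tasks are assumptions
appropriate for that era; check the plugin documentation for the current ones):

// project/plugins.sbt
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.0.0")

// build.sbt - generates start scripts for an app with a static main()
enablePlugins(JavaAppPackaging)

Then, for example, "sbt stage" produces a runnable layout under
target/universal/stage, "sbt universal:packageBin" builds a zip, and
"sbt docker:publishLocal" builds a Docker image.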

On Wed, Jan 21, 2015 at 5:58 AM, Rajesh Shetty  wrote:

> What is the best way to start Akka cluster nodes?.
> as a
> > microkernel (bootable )
> > using java -jar
> > anything else ?
>
> in either case what should be the code structure
>
> *Microkernel*
>
> > Actors
> > bootable extension (for microkernel) -> that creates ActorSystem
> > What else?
>
> *> using java -jar *
> > Actors
> > ?
>
>
> Do we have any examples of these techniques that I can refer to?
>
>



-- 
Martynas Mickevičius
Typesafe – Reactive Apps on the JVM



Re: [akka-user] Cluster setup not sending messages to other nodes

2015-01-23 Thread Rajesh Shetty
Thanks Martynas, that makes it clear. I just wanted to make sure, since I 
assumed that by having set up a cluster it would by default perform load 
balancing on the incoming messages.

So setting up akka.cluster.ClusterActorRefProvider just creates a bunch of 
nodes in a cluster where they know about each other, but they don't 
talk to each other unless you make them talk, e.g. by using Cluster Aware 
Routers to route messages.


-Rajesh
 

On Friday, January 23, 2015 at 9:31:19 AM UTC-8, Martynas Mickevičius wrote:
>
> Hi Rajesh,
>
> you are explicitly sending messages to /user/HelloActor actor running on 
> hellokernel@127.0.0.1:2551. This is encoded in the path you provide to 
> actorSelection. No message reaches another node, because you are sending 
> every message to a first node.
>
> You will need to set up a Cluster Aware Router to 
> route messages across actors running on multiple nodes.
>
> On Thu, Jan 22, 2015 at 12:25 AM, Rajesh Shetty  > wrote:
>
>> I have set up a cluster of 2 nodes using the following config:
>>
>> akka {
>>   log-dead-letters = 10
>>   log-dead-letters-during-shutdown = on
>>   actor {
>>     provider = "akka.cluster.ClusterActorRefProvider"
>>   }
>>   remote {
>>     log-remote-lifecycle-events = off
>>     netty.tcp {
>>       hostname = "127.0.0.1"
>>       port = 0
>>     }
>>   }
>>   cluster {
>>     min-nr-of-members = 2
>>     seed-nodes = [
>>       "akka.tcp://hellokernel@127.0.0.1:2551",
>>       "akka.tcp://hellokernel@127.0.0.1:2552"]
>>     auto-down-unreachable-after = 10s
>>   }
>> }
>>
>>
>>
>> public static class HelloActor extends UntypedActor {
>>
>>   final ActorRef worldActor = getContext().actorOf(Props.create(WorldActor.class));
>>
>>   public void onReceive(Object message) {
>>     if (message.equals("start")) {
>>       worldActor.tell("Hello", getSelf());
>>     } else if (message instanceof String) {
>>       String s = String.format("Received message '%s' from '%s'", message, getSender().toString());
>>       System.out.println(s);
>>       try {
>>         // intentional sleep to emulate nodeX - HelloActor to be busy
>>         Thread.sleep(1);
>>       } catch (Exception e) {
>>         e.printStackTrace();
>>         System.out.println("exception while thread sleeping");
>>       }
>>       System.out.println("HelloActor on current node done processing..");
>>     } else {
>>       unhandled(message);
>>     }
>>   }
>> }
>>
>> public static class WorldActor extends UntypedActor {
>>   public void onReceive(Object message) {
>>     if (message instanceof String)
>>       getSender().tell(((String) message).toUpperCase() + " world!", getSelf());
>>     else
>>       unhandled(message);
>>   }
>> }
>>
>>
>>
>> and each of the nodes is just a simple microkernel-bootable hello actor and 
>> world actor. 
>>
>>
>> I'm calling the Akka cluster from a Play app. I'm calling the leader node (the 
>> one running on 2551) using the following code multiple times every second; it 
>> keeps sending messages to node1 and not distributing work to node2, 
>> even though node1's actor is still busy (10 secs sleep).
>>
>>
>>
>>
>> public Result testAkka(String name) {
>>   ActorSystem actorSystem = ActorSystem.create("hellokernel");
>>   ActorSelection myActor = actorSystem.actorSelection(
>>       "akka.tcp://hellokernel@127.0.0.1:2551/user/HelloActor");
>>   myActor.tell(name, null);
>>   return Results.ok("message sent");
>> }
>>
>>
>>
>>
>> So my question is:
>>
>> > for balancing of load/messages to happen, do we have to set up routers, 
>> or is that a default functionality of the cluster?
>>
>> > if the answer is yes (we do have to set up a router), how do I do it?
>>
>> > if the answer is no (meaning the cluster leader should by default send messages to 
>> node2 if node1 is busy), why is the above example not working?
>>
>>
>>
>>
>>
>>
>>
>>
>
>
>
> -- 
> Martynas Mickevičius
> Typesafe – Reactive Apps on the JVM

Re: [akka-user] ClusterSharding entry persistence

2015-01-23 Thread Martynas Mickevičius
On Wed, Jan 21, 2015 at 5:04 PM, Brice Figureau 
wrote:

> Hi,
>
> I'm trying to implement persistence of entries (as in their presence not
> their data) in a ClusterSharding based system. For the moment I have a
> persistent ClusterSingleton that creates all entries, so that in the
> event of a full cluster restart every running entry would be
> re-created (by sending them an Init message).
>
> The only issue is that (before ClusterSharding) this singleton was
> watching the created children so that it knew when they would disappear
> (and thus knew that processing was over for that entry).
>
> With ClusterSharding, is there a way to do that, besides also sending the
> Passivate to my singleton?
>
> Next, I've noticed that Akka master's ClusterSharding seems to implement
> what I did with the "rememberEntries" parameter, certainly in a
> cleverer/better way than my poor-man's implementation. Is this new contrib
> compatible with the latest Akka 2.3?
> How do I use this newer version?
>

This feature has been merged to the master branch, which will become the 2.4
release in a couple of months. You can give it a spin using a snapshot version
of Akka. Documentation on this can also be found on the cluster sharding page.
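
A minimal sbt sketch for trying that out (the snapshot repository URL, the
version string and the module name are assumptions; check the Akka site for the
current ones, as the sharding code may move out of akka-contrib on master):

resolvers += "Akka Snapshots" at "http://repo.akka.io/snapshots/"

libraryDependencies += "com.typesafe.akka" %% "akka-contrib" % "2.4-SNAPSHOT"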


>
> And finally, is there a way to roughly get access to the list of running
> entries (their ids for instance) in a given cluster? Or at least to ask
> the local Region for the entries running on a given node?
> This is so that I can provide this information to a management layer we
> have on our system.
>

This new feature also makes it possible to start a PersistentView that gets
updates of entry starts and stops in the shard. Entry state changes are
persisted by the Shard persistent actor.


>
> Thanks!
> --
> Brice Figureau
> My Blog: http://www.masterzen.fr/
>



-- 
Martynas Mickevičius
Typesafe – Reactive Apps on the JVM



Re: [akka-user] Cluster setup not sending messages to other nodes

2015-01-23 Thread Martynas Mickevičius
Hi Rajesh,

you are explicitly sending messages to /user/HelloActor actor running on
hellokernel@127.0.0.1:2551. This is encoded in the path you provide to
actorSelection. No message reaches any other node, because you are sending
every message to the first node.

You will need to set up a Cluster Aware Router to
route messages across actors running on multiple nodes.
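
A minimal sketch of a cluster-aware pool router configured through deployment
config (the /helloRouter path and the sizes are illustrative assumptions):

akka.actor.deployment {
  /helloRouter {
    router = round-robin-pool
    nr-of-instances = 10
    cluster {
      enabled = on
      max-nr-of-instances-per-node = 5
      allow-local-routees = on
    }
  }
}

and then create it from the config, e.g. in Scala:

import akka.routing.FromConfig
val router = system.actorOf(FromConfig.props(Props[HelloActor]), "helloRouter")

Messages sent to this router are then routed across routees deployed on the
cluster members, instead of always hitting one node.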

On Thu, Jan 22, 2015 at 12:25 AM, Rajesh Shetty  wrote:

> I have set up a cluster of 2 nodes using the following config:
>
> akka {
>   log-dead-letters = 10
>   log-dead-letters-during-shutdown = on
>   actor {
>     provider = "akka.cluster.ClusterActorRefProvider"
>   }
>   remote {
>     log-remote-lifecycle-events = off
>     netty.tcp {
>       hostname = "127.0.0.1"
>       port = 0
>     }
>   }
>   cluster {
>     min-nr-of-members = 2
>     seed-nodes = [
>       "akka.tcp://hellokernel@127.0.0.1:2551",
>       "akka.tcp://hellokernel@127.0.0.1:2552"]
>     auto-down-unreachable-after = 10s
>   }
> }
>
>
>
> public static class HelloActor extends UntypedActor {
>
>   final ActorRef worldActor = getContext().actorOf(Props.create(WorldActor.class));
>
>   public void onReceive(Object message) {
>     if (message.equals("start")) {
>       worldActor.tell("Hello", getSelf());
>     } else if (message instanceof String) {
>       String s = String.format("Received message '%s' from '%s'", message, getSender().toString());
>       System.out.println(s);
>       try {
>         // intentional sleep to emulate nodeX - HelloActor to be busy
>         Thread.sleep(1);
>       } catch (Exception e) {
>         e.printStackTrace();
>         System.out.println("exception while thread sleeping");
>       }
>       System.out.println("HelloActor on current node done processing..");
>     } else {
>       unhandled(message);
>     }
>   }
> }
>
> public static class WorldActor extends UntypedActor {
>   public void onReceive(Object message) {
>     if (message instanceof String)
>       getSender().tell(((String) message).toUpperCase() + " world!", getSelf());
>     else
>       unhandled(message);
>   }
> }
>
>
>
> and each of the nodes is just a simple microkernel-bootable hello actor and
> world actor.
>
>
> I'm calling the Akka cluster from a Play app. I'm calling the leader node (the
> one running on 2551) using the following code multiple times every second; it
> keeps sending messages to node1 and not distributing work to node2,
> even though node1's actor is still busy (10 secs sleep).
>
>
>
>
> public Result testAkka(String name) {
>   ActorSystem actorSystem = ActorSystem.create("hellokernel");
>   ActorSelection myActor = actorSystem.actorSelection(
>       "akka.tcp://hellokernel@127.0.0.1:2551/user/HelloActor");
>   myActor.tell(name, null);
>   return Results.ok("message sent");
> }
>
>
>
>
> So my question is:
>
> > for balancing of load/messages to happen, do we have to set up routers,
> or is that a default functionality of the cluster?
>
> > if the answer is yes (we do have to set up a router), how do I do it?
>
> > if the answer is no (meaning the cluster leader should by default send messages to
> node2 if node1 is busy), why is the above example not working?
>
>
>
>
>
>
>
>



-- 
Martynas Mickevičius
Typesafe – Reactive Apps on the JVM



Re: [akka-user] Feedback on Akka Streams 1.0-M2 Documentation

2015-01-23 Thread Endre Varga
Hi Sam,

On Fri, Jan 23, 2015 at 12:02 AM, Sam Halliday 
wrote:

> On Wednesday, 21 January 2015 08:21:15 UTC, drewhk wrote:
> >> I believe it may be possible to use the current 1.0-M2 to address
> >> my bugbear by using the Actor integration to write an actor that
> >> has N instances behind a router, but it feels hacky to have to
> >> use an Actor at all. What is really missing is a Junction that
> >> multiplies the next Flow into N parallel parts that run on
> >> separate threads.
> >
> >
> http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M2/scala/stream-cookbook.html#Balancing_jobs_to_a_fixed_pool_of_workers
>
> I actually missed this when reading the docs... it's a gem buried
> in a sea of examples! :-) Is there anything like this in the "top
> down" style documentation?
>

Currently no, the documentation is about explaining the elements, the
cookbook is a list of examples/patterns that can be used directly or
modified, or just as a source of inspiration. But yes, we should link from
the main pages into the relevant cookbook sections.


>
> A convenience method to set up this sort of thing is exactly what
> I mean. I should imagine that fanning out a Flow for
> embarrasingly parallel processing is a common enough pattern that
> one would want to be able to do this in a one liner. You note
> something about work in this area later on (quoted out of order):
>

If you take that recipe you have a one-liner :) Our main philosophy is not to
prepackage overly many combinators, but instead to encourage flexible use
of them. It is about giving a fish versus teaching how to fish :) The idea is
that if certain patterns seem to be widely used we promote them to
library-provided combinators.


>
> > In the future we will allow users to add explicit markers where
> > the materializer needs to cut up chains/graphs into concurrent
> > entities.
>
> This sounds very good. Is there a ticket I can subscribe to for
> updates? Is there a working name for the materializer so that I
> know what to look out for?
>

Not really, there are multiple tickets. The ActorBasedFlowMaterializer has
an Optimizations parameter which is currently not documented. It will
eagerly collapse entities into synchronous ones as much as possible, but
currently there is no API to add boundaries to this collapse procedure
(e.g. you have two map stages that you *do* want to keep concurrent and
pipelined). Also it currently cannot collapse graph elements,
stream-of-stream elements, mapAsync and the timed elements.

Also, remember that this is about pipelining which is different from the
parallellization demonstrated in the cookbook.


>
>
> > Also, you can try mapAsync or mapAsyncUnordered for similar
> > tasks.
>
> It would be good to read some discussion about these that goes
> further than the API docs. Do they just create a Future and
> therefore have no way to tell a fast producer to slow down?
>

mapAsync/mapAsyncUnordered will keep a bounded number of these futures in
flight at any time and emit their results one-by-one, once they are completed
and the downstream is able to consume them. Once that has happened a new Future
is created. So at any given time there is a bounded number of uncompleted
futures. In other words, Future completion is the backpressure signal to the
upstream.
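
A minimal sketch of that idea (this uses the later stable mapAsync signature
with an explicit parallelism argument; in the 1.0-M2 milestone the bound was
internal and the operator took only the function, so treat the exact signature
as an assumption):

import scala.concurrent.Future
import akka.stream.scaladsl.Source

// Hypothetical asynchronous lookup; at most 4 of these futures are in flight
// at once, so slow futures or a slow downstream backpressure the source.
def lookup(id: Int): Future[String] = Future.successful(s"user-$id")

val users = Source(1 to 100).mapAsync(4)(lookup)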


> How
> does one achieve pushback from these? Pushback on evaluation of
> the result is essential, not on the ability to create/schedule
> the futures. I would very like to see some documentation that
> explains where this has an advantage over plain 'ol Scala Stream
> with a map{Future{...}}.
>

Doing the above on a Scala Stream will create arbitrarily many Futures, and
it will not wait for the result of those Futures. Our mapAsync on the other
hand waits for the result of the Futures and emits those (i.e. it is a
flattening operation), and only keeps a limited number of open Futures at
any given time.


>
>
> >> In general, I felt that the documentation was missing any
> >> commentary on concurrency and parallelisation. I was left
> >> wondering what thread everything was happening on.
> >
> > ... as the default all stream processing element is backed by
> >  an actor ...
>
> The very fact that each component is backed by an Actor is news
> to me. This wasn't at all obvious from the docs and actually
> the "integration with actors" section made me think that streams
> must be implemented completely differently if it needs an
> integration layer!
>

Internals might or might not be actors. It all depends on what
materializer is used (currently there is only one kind), and even now
we have elements not backed by an Actor (CompletedSource and friends for
example). This is completely internal stuff, we don't want to document all
aspects of it. But yes, we can add more high-level info about it.


> Actually, the "actor integration" really
> means "low level streaming actors", doesn't it?
>

I am not sure what this means.


> I would strongly
> recommend ma

Re: [akka-user] TCP Outgoing Connection Lifecycle

2015-01-23 Thread Endre Varga
Hi Cole,

The connection disconnect event will be signalled as stream completion to
the reading stream and cancellation to the writing stream. For example if
you fold over the output stream of the TCP connection, on TCP closure
(assuming normal close event) the fold element will emit the final result
in its corresponding Future.

-Endre
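
A minimal sketch of the idea (written against the later stable akka-stream API
with Tcp() and ActorMaterializer; the 1.0-M2 names such as StreamTcp and
FoldSink differ, and the host/port are illustrative):

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source, Tcp}
import akka.util.ByteString
import scala.concurrent.Future
import scala.util.{Failure, Success}

implicit val system = ActorSystem("tcp-watch")
implicit val mat = ActorMaterializer()
import system.dispatcher

// Fold over everything the server sends; the materialized Future completes
// when the connection's read side completes, i.e. when the server disconnects.
val totalBytes: Sink[ByteString, Future[Long]] =
  Sink.fold[Long, ByteString](0L)(_ + _.length)

val done: Future[Long] =
  Source.single(ByteString("ping"))
    .via(Tcp().outgoingConnection("127.0.0.1", 6000))
    .runWith(totalBytes)

done.onComplete {
  case Success(n)  => println(s"connection closed after $n bytes, reconnect here")
  case Failure(ex) => println(s"connection failed: $ex, reconnect here")
}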

On Fri, Jan 23, 2015 at 10:17 AM,  wrote:

> Hey all,
>
> After making a StreamTcp().outgoingConnection(address) connection, how can
> I watch the connection for disconnects so that I can establish a new
> connection? I see an akka.io.Tcp$ConfirmedClosed$ and
> an akka.actor.Terminated getting logged in dead letters after the server
> restarts, but I have no idea how I can get hold of them or if there's a better way.
>
> Thanks in advance,
> Cole
>



Re: [akka-user] [akka-stream] Periodic source of sources

2015-01-23 Thread Martynas Mickevičius
There was a discussion here on a topic similar to, but not exactly the same as,
the one you described.

Since you do not want to produce ticks at all while the previous source is
still not complete, I think you will have to fall back to ActorPublisher and
implement it yourself.
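
A minimal ActorPublisher skeleton showing the demand gating (SourceDone is a
hypothetical message the consumer sends back once the previously emitted source
has been fully processed, and plain Strings stand in for the Source[String]
elements):

import akka.stream.actor.ActorPublisher
import akka.stream.actor.ActorPublisherMessage.{Cancel, Request}

case object SourceDone

class GatedTickPublisher extends ActorPublisher[String] {
  private var previousDone = true

  def receive = {
    case Request(_) => tryEmit()
    case SourceDone => previousDone = true; tryEmit()
    case Cancel     => context.stop(self)
  }

  // Emit only when there is downstream demand AND the previous element is done.
  private def tryEmit(): Unit =
    if (isActive && totalDemand > 0 && previousDone) {
      onNext("tick")
      previousDone = false
    }
}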

On Thu, Jan 22, 2015 at 9:11 PM, Boris Lopukhov <89bo...@gmail.com> wrote:

> Hi all!
>
> How can I create a Source[Source[String]] that periodically emits a
> Source[String] at the specified interval, but only if the previous
> Source[String] is complete?
>



-- 
Martynas Mickevičius
Typesafe – Reactive Apps on the JVM



Re: [akka-user] [akka-stream] Periodic source of sources

2015-01-23 Thread Endre Varga
On Fri, Jan 23, 2015 at 5:45 PM, Martynas Mickevičius <
martynas.mickevic...@typesafe.com> wrote:

> There was a discussion here on a topic similar to, but not exactly the same as,
> the one you described.
>

umm, no, that is slightly different if I understood the proposal correctly.


>
> Since you do not want to produce ticks at all if the previous source is
> still not complete, I think you will have to fallback to ActorPublisher and
> implement it.
>
> On Thu, Jan 22, 2015 at 9:11 PM, Boris Lopukhov <89bo...@gmail.com> wrote:
>
>> Hi all!
>>
>> How can I create a Source[Source[String]] that periodically emits a
>> Source[String] at the specified interval, but only if the previous
>> Source[String] is complete?
>>
>
Can you specify this further? Also I would be happy to know the actual
use-case, because nesting streams is usually a road to much pain if not done
carefully, so I would first investigate whether there is a graph alternative for
the use-case.

-Endre



>
>
>
>
> --
> Martynas Mickevičius
> Typesafe – Reactive Apps on the JVM
>
>



Re: [akka-user] Running with akka-remote: ClassNotFound

2015-01-23 Thread Martynas Mickevičius
Hi Ashesh,

how are you running this application?

If you are packaging it manually, are you sure that the akka-* jars are on the
classpath?
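
Two common ways to get them there, sketched below (the sbt-assembly version and
the jar path are assumptions; the reference.conf merge rule matters because
every Akka module ships its own reference.conf):

Option 1 - run through sbt so the managed classpath is used:

  sbt run

Option 2 - build a fat jar with sbt-assembly. In project/plugins.sbt:

  addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.12.0")

and in build.sbt, concatenate the Akka reference.conf files instead of
dropping them:

  assemblyMergeStrategy in assembly := {
    case "reference.conf" => MergeStrategy.concat
    case x =>
      val oldStrategy = (assemblyMergeStrategy in assembly).value
      oldStrategy(x)
  }

then run the resulting jar, e.g.
java -jar target/scala-2.11/random-app-assembly-1.0.jar.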

On Thu, Jan 22, 2015 at 12:51 PM, Ashesh Ambasta 
wrote:

> The setup is quite basic:
>
> Here's my application.conf
> akka {
>   actor {
> provider = "akka.remote.RemoteActorRefProvider"
>   }
>   remote {
> enabled-transports = ["akka.remote.netty.tcp"]
> netty.tcp {
>   hostname = "127.0.0.1"
>   port = 2552
> }
>   }
> }
>
>
> And my build.sbt;
> name := "random-app"
>
>
> version := "1.0"
>
>
> scalaVersion := "2.11.5"
>
>
> libraryDependencies ++= Seq(
>   "com.typesafe.akka" %% "akka-actor" % "2.3.8",
>   "com.typesafe.akka" %% "akka-remote" % "2.3.8",
>   "centralapp-core" %% "centralapp-core" % "1.0",
>   "io.spray" %% "spray-can" % "1.3.2",
>   "io.spray" %% "spray-json" % "1.3.2",
>   "io.spray" %% "spray-client" % "1.3.2"
> )
>
>
> However, when trying to run the application, I get the following exception:
> Exception in thread "main" java.lang.ClassNotFoundException: akka.remote.RemoteActorRefProvider
> Why?
>
>



-- 
Martynas Mickevičius
Typesafe – Reactive Apps on the JVM



Re: [akka-user] Re: [akka-stream] Junctions documentation and specs

2015-01-23 Thread Endre Varga
On Fri, Jan 23, 2015 at 7:02 AM, Alexey Romanchuk <
alexey.romanc...@gmail.com> wrote:

> Hey!
>
> Also, it is not straightforward how backpressure works with Broadcast.
> What if one of the outputs is busy and another one is requesting
> new elements?
>

It will backpressure and adapt the rate to the slower consumer. Usually
all elements are assumed to be non-dropping and properly backpressured
unless stated otherwise, but I guess it is better to document this
explicitly. You might want to look at this recipe to change this behavior,
too:
http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M2/scala/stream-cookbook.html#Dropping_broadcast

-Endre
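
The essence of that recipe, as a minimal sketch: decouple a slow output from the
broadcast rate with a dropping buffer (the element type, size and overflow
strategy are illustrative):

import akka.stream.OverflowStrategy
import akka.stream.scaladsl.Flow

// Placed between a Broadcast output and a slow sink: instead of backpressuring
// the whole broadcast, this branch drops its oldest buffered elements.
val droppingBranch = Flow[Int].buffer(16, OverflowStrategy.dropHead)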


> As far as I understand from sources (Broadcast and
> outputBunch.AllOfMarkedOutputs) broadcast will emit new elements
> downstream only when all substreams demand a new element. Is that correct?
> I think we should add this to the docs.
>
> Thanks!
>
>
> On Wednesday, 21 January 2015 at 9:01:00 UTC+6, Alexey Romanchuk
> wrote:
>
>> Hi again, hakkers!
>>
>> I cannot find any documentation about the conditions under which junctions
>> become completed and how junctions propagate errors. Something like "Merge
>> completes when all of its inputs complete". Also there are no such cases in
>> the unit tests.
>>
>> Am I missing something? Sure, this information can be found in the sources, but I
>> think it should be in the documentation.
>>
>



Re: [akka-user] Re: Inability to route through a proxy.

2015-01-23 Thread Martynas Mickevičius
Upcoming Akka 2.4 has NAT (container) traversal support implemented.

Until then you can run the Spark container with the --net=host flag to start a
container that uses the host network interface. However this has the limitation
that all containers need to be running on the same host.
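
Two hedged sketches of what this looks like in practice (the image name,
addresses and ports are illustrative; the bind-* keys are the Akka 2.4
additions for NAT'ed setups):

Pre-2.4 workaround, sharing the host network:

  docker run --net=host my-spark-master-image

Akka 2.4 style application.conf, advertising the external (proxy) address while
binding to the container-local one:

  akka.remote.netty.tcp {
    hostname = "10.254.118.158"    # address other nodes use to reach this system
    port = 7077
    bind-hostname = "192.168.0.2"  # address the container actually binds to
    bind-port = 7077
  }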

On Thu, Jan 22, 2015 at 11:51 PM, jay vyas 
wrote:

> To add some color.
>
> 1) When we run w/ -Dakka.remote.untrusted-mode=on, we see dropping
> message [class akka.actor.ActorSelectionMessage] for *unknown* recipient
>
> [Actor[akka.tcp://sparkMaster@10.254.230.67:7077/]] arriving at
> [akka.tcp://sparkMaster@10.254.230.67:7077] inbound addresses are
> [akka.tcp://sparkMaster@spark-master:7077]
>
> 2) When we run w/ -Dakka.remote.untrusted-mode=off, we see dropping
> message [class akka.actor.ActorSelectionMessage] for *non-local *recipient
>
>
> [Actor[akka.tcp://sparkMaster@10.254.118.158:7077/]] arriving at
> [akka.tcp://sparkMaster@10.254.118.158:7077] inbound addresses are
> [akka.tcp://sparkMaster@spark-master:7077]
>
> On Thursday, January 22, 2015 at 4:00:42 PM UTC-5, Tim St. Clair wrote:
>>
>> Greetings folks -
>>
>> I'm currently trying to run Spark master through a proxy and receiving an
>> error that I can't seem to bypass.
>>
>> ERROR EndpointWriter: dropping message [class akka.actor.ActorSelectionMessage]
>> for non-local recipient [Actor[akka.tcp://sparkMaster@10.254.118.158:7077/]]
>> arriving at [akka.tcp://sparkMaster@10.254.118.158:7077] inbound addresses are
>> [akka.tcp://sparkMaster@spark-master:7077]
>>
>> The spark-master is running inside a container which is on a 192.168
>> subnet, but all traffic from the slaves is routed via iptables through a
>> load-balanced proxy, 10.254.118.158.
>>
>> Is there any easy way to disable what appears to be IP validation?
>>
>> Cheers,
>> Tim
>>
>



-- 
Martynas Mickevičius
Typesafe – Reactive Apps on the JVM

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] How to serialize an Any?

2015-01-23 Thread Martynas Mickevičius
Hi Kevin,

do you want to serialize your custom value classes? In that case, add the
Serializable trait to the value class and write your own serializer for it.

All the built-in Scala types that extend AnyVal (Int, Double, Short, ...)
are serialized and sent over the wire as their java.lang counterparts.
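
A minimal sketch of what that looks like in practice (the boxing happens
automatically when an AnyVal is passed where an AnyRef is expected; the
system and message here are only for illustration):

import akka.actor.ActorSystem
import akka.serialization.SerializationExtension

val system = ActorSystem("demo")
val serialization = SerializationExtension(system)

val message: Any = 42                     // an Int, i.e. an AnyVal
val boxed = message.asInstanceOf[AnyRef]  // boxed to java.lang.Integer
val bytes = serialization.serialize(boxed).get
println(s"serialized as ${boxed.getClass.getName}, ${bytes.length} bytes")

system.shutdown()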



On Thu, Jan 22, 2015 at 11:46 PM, Kevin Esler  wrote:

> I am puzzled by something: Akka messages can be of type Any, yet
> akka.serialization.Serialization.serialize() requires an AnyRef as the
> type of the object to be serialized.
>
> So how does Akka serialize a message that is not an AnyRef?
>
> (Reason for asking: I plan to use Akka serialization and would like to
> apply it to values that may not be AnyRef.)
>



-- 
Martynas Mickevičius
Typesafe – Reactive Apps on the JVM

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] ClusterClient load balancing

2015-01-23 Thread Martynas Mickevičius
Hi Hengky,

what do you mean by load balancing? As you have already discovered, if
several actors are registered under the same path, a message for that path
will be sent to a randomly picked actor. That is random load balancing.

If you need more control, you can join the frontend node to the cluster and
use PubSub (which is actually used to implement ClusterClient) or Cluster
Sharding.
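
For reference, a hedged sketch of the random-distribution setup you describe
(Akka 2.3 akka-contrib API; ServiceActor and the host names/ports below are
placeholders, not taken from your deployment):

import akka.actor.ActorSystem
import akka.contrib.pattern.{ClusterClient, ClusterReceptionistExtension}

// On every backend node: register the same logical path with the receptionist.
// The receptionist then picks one of the registered actors at random per message.
val backend = ActorSystem("ClusterSystem")
val service = backend.actorOf(ServiceActor.props, "serviceA")
ClusterReceptionistExtension(backend).registerService(service)

// On the frontend node, outside the cluster:
val frontend = ActorSystem("FrontendSystem")
val initialContacts = Set(
  frontend.actorSelection("akka.tcp://ClusterSystem@host1:2552/user/receptionist"),
  frontend.actorSelection("akka.tcp://ClusterSystem@host2:2552/user/receptionist"))
val client = frontend.actorOf(ClusterClient.props(initialContacts), "clusterClient")
client ! ClusterClient.Send("/user/serviceA", "hello", localAffinity = false)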

On Fri, Jan 23, 2015 at 9:20 AM, Hengky Sucanda 
wrote:

> Hi Guys,
>
> I'm currently developing an application using Play and Akka, and am
> currently deploying one frontend node with multiple backend nodes. The
> frontend node connects to the backend using ClusterClient. My question is,
> does ClusterClient have a load balancing mechanism so it can properly
> distribute messages to the backend nodes? From what I have read in the
> docs, ClusterClient will send a message to a random destination when
> multiple entries match the path. Is there a way for me to load balance the
> frontend-to-backend message sending if I am using ClusterClient?
>



-- 
Martynas Mickevičius
Typesafe – Reactive Apps on the JVM

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] What is the best practice enforce Type Check for Actor during compilation

2015-01-23 Thread Martynas Mickevičius
Hi Sean,

a good practice is to define the messages an actor handles in the actor's
companion object. This is of course not enforced by the compiler in any way,
but it is a good code convention to follow.

If you want more compiler help, you may want to check out Typed Actors, or
if you want something fresh and exciting you can take a look at akka-typed,
which is still in PR form but is going to address the issue you are facing.
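
A minimal sketch of that convention (the names are illustrative); note that if
the companion later drops RequestA, every client that references
Service.RequestA stops compiling, which is exactly the feedback you are after:

import akka.actor.{Actor, Props}

object Service {
  // The message protocol lives next to the actor that owns it.
  sealed trait Request
  final case class RequestA(args: String) extends Request
  final case class RequestB(args: String) extends Request
  def props: Props = Props[Service]
}

class Service extends Actor {
  import Service._
  def receive: Receive = {
    case RequestA(args) => // doSomething()
    case RequestB(args) => // doAnotherThing()
  }
}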



On Fri, Jan 23, 2015 at 5:21 AM, Sean Zhong  wrote:

> Suppose we have a service actor:
>
> class Service extends Actor {
>   def receive: Receive = {
>     case RequestA(args) => doSomething()
>     case RequestB(args) => doAnotherThing()
>   }
> }
>
> // Some client is using RequestA indirectly, like this:
>
> class Source(request: RequestA)
>
> A client is trying to send RequestA to the service:
> class Client(source: Source) {
>   def query: Unit = {
>     service ? source.request
>   }
> }
>
> Then one day, Service decides to no longer support RequestA.
> The above code will still compile, as the client doesn't know the Service
> has changed its interface, and will still send the now-invalid command.
>
> Do you have a recommended coding practice to follow, or tools to use, so
> that it is easier to identify and track this kind of error?
>



-- 
Martynas Mickevičius
Typesafe – Reactive Apps on the JVM

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] Can not get distributed pub-sub extension to work...

2015-01-23 Thread Martynas Mickevičius
Hi Thiago,

have you seen the activator template "Akka Clustered PubSub with Scala!",
which is linked from that documentation page?
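
In case it helps, a condensed sketch of the Subscriber/Publisher pair along
the lines of that sample (Akka 2.3 akka-contrib API assumed; "content" is just
an example topic name):

import akka.actor.{Actor, ActorLogging}
import akka.contrib.pattern.{DistributedPubSubExtension, DistributedPubSubMediator}

class Subscriber extends Actor with ActorLogging {
  import DistributedPubSubMediator.{Subscribe, SubscribeAck}
  val mediator = DistributedPubSubExtension(context.system).mediator
  mediator ! Subscribe("content", self)  // register this actor for the topic

  def receive = {
    case SubscribeAck(Subscribe("content", None, `self`)) =>
      log.info("subscribed to 'content'")
    case msg: String =>
      log.info("got {}", msg)
  }
}

class Publisher extends Actor {
  import DistributedPubSubMediator.Publish
  val mediator = DistributedPubSubExtension(context.system).mediator

  def receive = {
    case msg: String => mediator ! Publish("content", msg)  // fan out to subscribers
  }
}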

On Fri, Jan 23, 2015 at 12:52 AM, Thiago Souza 
wrote:

> Hello all,
>
> I'm having a hard time trying to get the samples at
> http://doc.akka.io/docs/akka/snapshot/contrib/distributed-pub-sub.html to
> work.
>
> I've implemented the Subscriber and the Publisher as described in the
> article and created 2 systems with the following configurations:
>
> seed ActorSystem (where Publisher is registered):
> akka {
>   extensions = ["akka.contrib.pattern.DistributedPubSubExtension"]
>   actor {
>     provider = "akka.cluster.ClusterActorRefProvider"
>   }
>   remote {
>     log-remote-lifecycle-events = on
>     netty.tcp {
>       hostname = "127.0.0.1"
>       port = 2551
>     }
>   }
>   cluster {
>     seed-nodes = ["akka.tcp://ClusterSystem@127.0.0.1:2551"]
>     auto-down-unreachable-after = 10s
>   }
> }
>
> worker ActorSystem (where Subscriber is registered):
> akka {
>   extensions = ["akka.contrib.pattern.DistributedPubSubExtension"]
>   actor {
>     provider = "akka.cluster.ClusterActorRefProvider"
>   }
>   remote {
>     log-remote-lifecycle-events = on
>     netty.tcp {
>       hostname = "127.0.0.1"
>       port = 0
>     }
>   }
>   cluster {
>     seed-nodes = ["akka.tcp://ClusterSystem@127.0.0.1:2551"]
>     auto-down-unreachable-after = 10s
>   }
> }
>
> But after sending "hello world" to Publisher nothing is printed. Here
> is the full log:
>
> [INFO] [01/22/2015 20:43:59.631] [main] [Remoting] Starting remoting
> [INFO] [01/22/2015 20:43:59.771] [main] [Remoting] Remoting started;
> listening on addresses :[akka.tcp://ClusterSystem@127.0.0.1:2551]
> [INFO] [01/22/2015 20:43:59.772] [main] [Remoting] Remoting now listens
> on addresses: [akka.tcp://ClusterSystem@127.0.0.1:2551]
> [INFO] [01/22/2015 20:43:59.782] [main] [Cluster(akka://ClusterSystem)]
> Cluster Node [akka.tcp://ClusterSystem@127.0.0.1:2551] - Starting up...
> [INFO] [01/22/2015 20:43:59.847] [main] [Cluster(akka://ClusterSystem)]
> Cluster Node [akka.tcp://ClusterSystem@127.0.0.1:2551] - Registered cluster
> JMX MBean [akka:type=Cluster]
> [INFO] [01/22/2015 20:43:59.847] [main] [Cluster(akka://ClusterSystem)]
> Cluster Node [akka.tcp://ClusterSystem@127.0.0.1:2551] - Started up successfully
> [INFO] [01/22/2015 20:43:59.850]
> [ClusterSystem-akka.actor.default-dispatcher-4]
> [Cluster(akka://ClusterSystem)] Cluster Node
> [akka.tcp://ClusterSystem@127.0.0.1:2551] - Metrics will be retreived from
> MBeans, and may be incorrect on some platforms. To increase metric accuracy
> add the 'sigar.jar' to the classpath and the appropriate platform-specific
> native libary to 'java.library.path'. Reason:
> java.lang.ClassNotFoundException: org.hyperic.sigar.Sigar
> [INFO] [01/22/2015 20:43:59.851]
> [ClusterSystem-akka.actor.default-dispatcher-4]
> [Cluster(akka://ClusterSystem)] Cluster Node
> [akka.tcp://ClusterSystem@127.0.0.1:2551] - Metrics collection has started
> successfully
> [INFO] [01/22/2015 20:43:59.859]
> [ClusterSystem-akka.actor.default-dispatcher-14]
> [Cluster(akka://ClusterSystem)] Cluster Node
> [akka.tcp://ClusterSystem@127.0.0.1:2551] - Node
> [akka.tcp://ClusterSystem@127.0.0.1:2551] is JOINING, roles []
> [INFO] [01/22/2015 20:43:59.881] [main] [Remoting] Starting remoting
> [INFO] [01/22/2015 20:43:59.888] [main] [Remoting] Remoting started;
> listening on addresses :[akka.tcp://ClusterSystem@127.0.0.1:54670]
> [INFO] [01/22/2015 20:43:59.888] [main] [Remoting] Remoting now listens
> on addresses: [akka.tcp://ClusterSystem@127.0.0.1:54670]
> [INFO] [01/22/2015 20:43:59.889] [main] [Cluster(akka://ClusterSystem)]
> Cluster Node [akka.tcp://ClusterSystem@127.0.0.1:54670] - Starting up...
> [INFO] [01/22/2015 20:43:59.891] [main] [Cluster(akka://ClusterSystem)]
> Cluster Node [akka.tcp://ClusterSystem@127.0.0.1:54670] - Started up successfully
> [INFO] [01/22/2015 20:43:59.892]
> [ClusterSystem-akka.actor.default-dispatcher-4]
> [Cluster(akka://ClusterSystem)] Cluster Node
> [akka.tcp://ClusterSystem@127.0.0.1:54670] - Metrics will be retreived from
> MBeans, and may

Re: [akka-user] BalancingPool versus RoundRobinPool + BalancingDispatcher

2015-01-23 Thread Jean Helou
Hi Roland,


> The BalancingDispatcher is a very particular setup: a thread pool of size
> N with N actors that all pull from the same queue. Thread pool sizes are
> not changeable at runtime in Akka.
>

Thanks for the explanation. I can see why it wouldn't make sense to have a
resizable BalancingDispatcher.

I have taken good note of your remarks on the code and will integrate them
into my code.

However, this discussion made me realize that the balancing dispatcher is
not, in fact, suited to my use case.

I was hoping to use the work-stealing properties of the balancing dispatcher
to avoid having to provide my own custom implementation of the work pulling
pattern. I see now that I was mistaken and will have to roll my own,
buffering and all.
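
For what it's worth, a bare-bones sketch of what I have in mind (all names are
made up, and bounding of the buffer and error handling are left out):

import akka.actor.{Actor, ActorRef}
import scala.collection.mutable

case class Work(payload: String)
case object GimmeWork

class Master extends Actor {
  private val pending = mutable.Queue.empty[Work]          // the buffering part
  private val idleWorkers = mutable.Queue.empty[ActorRef]

  def receive = {
    case w: Work =>
      if (idleWorkers.nonEmpty) idleWorkers.dequeue() ! w
      else pending.enqueue(w)
    case GimmeWork =>
      if (pending.nonEmpty) sender() ! pending.dequeue()
      else idleWorkers.enqueue(sender())
  }
}

class Worker(master: ActorRef) extends Actor {
  override def preStart(): Unit = master ! GimmeWork       // pull the first item

  def receive = {
    case Work(payload) =>
      // ... process payload ...
      master ! GimmeWork                                   // pull the next item
  }
}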

Thanks again for your answers

Jean

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] Cluster unreachable and a lot of cluster connections

2015-01-23 Thread Patrik Nordwall
Johannes, I think you have some very good points regarding the
documentation. Would you mind creating an issue?

Thanks,
Patrik

On Fri, Jan 23, 2015 at 2:10 PM, Patrik Nordwall 
wrote:

>
>
> On Fri, Jan 23, 2015 at 2:09 PM, Caoyuan  wrote:
>
>>
>>
>> On Fri, Jan 23, 2015 at 7:06 PM, Patrik Nordwall <
>> patrik.nordw...@gmail.com> wrote:
>>
>>>
>>>
>>> On Fri, Jan 23, 2015 at 10:12 AM, Caoyuan  wrote:
>>>
 As per our experience on spray-socketio project, too many remote actor
 watching will cause the cluster quarantined very quickly.

 The default heartbeat interval for remote watching is:

 akka.remote {

watch-failure-detector {

  heartbeat-interval = 1 s

  threshold = 10.0

  acceptable-heartbeat-pause = 10 s

  unreachable-nodes-reaper-interval = 1s

  expected-response-after = 3 s

}

 }

 i.e, 1 second. Thus, when "I do use deathwatch on remote actors and
 the amount of deatchwatches I have is linear to the load", the amount
 of heartbeats that is sent per seconds could be mass.

>>>
>>> No, the number of heartbeat messages per seconds are ***NOT***
>>> influenced by how many actors you watch. The heartbeats are for monitoring
>>> the connection between two nodes.
>>>
>>
>> I looked the code again, Patrik is right, the heartbeats are sending
>> between nodes only. I'll reconsider the cause that we encountered before,
>> to see what happened when too many actors are remote watched.
>>
>> Thanks for the pointing out of my mistake .
>>
>
> no worries, thanks for sharing your observations
> /Patrik
>
>
>>
>>
>>> Also, note that akka.remote.watch-failure-detector is not used between
>>> nodes in the same cluster. That is akka.cluster.failure-detector
>>>
>>>
>>>
 You can try to increase the 'heartbeat-interval' to 10, 20, 30, or
 even 300... seconds to see if it can resolve your problem. But remember
 that 'akka.remote.heartbeat-interval' is a globe setting, so the
 better way is to write a custom heartbeat based remote death watching.

 /Caoyuan

 On Fri, Jan 23, 2015 at 3:39 PM, Johannes Berg 
 wrote:

> Thanks for the answers, this really explains a lot. I will go back to
> my abyss and rethink some things. See below some answers/comments.
>
> On Thursday, January 22, 2015 at 6:31:01 PM UTC+2, drewhk wrote:
>>
>> Hi Johannes,
>>
>> On Thu, Jan 22, 2015 at 4:53 PM, Johannes Berg 
>> wrote:
>>
>>>
>>> I will try that but it seems that will only help to a certain point
>>> and when I push the load further it will hit it again.
>>>
>>
>> There is no system message traffic between two Akka systems by
>> default, to have a system send system messages to another you either need
>> to use remote deployment or deathwatch on remote actors. Which one are 
>> you
>> using? What is the scenario?
>>
>
> I do use deathwatch on remote actors and the amount of deatchwatches I
> have is linear to the load I put on the system so that explains increased
> number of system messages based on load then I guess.
>
>
>>
>> The main issue is that whatever is the rate of system message
>> delivery we *cannot* backpressure remote deployment or how many watched
>> remote actors die. For any delivery buffer there is a large enough "mass
>> actor extinction event" that will just fill it up. You can increase the
>> buffer size though up to that point where you expect that a burst maximum
>> is present (for example you know the peak number of remote watched actors
>> and the peak rate of them dying).
>>
>
> Thinking about these features more closely I can see that these things
> may require acked delivery but I would have expected something that grows
> indefinately until outofmem like unbounded inboxes. It's not apparent from
> the documentation about deathwatching that you need to consider some 
> buffer
> size if you are watching very many actors that may be created or die at a
> very fast rate (maybe a note about this could be added to the docs?). A
> quick glance at the feature you don't expect it to be limited by anything
> else than normal actor message sending and receiving. Furthermore I
> wouldn't have expected a buffer overflow due to deathwatching would cause 
> a
> node to get quarantined and removed from the cluster, instead I would
> expect some deatchwatching to fail to work correctly. Causing the node to
> go down in case of a buffer overflow seems a bit dangerous considering 
> ddos
> attacks even though it maybe makes the system behave more consistently.
>
>
>>
>>
>>>
>>> I hit this within a minute after I put on the load which is a bit
>>> annoying to me. I'm fine wi

Re: [akka-user] Cluster unreachable and a lot of cluster connections

2015-01-23 Thread Patrik Nordwall
On Fri, Jan 23, 2015 at 2:09 PM, Caoyuan  wrote:

>
>
> On Fri, Jan 23, 2015 at 7:06 PM, Patrik Nordwall <
> patrik.nordw...@gmail.com> wrote:
>
>>
>>
>> On Fri, Jan 23, 2015 at 10:12 AM, Caoyuan  wrote:
>>
>>> As per our experience on spray-socketio project, too many remote actor
>>> watching will cause the cluster quarantined very quickly.
>>>
>>> The default heartbeat interval for remote watching is:
>>>
>>> akka.remote {
>>>
>>>watch-failure-detector {
>>>
>>>  heartbeat-interval = 1 s
>>>
>>>  threshold = 10.0
>>>
>>>  acceptable-heartbeat-pause = 10 s
>>>
>>>  unreachable-nodes-reaper-interval = 1s
>>>
>>>  expected-response-after = 3 s
>>>
>>>}
>>>
>>> }
>>>
>>> i.e, 1 second. Thus, when "I do use deathwatch on remote actors and the
>>> amount of deatchwatches I have is linear to the load", the amount of
>>> heartbeats that is sent per seconds could be mass.
>>>
>>
>> No, the number of heartbeat messages per seconds are ***NOT*** influenced
>> by how many actors you watch. The heartbeats are for monitoring the
>> connection between two nodes.
>>
>
> I looked the code again, Patrik is right, the heartbeats are sending
> between nodes only. I'll reconsider the cause that we encountered before,
> to see what happened when too many actors are remote watched.
>
> Thanks for the pointing out of my mistake .
>

no worries, thanks for sharing your observations
/Patrik


>
>
>> Also, note that akka.remote.watch-failure-detector is not used between
>> nodes in the same cluster. That is akka.cluster.failure-detector
>>
>>
>>
>>> You can try to increase the 'heartbeat-interval' to 10, 20, 30, or even
>>> 300... seconds to see if it can resolve your problem. But remember that
>>> 'akka.remote.heartbeat-interval' is a globe setting, so the better way
>>> is to write a custom heartbeat based remote death watching.
>>>
>>> /Caoyuan
>>>
>>> On Fri, Jan 23, 2015 at 3:39 PM, Johannes Berg 
>>> wrote:
>>>
 Thanks for the answers, this really explains a lot. I will go back to
 my abyss and rethink some things. See below some answers/comments.

 On Thursday, January 22, 2015 at 6:31:01 PM UTC+2, drewhk wrote:
>
> Hi Johannes,
>
> On Thu, Jan 22, 2015 at 4:53 PM, Johannes Berg 
> wrote:
>
>>
>> I will try that but it seems that will only help to a certain point
>> and when I push the load further it will hit it again.
>>
>
> There is no system message traffic between two Akka systems by
> default, to have a system send system messages to another you either need
> to use remote deployment or deathwatch on remote actors. Which one are you
> using? What is the scenario?
>

 I do use deathwatch on remote actors and the amount of deatchwatches I
 have is linear to the load I put on the system so that explains increased
 number of system messages based on load then I guess.


>
> The main issue is that whatever is the rate of system message delivery
> we *cannot* backpressure remote deployment or how many watched remote
> actors die. For any delivery buffer there is a large enough "mass actor
> extinction event" that will just fill it up. You can increase the buffer
> size though up to that point where you expect that a burst maximum is
> present (for example you know the peak number of remote watched actors and
> the peak rate of them dying).
>

 Thinking about these features more closely I can see that these things
 may require acked delivery but I would have expected something that grows
 indefinately until outofmem like unbounded inboxes. It's not apparent from
 the documentation about deathwatching that you need to consider some buffer
 size if you are watching very many actors that may be created or die at a
 very fast rate (maybe a note about this could be added to the docs?). A
 quick glance at the feature you don't expect it to be limited by anything
 else than normal actor message sending and receiving. Furthermore I
 wouldn't have expected a buffer overflow due to deathwatching would cause a
 node to get quarantined and removed from the cluster, instead I would
 expect some deatchwatching to fail to work correctly. Causing the node to
 go down in case of a buffer overflow seems a bit dangerous considering ddos
 attacks even though it maybe makes the system behave more consistently.


>
>
>>
>> I hit this within a minute after I put on the load which is a bit
>> annoying to me. I'm fine with it becoming unreachable as long as I can 
>> get
>> it back to reachable when it has crunched through the load.
>>
>
> That means a higher buffer size. If there is no sysmsg buffer size
> that can absorb your load then you have to rethink your remote
> deployment/watch strategy (whichever feature you use).
>

 Now that I know what's causing the

Re: [akka-user] Cluster unreachable and a lot of cluster connections

2015-01-23 Thread Caoyuan
On Fri, Jan 23, 2015 at 7:06 PM, Patrik Nordwall 
wrote:

>
>
> On Fri, Jan 23, 2015 at 10:12 AM, Caoyuan  wrote:
>
>> As per our experience on spray-socketio project, too many remote actor
>> watching will cause the cluster quarantined very quickly.
>>
>> The default heartbeat interval for remote watching is:
>>
>> akka.remote {
>>
>>watch-failure-detector {
>>
>>  heartbeat-interval = 1 s
>>
>>  threshold = 10.0
>>
>>  acceptable-heartbeat-pause = 10 s
>>
>>  unreachable-nodes-reaper-interval = 1s
>>
>>  expected-response-after = 3 s
>>
>>}
>>
>> }
>>
>> i.e, 1 second. Thus, when "I do use deathwatch on remote actors and the
>> amount of deatchwatches I have is linear to the load", the amount of
>> heartbeats that is sent per seconds could be mass.
>>
>
> No, the number of heartbeat messages per seconds are ***NOT*** influenced
> by how many actors you watch. The heartbeats are for monitoring the
> connection between two nodes.
>

I looked at the code again, and Patrik is right: the heartbeats are sent
between nodes only. I'll reconsider the cause of what we encountered before,
to see what happened when too many actors were remote watched.

Thanks for pointing out my mistake.


> Also, note that akka.remote.watch-failure-detector is not used between
> nodes in the same cluster. That is akka.cluster.failure-detector
>
>
>
>> You can try to increase the 'heartbeat-interval' to 10, 20, 30, or even
>> 300... seconds to see if it can resolve your problem. But remember that
>> 'akka.remote.heartbeat-interval' is a globe setting, so the better way
>> is to write a custom heartbeat based remote death watching.
>>
>> /Caoyuan
>>
>> On Fri, Jan 23, 2015 at 3:39 PM, Johannes Berg 
>> wrote:
>>
>>> Thanks for the answers, this really explains a lot. I will go back to my
>>> abyss and rethink some things. See below some answers/comments.
>>>
>>> On Thursday, January 22, 2015 at 6:31:01 PM UTC+2, drewhk wrote:

 Hi Johannes,

 On Thu, Jan 22, 2015 at 4:53 PM, Johannes Berg 
 wrote:

>
> I will try that but it seems that will only help to a certain point
> and when I push the load further it will hit it again.
>

 There is no system message traffic between two Akka systems by default,
 to have a system send system messages to another you either need to use
 remote deployment or deathwatch on remote actors. Which one are you using?
 What is the scenario?

>>>
>>> I do use deathwatch on remote actors and the amount of deatchwatches I
>>> have is linear to the load I put on the system so that explains increased
>>> number of system messages based on load then I guess.
>>>
>>>

 The main issue is that whatever is the rate of system message delivery
 we *cannot* backpressure remote deployment or how many watched remote
 actors die. For any delivery buffer there is a large enough "mass actor
 extinction event" that will just fill it up. You can increase the buffer
 size though up to that point where you expect that a burst maximum is
 present (for example you know the peak number of remote watched actors and
 the peak rate of them dying).

>>>
>>> Thinking about these features more closely I can see that these things
>>> may require acked delivery but I would have expected something that grows
>>> indefinately until outofmem like unbounded inboxes. It's not apparent from
>>> the documentation about deathwatching that you need to consider some buffer
>>> size if you are watching very many actors that may be created or die at a
>>> very fast rate (maybe a note about this could be added to the docs?). A
>>> quick glance at the feature you don't expect it to be limited by anything
>>> else than normal actor message sending and receiving. Furthermore I
>>> wouldn't have expected a buffer overflow due to deathwatching would cause a
>>> node to get quarantined and removed from the cluster, instead I would
>>> expect some deatchwatching to fail to work correctly. Causing the node to
>>> go down in case of a buffer overflow seems a bit dangerous considering ddos
>>> attacks even though it maybe makes the system behave more consistently.
>>>
>>>


>
> I hit this within a minute after I put on the load which is a bit
> annoying to me. I'm fine with it becoming unreachable as long as I can get
> it back to reachable when it has crunched through the load.
>

 That means a higher buffer size. If there is no sysmsg buffer size that
 can absorb your load then you have to rethink your remote deployment/watch
 strategy (whichever feature you use).

>>>
>>> Now that I know what's causing the increasing rate of system messages I
>>> certainly will rethink my deatchwatch stategy and/or limiting the load
>>> based on the configured buffer size.
>>>
>>>


> Will it still buffer up system messages even though it's unreachable?
>

 After quarantine ther

Re: [akka-user] Cluster unreachable and a lot of cluster connections

2015-01-23 Thread Patrik Nordwall
On Fri, Jan 23, 2015 at 12:34 PM, Johannes Berg  wrote:

> Did you forget a NOT there? Did you mean "No, the number of heartbeat
> messages per seconds are NOT influenced by how many actors you watch."?
>

Indeed, thanks!


>
> Increasing akka.remote.system-message-buffer-size to 1 did solve the
> problem for the load I'm pushing at the system now.
>
> On Friday, January 23, 2015 at 1:06:44 PM UTC+2, Patrik Nordwall wrote:
>>
>>
>>
>> On Fri, Jan 23, 2015 at 10:12 AM, Caoyuan  wrote:
>>
>> As per our experience on spray-socketio project, too many remote actor
>> watching will cause the cluster quarantined very quickly.
>>
>> The default heartbeat interval for remote watching is:
>>
>> akka.remote {
>>
>>watch-failure-detector {
>>
>>  heartbeat-interval = 1 s
>>
>>  threshold = 10.0
>>
>>  acceptable-heartbeat-pause = 10 s
>>
>>  unreachable-nodes-reaper-interval = 1s
>>
>>  expected-response-after = 3 s
>>
>>}
>>
>> }
>>
>> i.e, 1 second. Thus, when "I do use deathwatch on remote actors and the
>> amount of deatchwatches I have is linear to the load", the amount of
>> heartbeats that is sent per seconds could be mass.
>>
>>
>> No, the number of heartbeat messages per seconds are influenced by how
>> many actors you watch. The heartbeats are for monitoring the connection
>> between two nodes.
>>
>> Also, note that akka.remote.watch-failure-detector is not used between
>> nodes in the same cluster. That is akka.cluster.failure-detector
>>
>>
>>
>> You can try to increase the 'heartbeat-interval' to 10, 20, 30, or even
>> 300... seconds to see if it can resolve your problem. But remember that
>> 'akka.remote.heartbeat-interval' is a globe setting, so the better way
>> is to write a custom heartbeat based remote death watching.
>>
>> /Caoyuan
>>
>> On Fri, Jan 23, 2015 at 3:39 PM, Johannes Berg  wrote:
>>
>> Thanks for the answers, this really explains a lot. I will go back to my
>> abyss and rethink some things. See below some answers/comments.
>>
>> On Thursday, January 22, 2015 at 6:31:01 PM UTC+2, drewhk wrote:
>>
>> Hi Johannes,
>>
>> On Thu, Jan 22, 2015 at 4:53 PM, Johannes Berg  wrote:
>>
>>
>> I will try that but it seems that will only help to a certain point and
>> when I push the load further it will hit it again.
>>
>>
>> There is no system message traffic between two Akka systems by default,
>> to have a system send system messages to another you either need to use
>> remote deployment or deathwatch on remote actors. Which one are you using?
>> What is the scenario?
>>
>>
>> I do use deathwatch on remote actors and the amount of deatchwatches I
>> have is linear to the load I put on the system so that explains increased
>> number of system messages based on load then I guess.
>>
>>
>>
>> The main issue is that whatever is the rate of system message delivery we
>> *cannot* backpressure remote deployment or how many watched remote actors
>> die. For any delivery buffer there is a large enough "mass actor extinction
>> event" that will just fill it up. You can increase the buffer size though
>> up to that point where you expect that a burst maximum is present (for
>> example you know the peak number of remote watched actors and the peak rate
>> of them dying).
>>
>>
>> Thinking about these features more closely I can see that these things
>> may require acked delivery but I would have expected something that grows
>> indefinately until outofmem like unbounded inboxes. It's not apparent from
>> the documentation about deathwatching that you need to consider some buffer
>> size if you are watching very many actors that may be created or die at a
>> very fast rate (maybe a note about this could be added to the docs?). A
>> quick glance at the feature you don't expect it to be limited by anything
>> else than normal actor message sending and receiving. Furthermore I
>> wouldn't have expected a buffer overflow due to deathwatching would cause a
>> node to get quarantined and removed from the cluster, instead I would
>> expect some deatchwatching to fail to work correctly. Causing the node to
>> go down in case of a buffer overflow seems a bit dangerous considering ddos
>> attacks even though it maybe makes the system behave more consistently.
>>
>>
>>
>>
>>
>> I hit this within a minute after I put on the load which is a bit
>> annoying to me. I'm fine with it becoming unreachable as long as I can get
>> it back to reachable when it has crunched through the load.
>>
>>
>> That means a higher buffer size. If there is no sysmsg buffer size that
>> can absorb your load then you have to rethink your remote deployment/watch
>> strategy (whichever feature you use).
>>
>>
>> Now that I know what's causing the increasing rate of system messages I
>> certainly will rethink my deatchwatch stategy and/or limiting the load
>> based on the configured buffer size.
>>
>>
>>
>>
>> Will it still buffer up system messages even though it's unreachable?
>>
>>
>> After quarantine there is no 

Re: [akka-user] Cluster unreachable and a lot of cluster connections

2015-01-23 Thread Johannes Berg
Did you forget a NOT there? Did you mean "No, the number of heartbeat 
messages per seconds are NOT influenced by how many actors you watch."?

Increasing akka.remote.system-message-buffer-size to 1 did solve the 
problem for the load I'm pushing at the system now.
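
In case it is useful to others, a sketch of how the override can be applied
(20000 below is only an example value, not the figure I actually used or a
recommendation; size it for your own peak load):

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

val config = ConfigFactory
  .parseString("akka.remote.system-message-buffer-size = 20000")
  .withFallback(ConfigFactory.load())

val system = ActorSystem("ClusterSystem", config)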

On Friday, January 23, 2015 at 1:06:44 PM UTC+2, Patrik Nordwall wrote:
>
>
>
> On Fri, Jan 23, 2015 at 10:12 AM, Caoyuan 
> > wrote:
>
> As per our experience on spray-socketio project, too many remote actor 
> watching will cause the cluster quarantined very quickly. 
>
> The default heartbeat interval for remote watching is:
>
> akka.remote {
>
>watch-failure-detector {
>
>  heartbeat-interval = 1 s
>
>  threshold = 10.0
>
>  acceptable-heartbeat-pause = 10 s
>
>  unreachable-nodes-reaper-interval = 1s
>
>  expected-response-after = 3 s
>
>}
>
> }
>
> i.e, 1 second. Thus, when "I do use deathwatch on remote actors and the 
> amount of deatchwatches I have is linear to the load", the amount of 
> heartbeats that is sent per seconds could be mass.
>
>
> No, the number of heartbeat messages per seconds are influenced by how 
> many actors you watch. The heartbeats are for monitoring the connection 
> between two nodes.
>
> Also, note that akka.remote.watch-failure-detector is not used between 
> nodes in the same cluster. That is akka.cluster.failure-detector
>
>
>
> You can try to increase the 'heartbeat-interval' to 10, 20, 30, or even 
> 300... seconds to see if it can resolve your problem. But remember that 
> 'akka.remote.heartbeat-interval' is a globe setting, so the better way is 
> to write a custom heartbeat based remote death watching.
>
> /Caoyuan
>
> On Fri, Jan 23, 2015 at 3:39 PM, Johannes Berg  > wrote:
>
> Thanks for the answers, this really explains a lot. I will go back to my 
> abyss and rethink some things. See below some answers/comments.
>
> On Thursday, January 22, 2015 at 6:31:01 PM UTC+2, drewhk wrote:
>
> Hi Johannes,
>
> On Thu, Jan 22, 2015 at 4:53 PM, Johannes Berg  wrote:
>
>
> I will try that but it seems that will only help to a certain point and 
> when I push the load further it will hit it again.
>
>
> There is no system message traffic between two Akka systems by default, to 
> have a system send system messages to another you either need to use remote 
> deployment or deathwatch on remote actors. Which one are you using? What is 
> the scenario?
>
>
> I do use deathwatch on remote actors and the amount of deatchwatches I 
> have is linear to the load I put on the system so that explains increased 
> number of system messages based on load then I guess.
>  
>
>
> The main issue is that whatever is the rate of system message delivery we 
> *cannot* backpressure remote deployment or how many watched remote actors 
> die. For any delivery buffer there is a large enough "mass actor extinction 
> event" that will just fill it up. You can increase the buffer size though 
> up to that point where you expect that a burst maximum is present (for 
> example you know the peak number of remote watched actors and the peak rate 
> of them dying).
>
>
> Thinking about these features more closely I can see that these things may 
> require acked delivery but I would have expected something that grows 
> indefinately until outofmem like unbounded inboxes. It's not apparent from 
> the documentation about deathwatching that you need to consider some buffer 
> size if you are watching very many actors that may be created or die at a 
> very fast rate (maybe a note about this could be added to the docs?). A 
> quick glance at the feature you don't expect it to be limited by anything 
> else than normal actor message sending and receiving. Furthermore I 
> wouldn't have expected a buffer overflow due to deathwatching would cause a 
> node to get quarantined and removed from the cluster, instead I would 
> expect some deatchwatching to fail to work correctly. Causing the node to 
> go down in case of a buffer overflow seems a bit dangerous considering ddos 
> attacks even though it maybe makes the system behave more consistently.
>  
>
>  
>
>
> I hit this within a minute after I put on the load which is a bit annoying 
> to me. I'm fine with it becoming unreachable as long as I can get it back 
> to reachable when it has crunched through the load. 
>
>
> That means a higher buffer size. If there is no sysmsg buffer size that 
> can absorb your load then you have to rethink your remote deployment/watch 
> strategy (whichever feature you use).
>
>
> Now that I know what's causing the increasing rate of system messages I 
> certainly will rethink my deatchwatch stategy and/or limiting the load 
> based on the configured buffer size.
>  
>
>  
>
> Will it still buffer up system messages even though it's unreachable? 
>
>
> After quarantine there is no system message delivery, everything is 
> dropped. There is no recovery from quarantine that is its purpose. If there 
> is any lost system message betwee

Re: [akka-user] [akka-stream] Junctions documentation and specs

2015-01-23 Thread Roland Kuhn
Hi Alexey,

you are right on all counts; would you mind opening a ticket and/or a PR?

Thanks,

Roland

> On 23 Jan 2015, at 07:02, Alexey Romanchuk wrote:
> 
> Hey!
> 
> Also, it is not straightforward how backpressure works with Broadcast. What
> if one of the outputs is busy and another one is requesting new elements? As
> far as I understand from the sources (Broadcast and
> outputBunch.AllOfMarkedOutputs), Broadcast will emit new elements downstream
> only when all substreams demand a new element. Is that correct? I think we
> should add this to the docs.
> 
> Thanks!
> 
> 
> On Wednesday, January 21, 2015 at 9:01:00 AM UTC+6, Alexey Romanchuk wrote:
> Hi again, hakkers!
> 
> I cannot find any documentation about the conditions under which junctions
> become completed and how junctions propagate errors. Something like "Merge
> completes when all of its inputs complete". Also, there are no such cases in
> the unit tests.
> 
> Am I missing something? Sure, this information can be found in the sources,
> but I think it should be in the documentation.
> 



Dr. Roland Kuhn
Akka Tech Lead
Typesafe – Reactive apps on the JVM.
twitter: @rolandkuhn
 

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


[akka-user] TCP Outgoing Connection Lifecycle

2015-01-23 Thread colestanfield
Hey all,

After making a StreamTcp().outgoingConnection(address) connection, how can 
I watch the connection for disconnects so that I can establish a new 
connection? I see a akka.io.Tcp$ConfirmedClosed$ and 
a akka.actor.Terminated getting logged in dead letters after the server 
restarts but no idea how I can get ahold of them or if there's a better way.

Thanks in advance,
Cole

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] Cluster unreachable and a lot of cluster connections

2015-01-23 Thread Patrik Nordwall
On Fri, Jan 23, 2015 at 10:12 AM, Caoyuan  wrote:

> As per our experience on spray-socketio project, too many remote actor
> watching will cause the cluster quarantined very quickly.
>
> The default heartbeat interval for remote watching is:
>
> akka.remote {
>
>watch-failure-detector {
>
>  heartbeat-interval = 1 s
>
>  threshold = 10.0
>
>  acceptable-heartbeat-pause = 10 s
>
>  unreachable-nodes-reaper-interval = 1s
>
>  expected-response-after = 3 s
>
>}
>
> }
>
> i.e, 1 second. Thus, when "I do use deathwatch on remote actors and the
> amount of deatchwatches I have is linear to the load", the amount of
> heartbeats that is sent per seconds could be mass.
>

No, the number of heartbeat messages per seconds are influenced by how many
actors you watch. The heartbeats are for monitoring the connection between
two nodes.

Also, note that akka.remote.watch-failure-detector is not used between
nodes in the same cluster. That is akka.cluster.failure-detector



> You can try to increase the 'heartbeat-interval' to 10, 20, 30, or even
> 300... seconds to see if it can resolve your problem. But remember that
> 'akka.remote.heartbeat-interval' is a globe setting, so the better way is
> to write a custom heartbeat based remote death watching.
>
> /Caoyuan
>
> On Fri, Jan 23, 2015 at 3:39 PM, Johannes Berg  wrote:
>
>> Thanks for the answers, this really explains a lot. I will go back to my
>> abyss and rethink some things. See below some answers/comments.
>>
>> On Thursday, January 22, 2015 at 6:31:01 PM UTC+2, drewhk wrote:
>>>
>>> Hi Johannes,
>>>
>>> On Thu, Jan 22, 2015 at 4:53 PM, Johannes Berg 
>>> wrote:
>>>

 I will try that but it seems that will only help to a certain point and
 when I push the load further it will hit it again.

>>>
>>> There is no system message traffic between two Akka systems by default,
>>> to have a system send system messages to another you either need to use
>>> remote deployment or deathwatch on remote actors. Which one are you using?
>>> What is the scenario?
>>>
>>
>> I do use deathwatch on remote actors and the amount of deatchwatches I
>> have is linear to the load I put on the system so that explains increased
>> number of system messages based on load then I guess.
>>
>>
>>>
>>> The main issue is that whatever is the rate of system message delivery
>>> we *cannot* backpressure remote deployment or how many watched remote
>>> actors die. For any delivery buffer there is a large enough "mass actor
>>> extinction event" that will just fill it up. You can increase the buffer
>>> size though up to that point where you expect that a burst maximum is
>>> present (for example you know the peak number of remote watched actors and
>>> the peak rate of them dying).
>>>
>>
>> Thinking about these features more closely I can see that these things
>> may require acked delivery but I would have expected something that grows
>> indefinately until outofmem like unbounded inboxes. It's not apparent from
>> the documentation about deathwatching that you need to consider some buffer
>> size if you are watching very many actors that may be created or die at a
>> very fast rate (maybe a note about this could be added to the docs?). A
>> quick glance at the feature you don't expect it to be limited by anything
>> else than normal actor message sending and receiving. Furthermore I
>> wouldn't have expected a buffer overflow due to deathwatching would cause a
>> node to get quarantined and removed from the cluster, instead I would
>> expect some deatchwatching to fail to work correctly. Causing the node to
>> go down in case of a buffer overflow seems a bit dangerous considering ddos
>> attacks even though it maybe makes the system behave more consistently.
>>
>>
>>>
>>>

 I hit this within a minute after I put on the load which is a bit
 annoying to me. I'm fine with it becoming unreachable as long as I can get
 it back to reachable when it has crunched through the load.

>>>
>>> That means a higher buffer size. If there is no sysmsg buffer size that
>>> can absorb your load then you have to rethink your remote deployment/watch
>>> strategy (whichever feature you use).
>>>
>>
>> Now that I know what's causing the increasing rate of system messages I
>> certainly will rethink my deatchwatch stategy and/or limiting the load
>> based on the configured buffer size.
>>
>>
>>>
>>>
 Will it still buffer up system messages even though it's unreachable?

>>>
>>> After quarantine there is no system message delivery, everything is
>>> dropped. There is no recovery from quarantine that is its purpose. If there
>>> is any lost system message between two systems (and here they are dropped
>>> due to the buffer being full) then they are in an undefined state,
>>> especially with remote deployment, so they quarantine each other.
>>>
>>
>> After quarantine I understand there's no system message delivery, but
>> when it's just unreachable i

Re: [akka-user] Cluster unreachable and a lot of cluster connections

2015-01-23 Thread Caoyuan
In our experience on the spray-socketio project, watching too many remote
actors will cause the cluster to be quarantined very quickly.

The default heartbeat settings for remote watching are:

akka.remote {
  watch-failure-detector {
    heartbeat-interval = 1 s
    threshold = 10.0
    acceptable-heartbeat-pause = 10 s
    unreachable-nodes-reaper-interval = 1s
    expected-response-after = 3 s
  }
}

i.e. a heartbeat interval of 1 second. Thus, when "I do use deathwatch on
remote actors and the amount of deatchwatches I have is linear to the load",
the amount of heartbeats sent per second could be massive.

You can try to increase the 'heartbeat-interval' to 10, 20, 30, or even
300... seconds to see if it resolves your problem. But remember that
'akka.remote.heartbeat-interval' is a global setting, so the better way is
to write a custom heartbeat-based remote death watch.

/Caoyuan

On Fri, Jan 23, 2015 at 3:39 PM, Johannes Berg  wrote:

> Thanks for the answers, this really explains a lot. I will go back to my
> abyss and rethink some things. See below some answers/comments.
>
> On Thursday, January 22, 2015 at 6:31:01 PM UTC+2, drewhk wrote:
>>
>> Hi Johannes,
>>
>> On Thu, Jan 22, 2015 at 4:53 PM, Johannes Berg  wrote:
>>
>>>
>>> I will try that but it seems that will only help to a certain point and
>>> when I push the load further it will hit it again.
>>>
>>
>> There is no system message traffic between two Akka systems by default,
>> to have a system send system messages to another you either need to use
>> remote deployment or deathwatch on remote actors. Which one are you using?
>> What is the scenario?
>>
>
> I do use deathwatch on remote actors and the amount of deatchwatches I
> have is linear to the load I put on the system so that explains increased
> number of system messages based on load then I guess.
>
>
>>
>> The main issue is that whatever is the rate of system message delivery we
>> *cannot* backpressure remote deployment or how many watched remote actors
>> die. For any delivery buffer there is a large enough "mass actor extinction
>> event" that will just fill it up. You can increase the buffer size though
>> up to that point where you expect that a burst maximum is present (for
>> example you know the peak number of remote watched actors and the peak rate
>> of them dying).
>>
>
> Thinking about these features more closely I can see that these things may
> require acked delivery but I would have expected something that grows
> indefinately until outofmem like unbounded inboxes. It's not apparent from
> the documentation about deathwatching that you need to consider some buffer
> size if you are watching very many actors that may be created or die at a
> very fast rate (maybe a note about this could be added to the docs?). A
> quick glance at the feature you don't expect it to be limited by anything
> else than normal actor message sending and receiving. Furthermore I
> wouldn't have expected a buffer overflow due to deathwatching would cause a
> node to get quarantined and removed from the cluster, instead I would
> expect some deatchwatching to fail to work correctly. Causing the node to
> go down in case of a buffer overflow seems a bit dangerous considering ddos
> attacks even though it maybe makes the system behave more consistently.
>
>
>>
>>
>>>
>>> I hit this within a minute after I put on the load which is a bit
>>> annoying to me. I'm fine with it becoming unreachable as long as I can get
>>> it back to reachable when it has crunched through the load.
>>>
>>
>> That means a higher buffer size. If there is no sysmsg buffer size that
>> can absorb your load then you have to rethink your remote deployment/watch
>> strategy (whichever feature you use).
>>
>
> Now that I know what's causing the increasing rate of system messages I
> certainly will rethink my deatchwatch stategy and/or limiting the load
> based on the configured buffer size.
>
>
>>
>>
>>> Will it still buffer up system messages even though it's unreachable?
>>>
>>
>> After quarantine there is no system message delivery, everything is
>> dropped. There is no recovery from quarantine that is its purpose. If there
>> is any lost system message between two systems (and here they are dropped
>> due to the buffer being full) then they are in an undefined state,
>> especially with remote deployment, so they quarantine each other.
>>
>
> After quarantine I understand there's no system message delivery, but when
> it's just unreachable it buffers them up, right? I think there should be a
> note about this in the documentation about deathwatching and remote
> deployment what buffer may need tweaking and what can happen if the buffer
> is overflown.
>
>
>>
>>
>>> At what rate are system messages typically sent?
>>>
>>
>> They are sent at the rate you are remote deploying or watching actors
>> remotely or at the rate remote watched actors die. On the wire it depends,
>> and user messages share the same TCP connection with the s